Updates from: 06/28/2024 01:20:31
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Manage User Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/manage-user-data.md
This article discusses how you can manage the user data in Azure Active Directory B2C (Azure AD B2C) by using the operations that are provided by the [Microsoft Graph API](/graph/use-the-api). Managing user data includes deleting or exporting data from audit logs. ## Delete user data
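As a sketch of what such an operation looks like, deleting a user through Microsoft Graph is a single REST call (token acquisition omitted; the object ID placeholder is yours to supply):

```bash
# Delete a B2C user by object ID via Microsoft Graph; requires a token
# with User.ReadWrite.All permission in $GRAPH_ACCESS_TOKEN.
curl -X DELETE "https://graph.microsoft.com/v1.0/users/<user-object-id>" \
  -H "Authorization: Bearer $GRAPH_ACCESS_TOKEN"
```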
active-directory-b2c Quickstart Native App Desktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/quickstart-native-app-desktop.md
Azure Active Directory B2C (Azure AD B2C) provides cloud identity management to keep your application, business, and customers protected. Azure AD B2C enables your applications to authenticate to social accounts and enterprise accounts using open standard protocols. In this quickstart, you use a Windows Presentation Foundation (WPF) desktop application to sign in using a social identity provider and call an Azure AD B2C protected web API. ## Prerequisites
advisor Advisor Alerts Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-alerts-arm.md
Last updated 06/29/2020
This article shows you how to set up an alert for new recommendations from Azure Advisor using an Azure Resource Manager template (ARM template). Whenever Azure Advisor detects a new recommendation for one of your resources, an event is stored in [Azure Activity log](../azure-monitor/essentials/platform-logs-overview.md). You can set up alerts for these events from Azure Advisor using a recommendation-specific alerts creation experience. You can select a subscription and optionally a resource group to specify the resources that you want to receive alerts on.
advisor Advisor Alerts Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-alerts-bicep.md
Last updated 04/26/2022
This article shows you how to set up an alert for new recommendations from Azure Advisor using Bicep. Whenever Azure Advisor detects a new recommendation for one of your resources, an event is stored in [Azure Activity log](../azure-monitor/essentials/platform-logs-overview.md). You can set up alerts for these events from Azure Advisor using a recommendation-specific alerts creation experience. You can select a subscription and optionally select a resource group to specify the resources that you want to receive alerts on.
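The same alert rule can also be created directly with the Azure CLI rather than an ARM or Bicep template. A minimal sketch, assuming a hypothetical resource group `myRG` and an existing action group `myActionGroup`:

```bash
# Create an activity log alert that fires when Advisor records a new recommendation.
az monitor activity-log alert create \
  --name "AdvisorNewRecommendation" \
  --resource-group "myRG" \
  --scope "/subscriptions/<subscription-id>" \
  --condition "category=Recommendation and operationName=Microsoft.Advisor/recommendations/available/action"

# Attach the existing action group so the alert notifies its recipients.
az monitor activity-log alert action-group add \
  --name "AdvisorNewRecommendation" \
  --resource-group "myRG" \
  --action-group "myActionGroup"
```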
advisor Advisor Reference Performance Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-reference-performance-recommendations.md
description: Full list of available performance recommendations in Advisor.
Previously updated : 3/22/2024 Last updated : 6/24/2024 # Performance recommendations
With the new Ev5 compute hardware, you can boost workload performance by 30%.
Learn more about [Azure Database for MySQL flexible server - OrcasMeruMySqlComputeSeriesUpgradeEv5 (Boost your workload performance by 30% with the new Ev5 compute hardware)](https://techcommunity.microsoft.com/t5/azure-database-for-mysql-blog/boost-azure-mysql-business-critical-flexible-server-performance/ba-p/3603698).
+### Increase the storage limit for Hyperscale (Citus) server group
-### Scale the storage limit for PostgreSQL server
-
-Our system shows that the server might be constrained because it is approaching limits for the currently provisioned storage values. Approaching the storage limits might result in degraded performance or in the server being moved to read-only mode. To ensure continued performance, we recommend increasing the provisioned storage amount or turning ON the "Auto-Growth" feature for automatic storage increases.
-
-Learn more about [PostgreSQL server - OrcasPostgreSqlStorageLimit (Scale the storage limit for PostgreSQL server)](https://aka.ms/postgresqlstoragelimits).
-
-### Scale the PostgreSQL server to higher SKU
-
-Our system shows that the server might be unable to support the connection requests because of the maximum supported connections for the given SKU, which might result in a large number of failed connection requests adversely affecting performance. To improve performance, we recommend moving to a higher memory SKU by increasing vCores or switching to Memory-Optimized SKUs.
-
-Learn more about [PostgreSQL server - OrcasPostgreSqlConcurrentConnection (Scale the PostgreSQL server to higher SKU)](https://aka.ms/postgresqlconnectionlimits).
-
-### Move your PostgreSQL server to Memory Optimized SKU
-
-Our system shows that there is high churn in the buffer pool for this server, which can result in slower query performance and increased IOPS. To improve performance, review your workload queries to identify opportunities to minimize memory consumed. If no such opportunity is found, we recommend moving to a higher SKU with more memory or increasing the storage size to get more IOPS.
-
-Learn more about [PostgreSQL server - OrcasPostgreSqlMemoryCache (Move your PostgreSQL server to Memory Optimized SKU)](https://aka.ms/postgresqlpricing).
-
-### Add a PostgreSQL Read Replica server
-
-Our system shows that you might have a read intensive workload running, which results in resource contention for this server. Resource contention can lead to slow query performance for the server. To improve performance, we recommend you add a read replica, and offload some of your read workloads to the replica.
+Our system shows that one or more nodes in the server group might be constrained because they are approaching limits for the currently provisioned storage values. This might result in degraded performance or in the server being moved to read-only mode. To ensure continued performance, we recommend increasing the provisioned disk space.
-Learn more about [PostgreSQL server - OrcasPostgreSqlReadReplica (Add a PostgreSQL Read Replica server)](https://aka.ms/postgresqlreadreplica).
+Learn more about [PostgreSQL server - OrcasPostgreSqlCitusStorageLimitHyperscaleCitus (Increase the storage limit for Hyperscale (Citus) server group)](/azure/postgresql/howto-hyperscale-scale-grow#increase-storage-on-nodes).
### Increase the PostgreSQL server vCores
-Our system shows that the CPU has been running under high utilization for an extended time period over the last seven days. High CPU utilization might lead to slow query performance. To improve performance, we recommend moving to a larger compute size.
-
-Learn more about [PostgreSQL server - OrcasPostgreSqlCpuOverload (Increase the PostgreSQL server vCores)](https://aka.ms/postgresqlpricing).
-
-### Improve PostgreSQL connection management
-
-Our system shows that your PostgreSQL server might not be managing connections efficiently, which can result in unnecessary resource consumption and overall higher application latency. To improve connection management, we recommend that you reduce the number of short-lived connections and eliminate unnecessary idle connections by configuring a server side connection-pooler, such as PgBouncer.
-
-Learn more about [PostgreSQL server - OrcasPostgreSqlConnectionPooling (Improve PostgreSQL connection management)](https://aka.ms/azure_postgresql_connection_pooling).
-
-### Improve PostgreSQL log performance
-
-Our system shows that your PostgreSQL server has been configured to output VERBOSE error logs. This setting can be useful for troubleshooting your database, but it can also result in reduced database performance. To improve performance, we recommend that you change the log_error_verbosity parameter to the DEFAULT setting.
-
-Learn more about [PostgreSQL server - OrcasPostgreSqlLogErrorVerbosity (Improve PostgreSQL log performance)](https://aka.ms/azure_postgresql_log_settings).
-
-### Optimize query statistics collection on an Azure Database for PostgreSQL
-
-Our system shows that your PostgreSQL server has been configured to track query statistics using the pg_stat_statements module. While useful for troubleshooting, it can also result in reduced server performance. To improve performance, we recommend that you change the pg_stat_statements.track parameter to NONE.
-
-Learn more about [PostgreSQL server - OrcasPostgreSqlStatStatementsTrack (Optimize query statistics collection on an Azure Database for PostgreSQL)](https://aka.ms/azure_postgresql_optimize_query_stats).
-
-### Optimize query store on an Azure Database for PostgreSQL when not troubleshooting
-
-Our system shows that your PostgreSQL database has been configured to track query performance using the pg_qs.query_capture_mode parameter. While troubleshooting, we suggest setting the pg_qs.query_capture_mode parameter to TOP or ALL. When not troubleshooting, we recommend that you set the pg_qs.query_capture_mode parameter to NONE.
-
-Learn more about [PostgreSQL server - OrcasPostgreSqlQueryCaptureMode (Optimize query store on an Azure Database for PostgreSQL when not troubleshooting)](https://aka.ms/azure_postgresql_query_store).
-
-### Increase the storage limit for PostgreSQL Flexible Server
-
-Our system shows that the server might be constrained because it is approaching limits for the currently provisioned storage values. Approaching the storage limits might result in degraded performance or in the server being moved to read-only mode.
+Over the past 7 days, CPU usage met at least one of the following conditions: above 90% for 2 or more hours, above 50% for 50% of the time, or at maximum usage for 20% of the time. High CPU utilization can lead to slow query performance. To improve performance, we recommend moving your server to a larger SKU with higher compute.
+Learn more about [Azure Database for PostgreSQL flexible server - Upscale Server SKU for PostgreSQL on Azure Database](/azure/postgresql/flexible-server/concepts-compute).
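A compute scale-up of this kind can be performed with the Azure CLI. A minimal sketch, assuming a hypothetical server `myserver` in resource group `myRG`; the target SKU shown is illustrative:

```bash
# Move the flexible server to a larger compute size (example SKU shown).
az postgres flexible-server update \
  --resource-group myRG \
  --name myserver \
  --tier GeneralPurpose \
  --sku-name Standard_D8ds_v4
```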
-Learn more about [PostgreSQL server - OrcasPostgreSqlFlexibleServerStorageLimit (Increase the storage limit for PostgreSQL Flexible Server)](https://aka.ms/azure_postgresql_flexible_server_limits).
-
-#### Optimize logging settings by setting LoggingCollector to -1
-
-Optimize logging settings by setting LoggingCollector to -1
-
-Learn more about [Logs in Azure Database for PostgreSQL - Single Server](/azure/postgresql/single-server/concepts-server-logs#configure-logging).
-
-#### Optimize logging settings by setting LogDuration to OFF
-
-Optimize logging settings by setting LogDuration to OFF
-
-Learn more about [Logs in Azure Database for PostgreSQL - Single Server](/azure/postgresql/single-server/concepts-server-logs#configure-logging).
-
-#### Optimize logging settings by setting LogStatement to NONE
-
-Optimize logging settings by setting LogStatement to NONE
-
-Learn more about [Logs in Azure Database for PostgreSQL - Single Server](/azure/postgresql/single-server/concepts-server-logs#configure-logging).
-
-#### Optimize logging settings by setting ReplaceParameter to OFF
-
-Optimize logging settings by setting ReplaceParameter to OFF
-
-Learn more about [Logs in Azure Database for PostgreSQL - Single Server](/azure/postgresql/single-server/concepts-server-logs#configure-logging).
-
-#### Optimize logging settings by setting LoggingCollector to OFF
+### Optimize log_statement settings for PostgreSQL on Azure Database
-Optimize logging settings by setting LoggingCollector to OFF
+Our system shows that you have log_statement enabled. For better performance, set it to NONE.
-Learn more about [Logs in Azure Database for PostgreSQL - Single Server](/azure/postgresql/single-server/concepts-server-logs#configure-logging).
+Learn more about [Azure Database for PostgreSQL flexible server - Optimize log_statement settings for PostgreSQL on Azure Database](/azure/postgresql/flexible-server/concepts-logging).
-### Increase the storage limit for Hyperscale (Citus) server group
+### Optimize log_duration settings for PostgreSQL on Azure Database
-Our system shows that one or more nodes in the server group might be constrained because they are approaching limits for the currently provisioned storage values. This might result in degraded performance or in the server being moved to read-only mode. To ensure continued performance, we recommend increasing the provisioned disk space.
+You may experience performance degradation due to logging settings. To optimize these settings, set the log_duration server parameter to OFF.
-Learn more about [PostgreSQL server - OrcasPostgreSqlCitusStorageLimitHyperscaleCitus (Increase the storage limit for Hyperscale (Citus) server group)](/azure/postgresql/howto-hyperscale-scale-grow#increase-storage-on-nodes).
+Learn more about [Azure Database for PostgreSQL flexible server - Optimize log_duration settings for PostgreSQL on Azure Database](/azure/postgresql/flexible-server/concepts-logging).
-### Optimize log_statement settings for PostgreSQL on Azure Database
+### Optimize log_min_duration settings for PostgreSQL on Azure Database
-Our system shows that you have log_statement enabled, for better performance set it to NONE
+Your log_min_duration_statement server parameter is set to less than 60,000 ms (1 minute), which can lead to potential performance degradation. You can optimize logging settings by setting the log_min_duration_statement parameter to -1.
-Learn more about [Azure Database for PostgreSQL flexible server - OrcasMeruMeruLogStatement (Optimize log_statement settings for PostgreSQL on Azure Database)](../postgresql/flexible-server/concepts-logging.md).
+Learn more about [Azure Database for PostgreSQL flexible server - Optimize log_min_duration settings for PostgreSQL on Azure Database](/azure/postgresql/flexible-server/concepts-logging).
-### Increase the work_mem to avoid excessive disk spilling from sort and hash
+### Optimize log_error_verbosity settings for PostgreSQL on Azure Database
-Our system shows that the configuration work_mem is too small for your PostgreSQL server, resulting in disk spilling and degraded query performance. We recommend increasing the work_mem limit for the server, which helps to reduce the scenarios when the sort or hash happens on disk and improves the overall query performance.
+Your server has been configured to output VERBOSE error logs. This can be useful for troubleshooting your database, but it can also result in reduced database performance. To improve performance, we recommend changing the log_error_verbosity server parameter to the DEFAULT setting.
-Learn more about [Azure Database for PostgreSQL flexible server - OrcasMeruMeruWorkMem (Increase the work_mem to avoid excessive disk spilling from sort and hash)](https://aka.ms/runtimeconfiguration).
+Learn more about [Azure Database for PostgreSQL flexible server - Optimize log_error_verbosity settings for PostgreSQL on Azure Database](/azure/postgresql/flexible-server/concepts-logging).
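The four logging recommendations above map to server parameters that can be changed with the Azure CLI. A minimal sketch, assuming a hypothetical flexible server `myserver` in resource group `myRG`:

```bash
# Apply the recommended logging settings from the preceding sections.
az postgres flexible-server parameter set -g myRG -s myserver --name log_statement --value none
az postgres flexible-server parameter set -g myRG -s myserver --name log_duration --value off
az postgres flexible-server parameter set -g myRG -s myserver --name log_min_duration_statement --value "-1"
az postgres flexible-server parameter set -g myRG -s myserver --name log_error_verbosity --value default
```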
-### Improve PostgreSQL - Flexible Server performance by enabling Intelligent tuning
+### Identify if checkpoints are happening too often to improve PostgreSQL - Flexible Server performance
-Our system suggests that you can improve storage performance by enabling Intelligent tuning
+Your server is encountering checkpoints too frequently. To resolve the issue, we recommend increasing your max_wal_size server parameter.
-Learn more about [Azure Database for PostgreSQL flexible server - OrcasMeruIntelligentTuning (Improve PostgreSQL - Flexible Server performance by enabling Intelligent tuning)](../postgresql/flexible-server/concepts-intelligent-tuning.md).
+Learn more about [Azure Database for PostgreSQL flexible server - Increase max_wal_size](/azure/postgresql/flexible-server/server-parameters-table-write-ahead-logcheckpoints?pivots=postgresql-16#max_wal_size).
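As a sketch, the parameter can be raised with the Azure CLI. The value below (in megabytes) is illustrative, and the right size depends on your workload; server and group names are hypothetical:

```bash
# Increase max_wal_size to reduce checkpoint frequency (example value).
az postgres flexible-server parameter set -g myRG -s myserver --name max_wal_size --value 4096
```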
-### Optimize log_duration settings for PostgreSQL on Azure Database
+### Identify inactive Logical Replication Slots to improve PostgreSQL - Flexible Server performance
-Our system shows that you have log_duration enabled, for better performance, set it to OFF
+Your server may have inactive logical replication slots, which can result in degraded server performance and availability. We recommend deleting inactive replication slots or consuming the changes from the slots so that the Log Sequence Number (LSN) advances closer to the current LSN of the server.
-Learn more about [Azure Database for PostgreSQL flexible server - OrcasMeruMeruLogDuration (Optimize log_duration settings for PostgreSQL on Azure Database)](../postgresql/flexible-server/concepts-logging.md).
+Learn more about [Azure Database for PostgreSQL flexible server - Unused/inactive Logical Replication Slots](/azure/postgresql/flexible-server/how-to-autovacuum-tuning#unused-replication-slots).
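A minimal psql sketch for inspecting slots, assuming a connection string in `$PGCONN` and a hypothetical slot name:

```bash
# List replication slots; inactive ones show active = f.
psql "$PGCONN" -c "SELECT slot_name, active, restart_lsn FROM pg_replication_slots;"

# Drop an inactive slot (hypothetical name) once it's confirmed unused.
psql "$PGCONN" -c "SELECT pg_drop_replication_slot('inactive_slot_name');"
```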
-### Optimize log_min_duration settings for PostgreSQL on Azure Database
+### Identify long-running transactions to improve PostgreSQL - Flexible Server performance
-Our system shows that you have log_min_duration enabled, for better performance, set it to -1
+There are transactions running for more than 24 hours. Review the High CPU Usage -> Long Running Transactions section in the troubleshooting guides to identify and mitigate the issue.
-Learn more about [Azure Database for PostgreSQL flexible server - OrcasMeruMeruLogMinDuration (Optimize log_min_duration settings for PostgreSQL on Azure Database)](../postgresql/flexible-server/concepts-logging.md).
+Learn more about [Azure Database for PostgreSQL flexible server - Long Running transactions using Troubleshooting guides](/azure/postgresql/flexible-server/how-to-troubleshooting-guides).
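A minimal psql sketch for spotting such transactions, assuming a connection string in `$PGCONN`:

```bash
# List transactions that have been open for more than 24 hours.
psql "$PGCONN" -c \
  "SELECT pid, now() - xact_start AS duration, state, left(query, 60) AS query
     FROM pg_stat_activity
    WHERE xact_start IS NOT NULL AND now() - xact_start > interval '24 hours'
    ORDER BY xact_start;"
```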
-### Optimize pg_qs.query_capture_mode settings for PostgreSQL on Azure Database
+### Identify Orphaned Prepared transactions to improve PostgreSQL - Flexible Server performance
-Our system shows that you have pg_qs.query_capture_mode enabled, for better performance, set it to NONE
+There are orphaned prepared transactions. Roll back or commit the prepared transactions. Recommendations are shared in the Autovacuum Blockers -> Autovacuum Blockers section of the troubleshooting guides.
-Learn more about [Azure Database for PostgreSQL flexible server - OrcasMeruMeruQueryCaptureMode (Optimize pg_qs.query_capture_mode settings for PostgreSQL on Azure Database)](../postgresql/flexible-server/concepts-query-store-best-practices.md).
+Learn more about [Azure Database for PostgreSQL flexible server - Orphaned Prepared transactions using Troubleshooting guides](/azure/postgresql/flexible-server/how-to-troubleshooting-guides).
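A minimal psql sketch, assuming `$PGCONN` holds a connection string; the gid is hypothetical:

```bash
# List prepared transactions and when they were prepared.
psql "$PGCONN" -c "SELECT gid, prepared, owner, database FROM pg_prepared_xacts;"

# Resolve an orphaned one by gid (use COMMIT PREPARED if it should be kept).
psql "$PGCONN" -c "ROLLBACK PREPARED 'example_gid';"
```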
-### Optimize PostgreSQL performance by enabling PGBouncer
+### Identify Transaction Wraparound to improve PostgreSQL - Flexible Server performance
-Our system shows that you can improve PostgreSQL performance by enabling PGBouncer
+The server has crossed 50% of the transaction wraparound limit (1 billion transactions). Refer to the recommendations shared in the Autovacuum Blockers -> Emergency AutoVacuum and Wraparound section of the troubleshooting guides.
-Learn more about [Azure Database for PostgreSQL flexible server - OrcasMeruOrcasPostgreSQLConnectionPooling (Optimize PostgreSQL performance by enabling PGBouncer)](../postgresql/flexible-server/concepts-pgbouncer.md).
+Learn more about [Azure Database for PostgreSQL flexible server - Transaction Wraparound using Troubleshooting guides](/azure/postgresql/flexible-server/how-to-troubleshooting-guides).
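A minimal psql sketch for checking wraparound headroom, assuming `$PGCONN` holds a connection string:

```bash
# age(datfrozenxid) approaching ~2 billion signals wraparound risk; autovacuum
# turns aggressive well before that (default autovacuum_freeze_max_age is 200 million).
psql "$PGCONN" -c "SELECT datname, age(datfrozenxid) AS xid_age FROM pg_database ORDER BY xid_age DESC;"
```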
-### Optimize log_error_verbosity settings for PostgreSQL on Azure Database
+### Identify High Bloat Ratio to improve PostgreSQL - Flexible Server performance
-Our system shows that you have log_error_verbosity enabled, for better performance, set it to DEFAULT
+The server has a high bloat ratio, where dead tuples / (live tuples + dead tuples) is greater than 80%. Refer to the recommendations shared in the Autovacuum Monitoring section of the troubleshooting guides.
-Learn more about [Azure Database for PostgreSQL flexible server - OrcasMeruMeruLogErrorVerbosity (Optimize log_error_verbosity settings for PostgreSQL on Azure Database)](../postgresql/flexible-server/concepts-logging.md).
+Learn more about [Azure Database for PostgreSQL flexible server - High Bloat Ratio using Troubleshooting guides](/azure/postgresql/flexible-server/how-to-troubleshooting-guides).
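A minimal psql sketch that estimates the ratio from tuple statistics, assuming `$PGCONN` holds a connection string:

```bash
# Approximate per-table bloat ratio: dead tuples / (live + dead tuples).
psql "$PGCONN" -c \
  "SELECT relname, n_dead_tup,
          round(100.0 * n_dead_tup / nullif(n_live_tup + n_dead_tup, 0), 1) AS bloat_pct
     FROM pg_stat_user_tables
    ORDER BY bloat_pct DESC NULLS LAST
    LIMIT 10;"
```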
### Increase the storage limit for Hyperscale (Citus) server group
Learn more about [Hyperscale (Citus) server group - MarlinStorageLimitRecommenda
### Migrate your database from SSPG to FSPG
-Consider our new offering, Azure Database for PostgreSQL Flexible Server, which provides richer capabilities such as zone resilient HA, predictable performance, maximum control, custom maintenance window, cost optimization controls, and simplified developer experience.
-
-Learn more about [Azure Database for PostgreSQL flexible server - OrcasPostgreSqlMeruMigration (Migrate your database from SSPG to FSPG)](../postgresql/how-to-upgrade-using-dump-and-restore.md).
-
-### Move your PostgreSQL Flexible Server to Memory Optimized SKU
-
-Our system shows that there is high churn in the buffer pool for this server, resulting in slower query performance and increased IOPS. To improve performance, review your workload queries to identify opportunities to minimize memory consumed. If no such opportunity is found, we recommend moving to a higher SKU with more memory or increasing the storage size to get more IOPS.
+Consider our new offering, Azure Database for PostgreSQL Flexible Server, which provides richer capabilities such as zone resilient HA, predictable performance, maximum control, custom maintenance window, cost optimization controls, and simplified developer experience.
-Learn more about [PostgreSQL server - OrcasMeruMemoryUpsell (Move your PostgreSQL Flexible Server to Memory Optimized SKU)](https://aka.ms/azure_postgresql_flexible_server_pricing).
+Learn more about [Azure Database for PostgreSQL flexible server - OrcasPostgreSqlMeruMigration (Migrate your database from SSPG to FSPG)](/azure/postgresql/how-to-upgrade-using-dump-and-restore).
### Improve your Cache and application performance when running with high network bandwidth
ai-services Luis Traffic Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-traffic-manager.md
The client-application has to manage the traffic across the keys. LUIS doesn't d
This article explains how to manage the traffic across keys with Azure [Traffic Manager][traffic-manager-marketing]. You must already have a trained and published LUIS app. If you do not have one, follow the Prebuilt domain [quickstart](luis-get-started-create-app.md). ## Connect to PowerShell in the Azure portal In the [Azure portal](https://portal.azure.com), open the PowerShell window. The icon for the PowerShell window is the **>_** in the top navigation bar. By using PowerShell from the portal, you get the latest PowerShell version and you are authenticated. PowerShell in the portal requires an [Azure Storage](https://azure.microsoft.com/services/storage/) account.
ai-services Luis User Privacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-user-privacy.md
Delete customer data to ensure privacy and compliance.
## Summary of customer data request features Language Understanding Intelligent Service (LUIS) preserves customer content to operate the service, but the LUIS user has full control over viewing, exporting, and deleting their data. This can be done through the LUIS web [portal](luis-reference-regions.md) or the [LUIS Authoring (also known as Programmatic) APIs](/rest/api/cognitiveservices-luis/authoring/operation-groups?view=rest-cognitiveservices-luis-authoring-v3.0-preview&preserve-view=true). Customer content is stored encrypted in Microsoft regional Azure storage and includes:
ai-services Cognitive Services Virtual Networks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/cognitive-services-virtual-networks.md
An application that accesses an Azure AI services resource when network rules ar
> > Requests that are blocked include those from other Azure services, from the Azure portal, and from logging and metrics services. ## Scenarios
ai-services Computer Vision How To Install Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/computer-vision-how-to-install-containers.md
Previously updated : 08/29/2023 Last updated : 06/26/2024 keywords: on-premises, OCR, Docker, container # Install Azure AI Vision 3.2 GA Read OCR container
-Containers enable you to run the Azure AI Vision APIs in your own environment. Containers are great for specific security and data governance requirements. In this article you'll learn how to download, install, and run the Read (OCR) container.
+Containers let you run the Azure AI Vision APIs in your own environment and can help you meet specific security and data governance requirements. In this article you'll learn how to download, install, and run the Azure AI Vision Read (OCR) container.
-The Read container allows you to extract printed and handwritten text from images and documents with support for JPEG, PNG, BMP, PDF, and TIFF file formats. For more information, see the [Read API how-to guide](how-to/call-read-api.md).
+The Read container allows you to extract printed and handwritten text from images and documents in JPEG, PNG, BMP, PDF, and TIFF file formats. For more information on the Read service, see the [Read API how-to guide](how-to/call-read-api.md).
## What's new The `3.2-model-2022-04-30` GA version of the Read container is available with support for [164 languages and other enhancements](./whats-new.md#may-2022). If you're an existing customer, follow the [download instructions](#get-the-container-image) to get started.
The Read 3.2 OCR container is the latest GA model and provides:
* Choose text line output order from default to a more natural reading order for Latin languages only. * Text line classification as handwritten style or not for Latin languages only.
-If you're using Read 2.0 containers today, see the [migration guide](read-container-migration-guide.md) to learn about changes in the new versions.
+If you're using the Read 2.0 container today, see the [migration guide](read-container-migration-guide.md) to learn about changes in the new versions.
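As a sketch of what running the container looks like (the endpoint, key, and resource sizes below are illustrative; see the full article for supported options):

```bash
# Pull and run the Read 3.2 OCR container; Billing/ApiKey come from your
# Azure AI Vision resource (placeholder values shown).
docker pull mcr.microsoft.com/azure-cognitive-services/vision/read:3.2-model-2022-04-30
docker run --rm -it -p 5000:5000 --memory 16g --cpus 8 \
  mcr.microsoft.com/azure-cognitive-services/vision/read:3.2-model-2022-04-30 \
  Eula=accept \
  Billing="<endpoint-uri>" \
  ApiKey="<api-key>"
```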
## Prerequisites
ai-services Overview Ocr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/overview-ocr.md
Previously updated : 04/30/2024 Last updated : 06/26/2024
OCR or Optical Character Recognition is also referred to as text recognition or text extraction. Machine-learning-based OCR techniques allow you to extract printed or handwritten text from images such as posters, street signs, and product labels, as well as from documents like articles, reports, forms, and invoices. The text is typically extracted as words, text lines, and paragraphs or text blocks, enabling access to a digital version of the scanned text. This eliminates or significantly reduces the need for manual data entry.
-## How is OCR related to Intelligent Document Processing (IDP)?
-Intelligent Document Processing (IDP) uses OCR as its foundational technology to additionally extract structure, relationships, key-values, entities, and other document-centric insights with an advanced machine-learning based AI service like [Document Intelligence](../../ai-services/document-intelligence/overview.md). Document Intelligence includes a document-optimized version of **Read** as its OCR engine while delegating to other models for higher-end insights. If you are extracting text from scanned and digital documents, use [Document Intelligence Read OCR](../../ai-services/document-intelligence/concept-read.md).
## OCR engine
-Microsoft's **Read** OCR engine is composed of multiple advanced machine-learning based models supporting [global languages](./language-support.md). It can extract printed and handwritten text including mixed languages and writing styles. **Read** is available as cloud service and on-premises container for deployment flexibility. With the latest preview, it's also available as a synchronous API for single, non-document, image-only scenarios with performance enhancements that make it easier to implement OCR-assisted user experiences.
+Microsoft's **Read** OCR engine is composed of multiple advanced machine-learning based models supporting [global languages](./language-support.md). It can extract printed and handwritten text including mixed languages and writing styles. **Read** is available as cloud service and on-premises container for deployment flexibility. It's also available as a synchronous API for single, non-document, image-only scenarios with performance enhancements that make it easier to implement OCR-assisted user experiences.
> [!WARNING] > The Azure AI Vision legacy [OCR API in v3.2](/rest/api/computervision/recognize-printed-text?view=rest-computervision-v3.2) and [RecognizeText API in v2.1](/rest/api/computervision/recognize-printed-text/recognize-printed-text?view=rest-computervision-v2.1) operations are not recommended for use. [!INCLUDE [read-editions](includes/read-editions.md)]
+## How is OCR related to Intelligent Document Processing (IDP)?
+
+Intelligent Document Processing (IDP) uses OCR as its foundational technology to additionally extract structure, relationships, key-values, entities, and other document-centric insights with an advanced machine-learning based AI service like [Document Intelligence](../../ai-services/document-intelligence/overview.md). Document Intelligence includes a document-optimized version of **Read** as its OCR engine while delegating to other models for higher-end insights. If you are extracting text from scanned and digital documents, use [Document Intelligence Read OCR](../../ai-services/document-intelligence/concept-read.md).
+ ## How to use OCR Try out OCR by using Vision Studio. Then follow one of the links to the Read edition that best meets your requirements.
ai-services Client Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/quickstarts-sdk/client-library.md
Previously updated : 08/07/2023 Last updated : 06/26/2024 ms.devlang: csharp # ms.devlang: csharp, golang, java, javascript, python
ai-services Export Delete Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-moderator/export-delete-data.md
Content Moderator collects user data to operate the service, but customers have full control to view, export, and delete their data using the [Moderation APIs](./api-reference.md). For more information on how to export and delete user data in Content Moderator, see the following table.
ai-services Create Account Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/create-account-bicep.md
Follow this quickstart to create Azure AI services resource using Bicep.
Azure AI services are cloud-based artificial intelligence (AI) services that help developers build cognitive intelligence into applications without having direct AI or data science skills or knowledge. They are available through REST APIs and client library SDKs in popular development languages. Azure AI services enables developers to easily add cognitive features into their applications with cognitive solutions that can see, hear, speak, and analyze. ## Things to consider
ai-services Create Account Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/create-account-resource-manager-template.md
By creating an Azure AI services resource, you can:
* Access multiple AI services in Azure with a single key and endpoint. * Consolidate billing from the services that you use. ## Prerequisites
ai-services Export Delete Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/custom-vision-service/export-delete-data.md
Custom Vision collects user data to operate the service, but customers have full control to view and delete their data using the Custom Vision [Training APIs](https://go.microsoft.com/fwlink/?linkid=865446). To learn how to view or delete different kinds of user data in Custom Vision, see the following table:
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/summarization/overview.md
[!INCLUDE [availability](includes/regional-availability.md)]
-Summarization is one of the features offered by [Azure AI Language](../overview.md), a collection of machine learning and AI algorithms in the cloud for developing intelligent applications that involve written language. Use this article to learn more about this feature, and how to use it in your applications.
+Summarization is one feature offered by [Azure AI Language](../overview.md), a combination of generative large language models and task-optimized encoder models that offers summarization solutions with higher quality, cost efficiency, and lower latency.
+Use this article to learn more about this feature, and how to use it in your applications.
-Though the services are labeled document and conversation summarization, text summarization only accepts plain text blocks, and conversation summarization accept various speech artifacts in order for the model to learn more. If you want to process a conversation but only care about text, you can use text summarization for that scenario.
+Out of the box, the service provides summarization solutions for three types of content: plain text, conversations, and native documents. Text summarization accepts only plain text blocks. Conversation summarization accepts conversational input, including various speech audio signals, so that the model can effectively segment and summarize. Native document summarization works directly on documents in their native formats, such as Word and PDF.
# [Text summarization](#tab/text-summarization)
This documentation contains the following article types:
* **[Quickstarts](quickstart.md?pivots=rest-api&tabs=text-summarization)** are getting-started instructions to guide you through making requests to the service. * **[How-to guides](how-to/document-summarization.md)** contain instructions for using the service in more specific or customized ways.
-Text summarization uses natural language processing techniques to generate a summary for documents. There are two supported API approaches to automatic summarization: extractive and abstractive.
-
-Extractive summarization extracts sentences that collectively represent the most important or relevant information within the original content. Abstractive summarization generates a summary with concise, coherent sentences or words that aren't verbatim extract sentences from the original document. These features are designed to shorten content that could be considered too long to read.
+These features are designed to shorten content that could be considered too long to read.
## Key features for text summarization
-There are two aspects of text summarization this API provides:
+Text summarization uses natural language processing techniques to generate a summary for plain text, which can come from a document, a conversation, or any other text. The API provides two approaches to summarization:
-* [**Extractive summarization**](how-to/document-summarization.md#try-text-extractive-summarization): Produces a summary by extracting salient sentences within the document.
+* [**Extractive summarization**](how-to/document-summarization.md#try-text-extractive-summarization): Produces a summary by extracting salient sentences within the document, together with the positional information of those sentences.
* Multiple extracted sentences: These sentences collectively convey the main idea of the document. They're original sentences extracted from the input document's content.
- * Rank score: The rank score indicates how relevant a sentence is to a document's main topic. Text summarization ranks extracted sentences, and you can determine whether they're returned in the order they appear, or according to their rank.
- * Multiple returned sentences: Determine the maximum number of sentences to be returned. For example, if you request a three-sentence summary extractive summarization returns the three highest scored sentences.
+ * Rank score: The rank score indicates how relevant a sentence is to the main topic. Text summarization ranks extracted sentences, and you can determine whether they're returned in the order they appear, or according to their rank.
+ For example, if you request a three-sentence summary, extractive summarization returns the three highest scored sentences.
* Positional information: The start position and length of extracted sentences.
-* [**Abstractive summarization**](how-to/document-summarization.md#try-text-abstractive-summarization): Generates a summary that doesn't use the same words as in the document, but captures the main idea.
- * Summary texts: Abstractive summarization returns a summary for each contextual input range within the document. A long document can be segmented so multiple groups of summary texts can be returned with their contextual input range.
- * Contextual input range: The range within the input document that was used to generate the summary text.
+* [**Abstractive summarization**](how-to/document-summarization.md#try-text-abstractive-summarization): Generates a summary with concise, coherent sentences or words that aren't verbatim extracts from the original document.
+ * Summary texts: Abstractive summarization returns a summary for each contextual input range. A long input can be segmented so multiple groups of summary texts can be returned with their contextual input range.
+ * Contextual input range: The range within the input that was used to generate the summary text.
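As a rough sketch of how such a request might be submitted through the Language REST API (the endpoint and key variables are placeholders, and the exact request shape can vary by API version):

```bash
# Submit an abstractive text summarization job; poll the URL returned in the
# operation-location response header for the result.
curl -X POST "$LANGUAGE_ENDPOINT/language/analyze-text/jobs?api-version=2023-04-01" \
  -H "Ocp-Apim-Subscription-Key: $LANGUAGE_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "analysisInput": {"documents": [{"id": "1", "language": "en", "text": "<your text>"}]},
    "tasks": [{"kind": "AbstractiveSummarization", "parameters": {"sentenceCount": 3}}]
  }'
```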
As an example, consider the following paragraph of text:
The text summarization API request is processed upon receipt of the request by creating a job for the API backend. If the job succeeded, the output of the API is returned. The output is available for retrieval for 24 hours. After this time, the output is purged. Due to multilingual and emoji support, the response can contain text offsets. For more information, see [how to process offsets](../concepts/multilingual-emoji-support.md).
-If we use the above example, the API might return these summarized sentences:
+If we use the above example, the API might return these summaries:
**Extractive summarization**: - "At Microsoft, we are on a quest to advance AI beyond existing techniques, by taking a more holistic, human-centric approach to learning and understanding."
This documentation contains the following article types:
Conversation summarization supports the following features:
-* [**Issue/resolution summarization**](how-to/conversation-summarization.md#get-summaries-from-text-chats): A call center specific feature that gives a summary of issues and resolutions in conversations between customer-service agents and your customers.
+* [**Recap**](how-to/conversation-summarization.md#get-recap-and-follow-up-task-summarization): Summarizes a conversation into a brief paragraph.
+* [**Issue/resolution summarization**](quickstart.md?tabs=conversation-summarization%2Cwindows&pivots=rest-api#conversation-issue-and-resolution-summarization): Call center specific features that give a summary of issues and resolutions in conversations between customer-service agents and your customers.
* [**Chapter title summarization**](how-to/conversation-summarization.md#get-chapter-titles): Segments a conversation into chapters based on the topics discussed in the conversation, and gives suggested chapter titles of the input conversation.
-* [**Recap**](how-to/conversation-summarization.md#get-narrative-summarization): Summarizes a conversation into a brief paragraph.
* [**Narrative summarization**](how-to/conversation-summarization.md#get-narrative-summarization): Generates detail call notes, meeting notes or chat summaries of the input conversation.
-* [**Follow-up tasks**](how-to/conversation-summarization.md#get-narrative-summarization): Gives a list of follow-up tasks discussed in the input conversation.
As an example, consider the following example conversation:
Conversation summarization feature would simplify the text as follows:
-|Example summary | Format | Conversation aspect |
+|Example summary | Remark | Conversation aspect |
|--|--|--|
-| Customer wants to use the wifi connection on their Smart Brew 300. But it didn't work. | One or two sentences | issue |
-| Checked if the power light is blinking slowly. Checked the Contoso coffee app. It had no prompt. Tried to do a factory reset. | One or more sentences, generated from multiple lines of the transcript. | resolution |
+| Customer is unable to set up wifi connection for Smart Brew 300 espresso machine | a customer issue in a customer-and-agent conversation | issue |
+| The agent suggested several troubleshooting steps, including checking the wifi connection, checking the Contoso Coffee app, and performing a factory reset. However, none of these steps resolved the issue. The agent then put the customer on hold to look for another solution. | solutions tried in a customer-and-agent conversation | resolution |
+| The customer contacted the agent for assistance with setting up a wifi connection for their Smart Brew 300 espresso machine. The agent guided the customer through several troubleshooting steps, including a wifi connection check, checking the power light, and a factory reset. Despite following these steps, the issue persisted. The agent then decided to explore other potential solutions | Summarizes a conversation into one paragraph | recap |
+| Troubleshooting SmartBrew 300 Espresso Machine | Segments a conversation and generates a title for each segment; usually used together with the `narrative` aspect | chapterTitle |
+| The customer is having trouble setting up a wifi connection for their Smart Brew 300 espresso machine. The agent suggests several solutions, including a factory reset, but the issue persists. | Segments a conversation and generates a summary for each segment; usually used together with the `chapterTitle` aspect | narrative |
-# [Document summarization](#tab/document-summarization)
+# [Document summarization (Preview)](#tab/document-summarization)
This documentation contains the following article types:
-* **[Quickstarts](quickstart.md?pivots=rest-api&tabs=text-summarization)** are getting-started instructions to guide you through making requests to the service.
+* **[Quickstarts](quickstart.md?pivots=rest-api&tabs=document-summarization)** are getting-started instructions to guide you through making requests to the service.
* **[How-to guides](how-to/document-summarization.md)** contain instructions for using the service in more specific or customized ways.
-Document summarization uses natural language processing techniques to generate a summary for documents. There are two supported API approaches to automatic summarization: extractive and abstractive.
+Document summarization uses natural language processing techniques to generate a summary for documents.
+
+A native document refers to the file format used to create the original document such as Microsoft Word (docx) or a portable document file (pdf). Native document support eliminates the need for text preprocessing prior to using Azure AI Language resource capabilities. Currently, native document support is available for two types of summarization:
+* **Extractive summarization**: Produces a summary by extracting salient sentences within the document, together with the positional information of those sentences.
+
+ * Multiple extracted sentences: These sentences collectively convey the main idea of the document. They're original sentences extracted from the input document's content.
+ * Rank score: The rank score indicates how relevant a sentence is to the main topic. Text summarization ranks extracted sentences, and you can determine whether they're returned in the order they appear, or according to their rank.
+ For example, if you request a three-sentence summary, extractive summarization returns the three highest scored sentences.
+ * Positional information: The start position and length of extracted sentences.
+
+* **Abstractive summarization**: Generates a summary with concise, coherent sentences or words that aren't verbatim extracts from the original document.
+ * Summary texts: Abstractive summarization returns a summary for each contextual input range. A long input can be segmented so multiple groups of summary texts can be returned with their contextual input range.
+ * Contextual input range: The range within the input that was used to generate the summary text.
-A native document refers to the file format used to create the original document such as Microsoft Word (docx) or a portable document file (pdf). Native document support eliminates the need for text preprocessing prior to using Azure AI Language resource capabilities. Currently, native document support is available for both [**AbstractiveSummarization**](../summarization/how-to/document-summarization.md#try-text-abstractive-summarization) and [**ExtractiveSummarization**](../summarization/how-to/document-summarization.md#try-text-extractive-summarization) capabilities.
- Currently **Text Summarization** supports the following native document formats:
+ Currently **Document Summarization** supports the following native document formats:
|File type|File extension|Description|
|--|--|--|
ai-services Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/models.md
description: Learn about the different model capabilities that are available with Azure OpenAI. Previously updated : 06/19/2024 Last updated : 06/25/2024
The following Embeddings models are available with [Azure Government](/azure/azu
For Assistants you need a combination of a supported model, and a supported region. Certain tools and capabilities require the latest models. The following models are available in the Assistants API, SDK, Azure AI Studio and Azure OpenAI Studio. The following table is for pay-as-you-go. For information on Provisioned Throughput Unit (PTU) availability, see [provisioned throughput](./provisioned-throughput.md).
-| Region | `gpt-35-turbo (0613)` | `gpt-35-turbo (1106)`| `fine tuned gpt-3.5-turbo-0125` | `gpt-4 (0613)` | `gpt-4 (1106)` | `gpt-4 (0125)` |
-|--|||||||
-| Australia East | ✅ | ✅ | | ✅ |✅ | |
-| East US | ✅ | | | | | ✅ |
-| East US 2 | ✅ | | ✅ | ✅ |✅ | |
-| France Central | ✅ | ✅ | | ✅ |✅ | |
-| Japan East | ✅ | | | | | |
-| Norway East | | | | | ✅ | |
-| Sweden Central | ✅ |✅ | ✅ |✅ |✅| |
-| UK South | ✅ | ✅ | | | ✅ | ✅ |
-| West US | | ✅ | | | ✅ | |
-| West US 3 | | | | |✅ | |
+| Region | `gpt-35-turbo (0613)` | `gpt-35-turbo (1106)`| `fine tuned gpt-3.5-turbo-0125` | `gpt-4 (0613)` | `gpt-4 (1106)` | `gpt-4 (0125)` | `gpt-4o (2024-05-13)` |
+|--|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
+| Australia East | ✅ | ✅ | | ✅ |✅ | | |
+| East US | ✅ | | | | | ✅ | ✅ |
+| East US 2 | ✅ | | ✅ | ✅ |✅ | |✅|
+| France Central | ✅ | ✅ | | ✅ |✅ | | |
+| Japan East | ✅ | | | | | | |
+| Norway East | | | | | ✅ | | |
+| Sweden Central | ✅ |✅ | ✅ |✅ |✅| |✅|
+| UK South | ✅ | ✅ | | | ✅ | ✅ | |
+| West US | | ✅ | | | ✅ | |✅|
+| West US 3 | | | | |✅ | |✅|
## Model retirement
ai-services Dynamic Quota https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/dynamic-quota.md
Previously updated : 01/30/2024 Last updated : 06/27/2024
For dynamic quota, consider scenarios such as:
### When does dynamic quota come into effect?
-The Azure OpenAI backend decides if, when, and how much extra dynamic quota is added or removed from different deployments. It isn't forecasted or announced in advance, and isn't predictable. Azure OpenAI lets your application know there's more quota available by responding with an HTTP 429 and not letting more API calls through. To take advantage of dynamic quota, your application code must be able to issue more requests as HTTP 429 responses become infrequent.
+The Azure OpenAI backend decides if, when, and how much extra dynamic quota is added or removed from different deployments. It isn't forecasted or announced in advance, and isn't predictable. To take advantage of dynamic quota, your application code must be able to issue more requests as HTTP 429 responses become infrequent. Azure OpenAI lets your application know when you've hit your quota limit by responding with an HTTP 429 and not letting more API calls through.
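In practice that means a retry loop around the REST call. A minimal sketch, assuming a hypothetical deployment name and the usual environment variables for the endpoint and key:

```bash
# Retry on HTTP 429; as dynamic quota becomes available, 429s grow infrequent
# and more of these calls succeed.
while true; do
  status=$(curl -s -o response.json -w "%{http_code}" \
    -H "api-key: $AZURE_OPENAI_API_KEY" -H "Content-Type: application/json" \
    -d '{"messages": [{"role": "user", "content": "Hello"}]}' \
    "$AZURE_OPENAI_ENDPOINT/openai/deployments/my-deployment/chat/completions?api-version=2024-02-01")
  [ "$status" != "429" ] && break
  sleep 5   # a production client should honor the Retry-After header
done
```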
### How does dynamic quota change costs?
ai-services Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/migration.md
client = AzureOpenAI(
) response = client.chat.completions.create(
- model="gpt-35-turbo", # model = "deployment_name".
+ model="gpt-35-turbo", # model = "deployment_name"
messages=[ {"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": "Does Azure OpenAI support customer managed keys?"},
deployment_name='REPLACE_WITH_YOUR_DEPLOYMENT_NAME' #This will correspond to the
# Send a completion call to generate an answer print('Sending a test completion job') start_phrase = 'Write a tagline for an ice cream shop. '
-response = client.completions.create(model=deployment_name, prompt=start_phrase, max_tokens=10)
+response = client.completions.create(model=deployment_name, prompt=start_phrase, max_tokens=10) # model = "deployment_name"
print(response.choices[0].text) ```
async def main():
api_version = "2024-02-01", azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT") )
- response = await client.chat.completions.create(model="gpt-35-turbo", messages=[{"role": "user", "content": "Hello world"}])
+ response = await client.chat.completions.create(model="gpt-35-turbo", messages=[{"role": "user", "content": "Hello world"}]) # model = model deployment name
print(response.model_dump_json(indent=2))
client = AzureOpenAI(
) completion = client.chat.completions.create(
- model="deployment-name", # gpt-35-instant
+ model="deployment-name", # model = "deployment_name"
messages=[ { "role": "user",
client = openai.AzureOpenAI(
) completion = client.chat.completions.create(
- model=deployment,
+ model=deployment, # model = "deployment_name"
messages=[ { "role": "user",
ai-services Provisioned Throughput Onboarding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/provisioned-throughput-onboarding.md
Title: Azure OpenAI Service Provisioned Throughput Units (PTU) onboarding
description: Learn about provisioned throughput units onboarding and Azure OpenAI. Previously updated : 05/02/2024 Last updated : 06/25/2024
The **Provisioned** option and the capacity planner are only available in certai
|||
|Model | OpenAI model you plan to use. For example: GPT-4 |
| Version | Version of the model you plan to use, for example 0613 |
-| Prompt tokens | Number of tokens in the prompt for each call |
-| Generation tokens | Number of tokens generated by the model on each call |
-| Peak calls per minute | Peak concurrent load to the endpoint measured in calls per minute|
+| Peak calls per min | The number of calls per minute that are expected to be sent to the model |
+| Tokens in prompt call | The number of tokens in the prompt for each call to the model. Calls with larger prompts utilize more of the PTU deployment. Currently, this calculator assumes a single prompt value, so for workloads with wide variance, we recommend benchmarking your deployment on your traffic to determine the most accurate estimate of PTU needed for your deployment. |
+| Tokens in model response | The number of tokens generated from each call to the model. Calls with larger generation sizes utilize more of the PTU deployment. Currently, this calculator assumes a single response value, so for workloads with wide variance, we recommend benchmarking your deployment on your traffic to determine the most accurate estimate of PTU needed for your deployment. |
-After you fill in the required details, select **Calculate** to view the suggested PTU for your scenario.
+After you fill in the required details, select the **Calculate** button in the output column.
+
+The values in the output column are the estimated PTU units required for the provided workload inputs. The first output value represents the estimated PTU units required for the workload, rounded to the nearest PTU scale increment. The second output value represents the raw estimated PTU units required for the workload. The token totals are calculated using the following equation: `Total = Peak calls per minute * (Tokens in prompt call + Tokens in model response)`.
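For example, at 300 peak calls per minute with 1,000 prompt tokens and 500 response tokens per call, the estimated total is 300 * (1,000 + 500) = 450,000 tokens per minute.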
:::image type="content" source="../media/how-to/provisioned-onboarding/capacity-calculator.png" alt-text="Screenshot of the Azure OpenAI Studio landing page." lightbox="../media/how-to/provisioned-onboarding/capacity-calculator.png":::
ai-services Speech Container Batch Processing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-container-batch-processing.md
The batch kit container is available for free on [GitHub](https://github.com/mic
Use the [docker pull](https://docs.docker.com/engine/reference/commandline/pull/) command to download the latest batch kit container.

```bash
docker pull docker.io/batchkit/speech-batch-kit:latest
```
aks App Routing Dns Ssl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/app-routing-dns-ssl.md
The application routing add-on with nginx delivers the following:
- Azure Key Vault if you want to configure SSL termination and store certificates in the vault hosted in Azure. - Azure DNS if you want to configure global and private zone management and host them in Azure. - To attach an Azure Key Vault or Azure DNS Zone, you need the [Owner][rbac-owner], [Azure account administrator][rbac-classic], or [Azure co-administrator][rbac-classic] role on your Azure subscription.
+- All public DNS Zones must be in the same subscription and Resource Group.
## Connect to your AKS cluster
aks Coredns Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/coredns-custom.md
You can customize CoreDNS with AKS to perform on-the-fly DNS name rewrites.
log errors rewrite stop {
- name regex (.*)\.<domain to be rewritten>.com {1}.default.svc.cluster.local
+ name regex (.*)\.<domain to be rewritten>\.com {1}.default.svc.cluster.local
answer name (.*)\.default\.svc\.cluster\.local {1}.<domain to be rewritten>.com } forward . /etc/resolv.conf # you can redirect this to a specific DNS server such as 10.0.0.10, but that server must be able to resolve the rewritten domain name
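On AKS, rewrite rules like the one above are applied through the `coredns-custom` ConfigMap in the `kube-system` namespace. A minimal sketch, using a hypothetical `contoso.com` domain:

```bash
# Apply a rewrite rule via the coredns-custom ConfigMap, then recreate the
# CoreDNS pods so they pick up the change.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns-custom
  namespace: kube-system
data:
  rewrite.override: |
    rewrite stop {
      name regex (.*)\.contoso\.com {1}.default.svc.cluster.local
      answer name (.*)\.default\.svc\.cluster\.local {1}.contoso.com
    }
EOF
kubectl delete pod --namespace kube-system --selector k8s-app=kube-dns
```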
aks Edge Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/edge-zones.md
az aks create \
In this section you'll learn how to deploy a Kubernetes cluster in the Edge Zone. 1. Sign in to the [Azure portal](https://portal.azure.com).
aks Eks Edw Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/eks-edw-deploy.md
+
+ Title: Deploy AWS event-driven workflow (EDW) workload to Azure
+description: Learn how to deploy an AWS EDW workflow to Azure and how to validate your deployment.
+ Last updated : 06/20/2024++++
+# Deploy an AWS event-driven workflow (EDW) workload to Azure
+
+In this article, you will deploy an [AWS EDW workload][eks-edw-overview] to Azure.
+
+## Sign in to Azure
+
+1. Sign in to Azure using the [`az login`][az-login] command.
+
+ ```azurecli-interactive
+ az login
+ ```
+
+1. If your Azure account has multiple subscriptions, make sure to select the correct subscription. List the names and IDs of your subscriptions using the [`az account list`][az-account-list] command.
+
+ ```azurecli-interactive
+ az account list --query "[].{id: id, name:name }" --output table
+ ```
+
+1. Select a specific subscription using the [`az account set`][az-account-set] command.
+
+ ```azurecli-interactive
+ az account set --subscription $subscriptionId
+ ```
+
+## EDW workload deployment script
+
+You use the `deploy.sh` script in the `deployment` directory of the [GitHub repository][github-repo] to deploy the application to Azure.
+
+The script first checks that all of the [prerequisite tools][prerequisites] are installed. If not, the script terminates and displays an error message letting you know which prerequisites are missing. If this happens, review the prerequisites, install any missing tools, and then run the script again. The [Node autoprovisioning (NAP) for AKS][nap-aks] feature flag must be registered on your Azure subscription. If it isn't already registered, the script executes an Azure CLI command to register the feature flag.
+
+The script records the state of the deployment in a file called `deploy.state`, which is located in the `deployment` directory. You can use this file to set environment variables when deploying the app.
+
+As the script executes the commands to configure the infrastructure for the workflow, it checks that each command executes successfully. If any issues occur, an error message is displayed, and the execution stops.
+
+The script displays a log as it runs. You can persist the log by redirecting its output to the `install.log` file in the `logs` directory using the following command:
+
+```bash
+./deployment/infra/deploy.sh | tee ./logs/install.log
+```
+
+For more information, see the `./deployment/infra/deploy.sh` script in our [GitHub repository][github-repo].
+
+### Workload resources
+
+The deployment script creates the following Azure resources:
+
+- **Azure resource group**: The [Azure resource group][azure-resource-group] that stores the resources created by the deployment script.
+- **Azure Storage account**: The Azure Storage account that contains the queue where messages are sent by the producer app and read by the consumer app, and the table where the consumer app stores the processed messages.
+- **Azure container registry**: The container registry provides a repository for the container that deploys the refactored consumer app code.
+- **Azure Kubernetes Service (AKS) cluster**: The AKS cluster provides Kubernetes orchestration for the consumer app container and has the following features enabled:
+
+ - **Node autoprovisioning (NAP)**: The implementation of the [Karpenter](https://karpenter.sh) node autoscaler on AKS.
+ - **Kubernetes Event-driven Autoscaling (KEDA)**: [KEDA](https://keda.sh) enables pod scaling based on events, such as exceeding a specified queue depth threshold.
+ - **Workload identity**: Allows you to attach role-based access policies to pod identities for enhanced security.
+ - **Attached Azure container registry**: This feature allows the AKS cluster to pull images from repositories on the specified ACR instance.
+
+- **Application and system node pools**: The script also creates an application node pool and a system node pool in the AKS cluster; the system node pool has a taint to prevent application pods from being scheduled on it.
+- **AKS cluster managed identity**: The script assigns the `acrPull` role to this managed identity, which facilitates access to the attached Azure container registry for pulling images.
+- **Workload identity**: The script assigns the **Storage Queue Data Contributor** and **Storage Table Data Contributor** roles to provide role-based access control (RBAC) access to this managed identity, which is associated with the Kubernetes service account used as the identity for pods on which the consumer app containers are deployed.
+- **Two federated credentials**: One credential enables the managed identity to implement pod identity, and the other credential is used for the KEDA operator service account to provide access to the KEDA scaler to gather the metrics needed to control pod autoscaling.
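+
+As an illustration of how a federated credential like these is wired up, the following sketch uses the `az identity federated-credential create` command. The `$AKS_OIDC_ISSUER` variable is an assumption here (the deployment script derives the issuer URL from the cluster); the other variables correspond to names recorded in `deploy.state`:
+
+```azurecli-interactive
+# Sketch only: link the Kubernetes service account to the managed identity.
+az identity federated-credential create \
+    --name "$FEDERATED_IDENTITY_CREDENTIAL_NAME" \
+    --identity-name "$WORKLOAD_MANAGED_IDENTITY_NAME" \
+    --resource-group "$RESOURCE_GROUP" \
+    --issuer "$AKS_OIDC_ISSUER" \
+    --subject "system:serviceaccount:${AQS_TARGET_NAMESPACE}:${SERVICE_ACCOUNT}"
+```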
+
+## Deploy the EDW workload to Azure
+
+- Make sure you're in the `deployment` directory of the project and deploy the workload using the following commands:
+
+ ```bash
+ cd deployment
+ ./deploy.sh
+ ```
+
+## Validate deployment and run the workload
+
+Once the deployment script completes, you can deploy the workload on the AKS cluster.
+
+1. Source the `./deployment/environmentVariables.sh` script to set the required environment variables using the following command:
+
+ ```bash
+ source ./deployment/environmentVariables.sh
+ ```
+
+1. You need the information in the `./deployment/deploy.state` file to set environment variables for the names of the resources created in the deployment. Display the contents of the file using the following `cat` command:
+
+ ```bash
+ cat ./deployment/deploy.state
+ ```
+
+ Your output should show the following variables:
+
+ ```output
+ SUFFIX=
+ RESOURCE_GROUP=
+ AZURE_STORAGE_ACCOUNT_NAME=
+ AZURE_QUEUE_NAME=
+ AZURE_COSMOSDB_TABLE=
+ AZURE_CONTAINER_REGISTRY_NAME=
+ AKS_MANAGED_IDENTITY_NAME=
+ AKS_CLUSTER_NAME=
+ WORKLOAD_MANAGED_IDENTITY_NAME=
+ SERVICE_ACCOUNT=
+ FEDERATED_IDENTITY_CREDENTIAL_NAME=
+ KEDA_SERVICE_ACCT_CRED_NAME=
+ ```
+
+1. Read the file and create environment variables for the names of the Azure resources created by the deployment script using the following commands:
+
+ ```bash
+    while IFS= read -r line; do
+        echo "export $line"
+        export $line
+    done < ./deployment/deploy.state
+ ```
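+
+    Alternatively, because `deploy.state` contains plain `KEY=value` lines, an equivalent sketch is to source it with allexport enabled:
+
+    ```bash
+    set -a   # export every variable assigned while allexport is on
+    source ./deployment/deploy.state
+    set +a
+    ```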
+
+1. Get the AKS cluster credentials using the [`az aks get-credentials`][az-aks-get-credentials] command.
+
+ ```azurecli-interactive
+ az aks get-credentials --resource-group $RESOURCE_GROUP --name $AKS_CLUSTER_NAME
+ ```
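+
+    Optionally, confirm that `kubectl` now targets the new cluster (a quick sanity check):
+
+    ```bash
+    kubectl cluster-info
+    ```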
+
+1. Verify that the KEDA operator pods are running in the `kube-system` namespace on the AKS cluster using the [`kubectl get`][kubectl-get] command.
+
+ ```bash
+ kubectl get pods --namespace kube-system | grep keda
+ ```
+
+ Your output should look similar to the following example output:
+
+ :::image type="content" source="media/eks-edw-deploy/sample-keda-response.png" alt-text="Screenshot showing an example response from the command to verify that KEDA operator pods are running.":::
+
+## Generate simulated load
+
+Now, you generate simulated load using the producer app to populate the queue with messages.
+
+1. In a separate terminal window, navigate to the project directory.
+1. Set the environment variables using the steps in the [previous section](#validate-deployment-and-run-the-workload).
+1. Run the producer app using the following command:
+
+    ```bash
+ python3 ./app/keda/aqs-producer.py
+ ```
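+
+    To confirm that messages are arriving on the queue, you can peek at it with the Azure CLI (a hedged check; `--auth-mode login` assumes your signed-in identity holds a data-plane role on the queue):
+
+    ```azurecli-interactive
+    az storage message peek \
+        --queue-name $AZURE_QUEUE_NAME \
+        --account-name $AZURE_STORAGE_ACCOUNT_NAME \
+        --auth-mode login \
+        --num-messages 5
+    ```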
+
+1. Once the app starts sending messages, switch back to the other terminal window.
+1. Deploy the consumer app container onto the AKS cluster using the following commands:
+
+ ```bash
+ chmod +x ./deployment/keda/deploy-keda-app-workload-id.sh
+ ./deployment/keda/deploy-keda-app-workload-id.sh
+ ```
+
+ The deployment script (`deploy-keda-app-workload-id.sh`) performs templating on the application manifest YAML specification to pass environment variables to the pod. Review the following excerpt from this script:
+
+ ```bash
+ cat <<EOF | kubectl apply -f -
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+ name: $AQS_TARGET_DEPLOYMENT
+ namespace: $AQS_TARGET_NAMESPACE
+ spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: aqs-reader
+ template:
+ metadata:
+ labels:
+ app: aqs-reader
+ azure.workload.identity/use: "true"
+ spec:
+ serviceAccountName: $SERVICE_ACCOUNT
+ containers:
+ - name: keda-queue-reader
+ image: ${AZURE_CONTAINER_REGISTRY_NAME}.azurecr.io/aws2azure/aqs-consumer
+ imagePullPolicy: Always
+ env:
+ - name: AZURE_QUEUE_NAME
+ value: $AZURE_QUEUE_NAME
+ - name: AZURE_STORAGE_ACCOUNT_NAME
+ value: $AZURE_STORAGE_ACCOUNT_NAME
+ - name: AZURE_TABLE_NAME
+ value: $AZURE_TABLE_NAME
+ resources:
+ requests:
+ memory: "64Mi"
+ cpu: "250m"
+ limits:
+ memory: "128Mi"
+ cpu: "500m"
+ EOF
+ ```
+
+    The `spec/template` section is the pod template for the deployment. Setting the `azure.workload.identity/use` label to `true` specifies that you're using workload identity. The `serviceAccountName` in the pod specification specifies the Kubernetes service account to associate with the workload identity. Although the pod specification references an image in a private repository, no `imagePullSecret` is specified; the attached container registry lets the cluster pull the image directly.
+
+1. Verify that the script ran successfully using the [`kubectl get`][kubectl-get] command.
+
+ ```bash
+ kubectl get pods --namespace $AQS_TARGET_NAMESPACE
+ ```
+
+ You should see a single pod in the output.
+
+## Monitor scale out for pods and nodes with k9s
+
+You can use various tools to verify the operation of apps deployed to AKS, including the Azure portal and k9s. For more information on k9s, see the [k9s overview][k9s].
+
+1. Install k9s on your AKS cluster using the appropriate guidance for your environment in the [k9s installation overview][k9s-install].
+1. Create two terminal windows, one with a view of the pods and the other with a view of the nodes in the namespace you specified in the `AQS_TARGET_NAMESPACE` environment variable (the default value is `aqs-demo`), and start k9s in each window.
+
+ You should see something similar to the following:
+
+ :::image type="content" source="media/eks-edw-deploy/sample-k9s-view.png" lightbox="media/eks-edw-deploy/sample-k9s-view.png" alt-text="Screenshot showing an example of the K9s view across two windows.":::
+
+1. After you confirm that the consumer app container is installed and running on the AKS cluster, install the `ScaledObject` and trigger authentication used by KEDA for pod autoscaling by running the scaled object installation script (`keda-scaleobject-workload-id.sh`) using the following commands:
+
+ ```bash
+ chmod +x ./deployment/keda/keda-scaleobject-workload-id.sh
+ ./deployment/keda/keda-scaleobject-workload-id.sh
+ ```
+
+ The script also performs templating to inject environment variables where needed. Review the following excerpt from this script:
+
+ ```bash
+ cat <<EOF | kubectl apply -f -
+ apiVersion: keda.sh/v1alpha1
+ kind: ScaledObject
+ metadata:
+ name: aws2az-queue-scaleobj
+ namespace: ${AQS_TARGET_NAMESPACE}
+ spec:
+ scaleTargetRef:
+    name: ${AQS_TARGET_DEPLOYMENT} # K8s deployment to target
+  minReplicaCount: 0 # We don't want pods if the queue is empty
+ maxReplicaCount: 15 # We don't want to have more than 15 replicas
+ pollingInterval: 30 # How frequently we should go for metrics (in seconds)
+ cooldownPeriod: 10 # How many seconds should we wait for downscale
+ triggers:
+ - type: azure-queue
+ authenticationRef:
+ name: keda-az-credentials
+ metadata:
+ queueName: ${AZURE_QUEUE_NAME}
+ accountName: ${AZURE_STORAGE_ACCOUNT_NAME}
+ queueLength: '5'
+ activationQueueLength: '20' # threshold for when the scaler is active
+ cloud: AzurePublicCloud
+  ---
+ apiVersion: keda.sh/v1alpha1
+ kind: TriggerAuthentication
+ metadata:
+ name: keda-az-credentials
+ namespace: $AQS_TARGET_NAMESPACE
+ spec:
+ podIdentity:
+ provider: azure-workload
+ identityId: '${workloadManagedIdentityClientId}'
+ EOF
+ ```
+
+    The manifest describes two resources: the **`ScaledObject`**, which defines how KEDA scales the target deployment in response to the queue, and the **`TriggerAuthentication` object**, which specifies to KEDA that the scaled object uses pod identity for authentication. The `identityId` property refers to the managed identity used as the workload identity.
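+
+    To verify that KEDA registered the scaler, you can inspect the `ScaledObject` and the horizontal pod autoscaler that KEDA manages for it (a hedged check; KEDA conventionally prefixes the autoscaler name with `keda-hpa-`):
+
+    ```bash
+    kubectl get scaledobject aws2az-queue-scaleobj --namespace $AQS_TARGET_NAMESPACE
+    kubectl get hpa --namespace $AQS_TARGET_NAMESPACE
+    ```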
+
+ When the scaled object is correctly installed and KEDA detects the scaling threshold is exceeded, it begins scheduling pods. If you're using k9s, you should see something like this:
+
+ :::image type="content" source="media/eks-edw-deploy/sample-k9s-scheduling-pods.png" lightbox="media/eks-edw-deploy/sample-k9s-scheduling-pods.png" alt-text="Screenshot showing an example of the K9s view with scheduling pods.":::
+
+    If you allow the producer to fill the queue with enough messages, KEDA might need to schedule more pods than there are nodes to serve them. To accommodate this, Karpenter kicks in and starts provisioning nodes. If you're using k9s, you should see something like this:
+
+ :::image type="content" source="media/eks-edw-deploy/sample-k9s-scheduling-nodes.png" lightbox="media/eks-edw-deploy/sample-k9s-scheduling-nodes.png" alt-text="Screenshot showing an example of the K9s view with scheduling nodes.":::
+
+    In these two images, notice how the number of nodes whose names contain `aks-default` increased from one to three. If you stop the producer app from putting messages on the queue, the consumers eventually reduce the queue depth below the threshold, and both KEDA and Karpenter scale in. If you're using k9s, you should see something like this:
+
+ :::image type="content" source="media/eks-edw-deploy/sample-k9s-reduce.png" alt-text="Screenshot showing an example of the K9s view with reduced queue depth.":::
+
+## Clean up resources
+
+You can use the cleanup script (`/deployment/infra/cleanup.sh`) in our [GitHub repository][github-repo] to remove all the resources you created.
+
+## Next steps
+
+For more information on developing and running applications in AKS, see the following resources:
+
+- [Install existing applications with Helm in AKS][helm-aks]
+- [Deploy and manage a Kubernetes application from Azure Marketplace in AKS][k8s-aks]
+- [Deploy an application that uses OpenAI on AKS][openai-aks]
+
+<!-- LINKS -->
+[eks-edw-overview]: ./eks-edw-overview.md
+[az-login]: /cli/azure/authenticate-azure-cli-interactively#interactive-login
+[az-account-list]: /cli/azure/account#az_account_list
+[az-account-set]: /cli/azure/account#az_account_set
+[github-repo]: https://github.com/Azure-Samples/aks-event-driven-replicate-from-aws
+[prerequisites]: ./eks-edw-overview.md#prerequisites
+[azure-resource-group]: ../azure-resource-manager/management/overview.md
+[az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials
+[kubectl-get]: https://kubernetes.io/docs/reference/kubectl/generated/kubectl_get/
+[k9s]: https://k9scli.io/
+[k9s-install]: https://k9scli.io/topics/install/
+[helm-aks]: ./kubernetes-helm.md
+[k8s-aks]: ./deploy-marketplace.md
+[openai-aks]: ./open-ai-quickstart.md
+[nap-aks]: ./node-autoprovision.md
aks Eks Edw Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/eks-edw-overview.md
+
+ Title: Replicate an AWS EDW workload with KEDA and Karpenter in Azure Kubernetes Service (AKS)
+description: Learn how to replicate an AWS EKS event-driven workflow (EDW) workload with KEDA and Karpenter in AKS.
+ Last updated : 06/20/2024
+# Replicate an AWS event-driven workflow (EDW) workload with KEDA and Karpenter in Azure Kubernetes Service (AKS)
+
+In this article, you learn how to replicate an Amazon Web Services (AWS) Elastic Kubernetes Service (EKS) event-driven workflow (EDW) workload with [KEDA](https://keda.sh) and [Karpenter](https://karpenter.sh) in AKS.
+
+This workload is an implementation of the [competing consumers][competing-consumers] pattern using a producer/consumer app that facilitates efficient data processing by separating data production from data consumption. You use KEDA to scale pods running consumer processing and Karpenter to autoscale Kubernetes nodes.
+
+For a more detailed understanding of the AWS workload, see [Scalable and Cost-Effective Event-Driven Workloads with KEDA and Karpenter on Amazon EKS][edw-aws-eks].
+
+## Deployment process
+
+1. [**Understand the conceptual differences**](eks-edw-understand.md): Start by reviewing the differences between AWS and AKS in terms of services, architecture, and deployment.
+1. [**Rearchitect the workload**](eks-edw-rearchitect.md): Analyze the existing AWS workload architecture and identify the components or services that you need to redesign to fit AKS. You need to make changes to the workload infrastructure, application architecture, and deployment process.
+1. [**Update the application code**](eks-edw-refactor.md): Ensure your code is compatible with Azure APIs, services, and authentication models.
+1. [**Prepare for deployment**](eks-edw-prepare.md): Modify the AWS deployment process to use the Azure CLI.
+1. [**Deploy the workload**](eks-edw-deploy.md): Deploy the replicated workload in AKS and test the workload to ensure that it functions as expected.
+
+## Prerequisites
+
+- An Azure account. If you don't have one, create a [free account][azure-free] before you begin.
+- The **Owner** [Azure built-in role][azure-built-in-roles], or the **User Access Administrator** and **Contributor** built-in roles, on a subscription in your Azure account.
+- [Azure CLI][install-cli] version 2.56 or later.
+- [Azure Kubernetes Service (AKS) preview extension][aks-preview].
+- [jq][install-jq] version 1.5 or later.
+- [Python 3][install-python] or later.
+- [kubectl][install-kubectl] version 1.21.0 or later.
+- [Helm][install-helm] version 3.0.0 or later.
+- [Visual Studio Code][download-vscode] or equivalent.
+
+### Download the Azure application code
+
+The **completed** application code for this workflow is available in our [GitHub repository][github-repo]. Clone the repository to a directory called `aws-to-azure-edw-workshop` on your local machine by running the following command:
+
+```bash
+git clone https://github.com/Azure-Samples/aks-event-driven-replicate-from-aws ./aws-to-azure-edw-workshop
+```
+
+After you clone the repository, navigate to the `aws-to-azure-edw-workshop` directory and start Visual Studio Code by running the following commands:
+
+```bash
+cd aws-to-azure-edw-workshop
+code .
+```
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Understand platform differences][eks-edw-understand]
+
+<!-- LINKS -->
+[competing-consumers]: /azure/architecture/patterns/competing-consumers
+[edw-aws-eks]: https://aws.amazon.com/blogs/containers/scalable-and-cost-effective-event-driven-workloads-with-keda-and-karpenter-on-amazon-eks/
+[azure-free]: https://azure.microsoft.com/free/?WT.mc_id=A261C142F
+[azure-built-in-roles]: /azure/role-based-access-control/built-in-roles
+[install-cli]: /cli/azure/install-azure-cli
+[aks-preview]: ./draft.md#install-the-aks-preview-azure-cli-extension
+[install-jq]: https://jqlang.github.io/jq/
+[install-python]: https://www.python.org/downloads/
+[install-kubectl]: https://kubernetes.io/docs/tasks/tools/install-kubectl/
+[install-helm]: https://helm.sh/docs/intro/install/
+[download-vscode]: https://code.visualstudio.com/Download
+[github-repo]: https://github.com/Azure-Samples/aks-event-driven-replicate-from-aws
+[eks-edw-understand]: ./eks-edw-understand.md
aks Eks Edw Prepare https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/eks-edw-prepare.md
+
+ Title: Prepare to deploy the event-driven workflow (EDW) workload to Azure
+description: Take the necessary steps so you can deploy the EDW workload in Azure.
+ Last updated : 06/20/2024
+# Prepare to deploy the event-driven workflow (EDW) workload to Azure
+
+The AWS workload sample is deployed using Bash, CloudFormation, and AWS CLI. The consumer Python app is deployed as a container. The following sections describe how the Azure workflow is different. There are changes in the Bash scripts used to deploy the Azure Kubernetes Service (AKS) cluster and supporting infrastructure. Additionally, the Kubernetes deployment manifests are modified to configure KEDA to use an Azure Storage Queue scaler in place of the Amazon Simple Queue Service (SQS) scaler.
+
+The Azure workflow uses the [AKS Node Autoprovisioning (NAP)](/azure/aks/node-autoprovision) feature, which is based on Karpenter. This feature greatly simplifies deploying and using Karpenter on AKS by eliminating the need to deploy it explicitly with Helm. However, if you need to deploy Karpenter directly, you can do so using the AKS [Karpenter provider on GitHub](https://github.com/Azure/karpenter-provider-azure).
+
+## Configure Kubernetes deployment manifest
+
+AWS uses a Kubernetes deployment YAML manifest to deploy the workload to EKS. The AWS deployment YAML references SQS and DynamoDB for KEDA scalers, so we need to change those references to the equivalent values that the Azure KEDA scalers use to connect to the Azure infrastructure. To do so, configure the [Azure Storage Queue KEDA scaler][azure-storage-queue-scaler].
+
+The following code snippets show example YAML manifests for the AWS and Azure implementations.
+
+### AWS implementation
+
+```yaml
+ spec:
+ serviceAccountName: $SERVICE_ACCOUNT
+ containers:
+ - name: <sqs app name>
+ image: <name of Python app container>
+ imagePullPolicy: Always
+ env:
+ - name: SQS_QUEUE_URL
+ value: https://<Url To SQS>/<region>/<QueueName>.fifo
+ - name: DYNAMODB_TABLE
+ value: <table name>
+ - name: AWS_REGION
+ value: <region>
+```
+
+### Azure implementation
+
+```yaml
+ spec:
+ serviceAccountName: $SERVICE_ACCOUNT
+ containers:
+ - name: keda-queue-reader
+ image: ${AZURE_CONTAINER_REGISTRY_NAME}.azurecr.io/aws2azure/aqs-consumer
+ imagePullPolicy: Always
+ env:
+ - name: AZURE_QUEUE_NAME
+ value: $AZURE_QUEUE_NAME
+ - name: AZURE_STORAGE_ACCOUNT_NAME
+ value: $AZURE_STORAGE_ACCOUNT_NAME
+ - name: AZURE_TABLE_NAME
+ value: $AZURE_TABLE_NAME
+```
+
+## Set environment variables
+
+Before executing any of the deployment steps, you need to set some configuration information using the following environment variables:
+
+- `K8sversion`: The version of Kubernetes deployed on the AKS cluster.
+- `KARPENTER_VERSION`: The version of Karpenter deployed on the AKS cluster.
+- `SERVICE_ACCOUNT`: The name of the service account associated with the managed identity.
+- `AQS_TARGET_DEPLOYMENT`: The name of the consumer app container deployment.
+- `AQS_TARGET_NAMESPACE`: The namespace into which the consumer app is deployed.
+- `AZURE_QUEUE_NAME`: The name of the Azure Storage Queue.
+- `AZURE_TABLE_NAME`: The name of the Azure Storage Table that stores the processed messages.
+- `LOCAL_NAME`: A simple root for resource names constructed in the deployment scripts.
+- `LOCATION`: The Azure region where the deployment is located.
+- `TAGS`: Any user-defined tags along with their associated values.
+- `STORAGE_ACCOUNT_SKU`: The Azure Storage Account SKU.
+- `ACR_SKU`: The Azure Container Registry SKU.
+- `AKS_NODE_COUNT`: The number of nodes.
+
+You can review the `environmentVariables.sh` Bash script in the `deployment` directory of our [GitHub repository][github-repo]. These environment variables have defaults set, so you don't need to update the file unless you want to change the defaults. The names of the Azure resources are created dynamically in the `deploy.sh` script and are saved in the `deploy.state` file. You can use the `deploy.state` file to create environment variables for Azure resource names.
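+
+For illustration, you can override a default by exporting the variable before sourcing the script, assuming the script uses `${VAR:-default}`-style defaults; the values below are assumptions, not the workload's actual defaults:
+
+```bash
+# Hypothetical overrides; the defaults live in deployment/environmentVariables.sh
+export LOCATION="westus3"
+export AKS_NODE_COUNT=3
+source ./deployment/environmentVariables.sh
+```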
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Deploy the EDW workload to Azure][eks-edw-deploy]
+
+<!-- LINKS -->
+[azure-storage-queue-scaler]: https://keda.sh/docs/1.4/scalers/azure-storage-queue/
+[github-repo]: https://github.com/Azure-Samples/aks-event-driven-replicate-from-aws
+[eks-edw-deploy]: ./eks-edw-deploy.md
aks Eks Edw Rearchitect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/eks-edw-rearchitect.md
+
+ Title: Rearchitect the event-driven workflow (EDW) workload for Azure Kubernetes Service (AKS)
+description: Learn about architectural differences for replicating the AWS EKS scaling with KEDA and Karpenter event-driven workflow (EDW) workload in AKS.
+ Last updated : 06/20/2024
+# Rearchitect the event-driven workflow (EDW) workload for Azure Kubernetes Service (AKS)
+
+Now that you understand some key platform differences between AWS and Azure relevant to this workload, let's take a look at the workload architecture and how we can change it to work on AKS.
+
+## AWS workload architecture
+
+The AWS workload is a basic example of the [competing consumers design pattern][competing-consumers]. The AWS implementation is a reference architecture for managing scale and cost for event-driven workflows using [Kubernetes][kubernetes], [Kubernetes Event-driven Autoscaling (KEDA)][keda], and [Karpenter][karpenter].
+
+A producer app generates load by sending messages to a queue, and a consumer app running in a Kubernetes pod processes the messages and writes the results to a database. KEDA manages pod autoscaling through a declarative binding to the producer queue, and Karpenter manages node autoscaling with just enough compute to optimize for cost. Authentication to the queue and the database uses OAuth-based [service account token volume projection][service-account-volume-projection].
+
+The workload consists of an AWS EKS cluster to orchestrate consumers reading messages from an Amazon Simple Queue Service (SQS) queue and saving processed messages to an AWS DynamoDB table. A producer app generates messages and queues them in the AWS SQS queue. KEDA and Karpenter dynamically scale the number of EKS nodes and pods used for the consumers.
+
+The following diagram represents the architecture of the EDW workload in AWS:
++
+## Map AWS services to Azure services
+
+To recreate the AWS workload in Azure with minimal changes, use an Azure equivalent for each AWS service and keep authentication methods similar to the original. This example doesn't require the [advanced features][advanced-features-service-bus-event-hub] of Azure Service Bus or Azure Event Hubs. Instead, you can use [Azure Queue Storage][azure-queue-storage] to queue up work, and [Azure Table storage][azure-table-storage] to store results.
+
+The following table summarizes the service mapping:
+
+| **Service mapping** | **AWS service** | **Azure service** |
+|:--|:--|:--|
+| Queuing | Simple Queue Service | [Azure Queue Storage][azure-queue-storage] |
+| Persistence | DynamoDB (No SQL) | [Azure Table storage][azure-table-storage] |
+| Orchestration | Elastic Kubernetes Service (EKS) | [Azure Kubernetes Service (AKS)][aks] |
+| Identity | AWS IAM | [Microsoft Entra][microsoft-entra] |
+
+### Azure workload architecture
+
+The following diagram represents the architecture of the Azure EDW workload using the [AWS to Azure service mapping](#map-aws-services-to-azure-services):
++
+## Compute options
+
+Depending on cost considerations and resilience to possible node eviction, you can choose from different types of compute.
+
+In AWS, you can choose between on-demand compute (more expensive but no eviction risk) or Spot instances (cheaper but with eviction risk). In AKS, you can choose an [on-demand node pool][on-demand-node-pool] or a [Spot node pool][spot-node-pool] depending on your workload's needs.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Refactor application code for AKS][eks-edw-refactor]
+
+<!-- LINKS -->
+[competing-consumers]: /azure/architecture/patterns/competing-consumers
+[kubernetes]: https://kubernetes.io/
+[keda]: https://keda.sh/
+[karpenter]: https://karpenter.sh/
+[service-account-volume-projection]: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#serviceaccount-token-volume-projection
+[advanced-features-service-bus-event-hub]: ../service-bus-messaging/service-bus-azure-and-service-bus-queues-compared-contrasted.md
+[azure-queue-storage]: ../storage/queues/storage-queues-introduction.md
+[azure-table-storage]: ../storage/tables/table-storage-overview.md
+[aks]: ./what-is-aks.md
+[microsoft-entra]: /entra/fundamentals/whatis
+[on-demand-node-pool]: ./create-node-pools.md
+[spot-node-pool]: ./spot-node-pool.md
+[eks-edw-refactor]: ./eks-edw-refactor.md
aks Eks Edw Refactor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/eks-edw-refactor.md
+
+ Title: Update application code for the event-driven workflow (EDW) workload
+description: Learn how to update the application code of the AWS EKS event-driven workflow (EDW) workload to replicate it in AKS.
+ Last updated : 06/20/2024
+# Update application code for the event-driven workflow (EDW) workload
+
+This article outlines key application code updates to replicate the EDW workload in Azure using Azure SDKs to work with Azure services.
+
+## Data access code
+
+### AWS implementation
+
+The AWS workload relies on AWS services and their associated data access AWS SDKs. We already [mapped AWS services to equivalent Azure services][map-aws-to-azure], so we can now create the code to access data for the producer queue and consumer results database table in Python using Azure SDKs.
+
+### Azure implementation
+
+For the data plane, the producer message body (payload) is JSON, and it doesn't need any schema changes for Azure. The original consumer app saves the processed messages in a DynamoDB table. With minor modifications to the consumer app code, we can store the processed messages in an Azure Storage Table.
+
+## Authentication code
+
+### AWS implementation
+
+The AWS workload uses a resource-based policy that defines full access to an Amazon Simple Queue Service (SQS) resource:
+
+```json
+{
+ "Version": "2012-10-17",
+ "Statement": [
+ {
+ "Effect": "Allow",
+ "Action": "sqs:*",
+ "Resource": "*"
+ }
+ ]
+}
+```
+
+The AWS workload uses a resource-based policy that defines full access to a DynamoDB resource:
+
+```json
+{
+ "Version": "2012-10-17",
+ "Statement": [
+ {
+ "Effect": "Allow",
+ "Action": "dynamodb:*",
+ "Resource": "*"
+ }
+ ]
+}
+```
+
+In the AWS workload, you assign these policies using the AWS CLI:
+
+```bash
+aws iam create-policy --policy-name sqs-sample-policy --policy-document <filepath/filename>.json
+aws iam create-policy --policy-name dynamodb-sample-policy --policy-document <filepath/filename>.json
+aws iam create-role --role-name keda-sample-iam-role --assume-role-policy-document <filepath/filename>.json
+
+aws iam attach-role-policy --role-name keda-sample-iam-role --policy-arn=arn:aws:iam::${<AWSAccountID>}:policy/sqs-sample-policy
+aws iam attach-role-policy --role-name keda-sample-iam-role --policy-arn=arn:aws:iam::${<AWSAccountID>}:policy/dynamodb-sample-policy
+
+# Set up a trust relationship between the Kubernetes federated identity credential and the IAM role, then map the IAM role to the service account via kubectl annotate serviceaccount
+```
+
+### Azure implementation
+
+Let's explore how to perform similar AWS service-to-service logic within the Azure environment using AKS.
+
+You apply two Azure RBAC role definitions to control data plane access to the Azure Storage Queue and the Azure Storage Table. These roles are like the resource-based policies that AWS uses to control access to SQS and DynamoDB. Azure RBAC roles aren't bundled with the resource. Instead, you assign the roles to a service principal associated with a given resource.
+
+In the Azure implementation of the EDW workload, you assign the roles to a user-assigned managed identity linked to a workload identity in an AKS pod. The Azure Python SDKs for the Azure Storage Queue and Azure Storage Table automatically use the context of the security principal to access data in both resources.
+
+You use the [**Storage Queue Data Contributor**][storage-queue-data-contributor] role to allow the role assignee to read, write, or delete messages in the Azure Storage Queue, and the [**Storage Table Data Contributor**][storage-table-data-contributor] role to permit the assignee to read, write, or delete data in an Azure Storage Table.
+
+The following steps show how to create a managed identity and assign the **Storage Queue Data Contributor** and **Storage Table Data Contributor** roles using the Azure CLI:
+
+1. Create a managed identity using the [`az identity create`][az-identity-create] command.
+
+ ```azurecli-interactive
+ managedIdentity=$(az identity create \
+ --resource-group $resourceGroup \
+        --name $managedIdentityName)
+ ```
+
+1. Assign the **Storage Queue Data Contributor** role to the managed identity using the [`az role assignment create`][az-role-assignment-create] command.
+
+ ```azurecli-interactive
+    principalId=$(echo $managedIdentity | jq -r '.principalId')
+
+ az role assignment create \
+ --assignee-object-id $principalId \
+        --assignee-principal-type ServicePrincipal \
+ --role "Storage Queue Data Contributor" \
+ --scope $resourceId
+ ```
+
+1. Assign the **Storage Table Data Contributor** role to the managed identity using the [`az role assignment create`][az-role-assignment-create] command.
+
+ ```azurecli-interactive
+ az role assignment create \
+ --assignee-object-id $principalId \
+        --assignee-principal-type ServicePrincipal \
+ --role "Storage Table Data Contributor" \
+ --scope $resourceId
+ ```
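+
+    The `--scope` parameter in these commands expects a full resource ID. As a hedged illustration (the `$storageAccountName` variable is an assumption), you can capture the storage account's ID like this:
+
+    ```azurecli-interactive
+    resourceId=$(az storage account show \
+        --resource-group $resourceGroup \
+        --name $storageAccountName \
+        --query id \
+        --output tsv)
+    ```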
+
+To see a working example, refer to the `deploy.sh` script in our [GitHub repository][github-repo].
+
+## Producer code
+
+### AWS implementation
+
+The AWS workload uses the AWS boto3 Python library to interact with AWS SQS queues. The AWS IAM `AssumeRole` capability authenticates to the SQS endpoint using the IAM identity associated with the EKS pod hosting the application.
+
+```python
+import boto3
+# other imports removed for brevity
+sqs_queue_url = "https://<region>.amazonaws.com/<queueid>/source-queue.fifo"
+sqs_client = boto3.client("sqs", region_name="<region>")
+response = sqs_client.send_message(
+ QueueUrl = sqs_queue_url,
+ MessageBody = 'messageBody1',
+ MessageGroupId='messageGroup1')
+```
+
+### Azure implementation
+
+The Azure implementation uses the [Azure SDK for Python][azure-sdk-python] and passwordless OAuth authentication to interact with Azure Storage Queue services. The [`DefaultAzureCredential`][default-azure-credential] Python class is workload identity aware and uses the managed identity associated with workload identity to authenticate to the storage queue.
+
+The following example shows how to authenticate to an Azure Storage Queue using the `DefaultAzureCredential` class:
+
+```python
+from azure.identity import DefaultAzureCredential
+from azure.storage.queue import QueueClient
+# other imports removed for brevity
+
+# authenticate to the storage queue.
+account_url = "https://<storageaccountname>.queue.core.windows.net"
+default_credential = DefaultAzureCredential()
+aqs_queue_client = QueueClient(account_url, queue_name=queue_name, credential=default_credential)
+
+aqs_queue_client.create_queue()
+aqs_queue_client.send_message('messageBody1')
+```
+
+You can review the code for the queue producer (`aqs-producer.py`) in our [GitHub repository][github-repo].
+
+## Consumer code
+
+### AWS implementation
+
+The original AWS code for DynamoDB access uses the AWS boto3 Python library to interact with DynamoDB. The consumer part of the workload uses the same code as the producer for connecting to the AWS SQS queue to read messages. The consumer also contains Python code to connect to DynamoDB using the AWS IAM `AssumeRole` capability to authenticate to the DynamoDB endpoint using the IAM identity associated with the EKS pod hosting the application.
+
+```python
+# presumes policy deployment ahead of time such as: aws iam create-policy --policy-name <policy_name> --policy-document <policy_document.json>
+dynamodb = boto3.resource('dynamodb', region_name='<region>')
+table = dynamodb.Table('<dynamodb_table_name>')
+table.put_item(
+ Item = {
+ 'id':'<guid>',
+ 'data':jsonMessage["<message_data>"],
+ 'srcStamp':jsonMessage["<source_timestamp_from_message>"],
+ 'destStamp':'<current_timestamp_now>',
+ 'messageProcessingTime':'<duration>'
+ }
+)
+```
+
+### Azure implementation
+
+The Azure implementation uses the Azure SDK for Python to interact with Azure Storage Tables.
+
+Now you need the consumer code to authenticate to Azure Storage Table. As discussed earlier, the schema used in the preceding section with DynamoDB is incompatible with Azure Storage Table. You use a table schema that's compatible with Azure Cosmos DB to store the same data as the AWS workload stores in DynamoDB.
+
+The following example shows the code required for Azure:
+
+```python
+from azure.identity import DefaultAzureCredential
+from azure.storage.queue import QueueClient
+from azure.data.tables import TableServiceClient
+# other imports removed for brevity
+
+creds = DefaultAzureCredential()
+table = TableServiceClient(
+    endpoint=f"https://{storage_account_name}.table.core.windows.net/",
+    credential=creds).get_table_client(table_name=azure_table)
+
+entity = {
+    'PartitionKey': _id,
+    'RowKey': str(messageProcessingTime.total_seconds()),
+    'data': jsonMessage['msg'],
+    'srcStamp': jsonMessage['srcStamp'],
+    'dateStamp': current_dateTime
+}
+
+# azure-data-tables uses create_entity to insert a new entity
+response = table.create_entity(entity=entity)
+```
+
+Unlike DynamoDB, the Azure Storage Table code specifies both `PartitionKey` and `RowKey`. The `PartitionKey` plays a role similar to the unique identifier (ID) in DynamoDB: it uniquely identifies a partition in a logical container in Azure Storage Table. The `RowKey` uniquely identifies a row within a given partition.
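+
+To spot-check what the consumer writes, you can also query the table from the Azure CLI (a hedged sketch; `--auth-mode login` assumes your identity holds the **Storage Table Data Contributor** role):
+
+```azurecli-interactive
+az storage entity query \
+    --table-name $AZURE_TABLE_NAME \
+    --account-name $AZURE_STORAGE_ACCOUNT_NAME \
+    --auth-mode login \
+    --num-results 5
+```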
+
+You can review the complete producer and consumer code in our [GitHub repository][github-repo].
+
+## Create container images and push to Azure Container Registry
+
+Now, you can build the container images and push them to [Azure Container Registry (ACR)][acr-intro].
+
+In the `app` directory of the cloned repository, a shell script called `docker-command.sh` builds the container images and pushes them to ACR. Open the `.sh` file and review the code. The script builds the producer and consumer container images and pushes them to ACR. For more information, see [Introduction to container registries in Azure][acr-intro] and [Push and pull images in ACR][push-pull-acr].
+
+To build the container images and push them to ACR, make sure the environment variable `AZURE_CONTAINER_REGISTRY` is set to the name of the registry you want to push the images to, then run the following command:
+
+```bash
+./app/docker-command.sh
+```
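+
+Afterward, you can confirm that the images landed in the registry (a quick, hedged check):
+
+```azurecli-interactive
+az acr repository list --name $AZURE_CONTAINER_REGISTRY --output table
+```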
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Prepare to deploy the EDW workload to Azure][eks-edw-prepare]
+
+<!-- LINKS -->
+[map-aws-to-azure]: ./eks-edw-rearchitect.md#map-aws-services-to-azure-services
+[storage-queue-data-contributor]: ../role-based-access-control/built-in-roles.md#storage
+[storage-table-data-contributor]: ../role-based-access-control/built-in-roles.md#storage
+[az-identity-create]: /cli/azure/identity#az_identity_create
+[az-role-assignment-create]: /cli/azure/role/assignment#az_role_assignment_create
+[github-repo]: https://github.com/Azure-Samples/aks-event-driven-replicate-from-aws
+[azure-sdk-python]: https://github.com/Azure/azure-sdk-for-python
+[default-azure-credential]: ../storage/queues/storage-quickstart-queues-python.md#authorize-access-and-create-a-client-object
+[acr-intro]: ../container-registry/container-registry-intro.md
+[push-pull-acr]: ../container-registry/container-registry-get-started-docker-cli.md
+[eks-edw-prepare]: ./eks-edw-prepare.md
aks Eks Edw Understand https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/eks-edw-understand.md
+
+ Title: Understand platform differences for the event-driven workflow (EDW) workload
+description: Learn about the key differences between the AWS and Azure platforms related to the EDW scaling workload.
+ Last updated : 06/20/2024
+# Understand platform differences for the event-driven workflow (EDW) workload
+
+Before you replicate the EDW workload in Azure, ensure you have a solid understanding of the operational differences between the AWS and Azure platforms.
+
+This article walks through some of the key concepts for this workload and provides links to resources for more information.
+
+## Identity and access management
+
+The AWS EDW workload uses AWS resource policies that assign AWS Identity and Access Management (IAM) roles to code running in Kubernetes pods on EKS. These roles allow those pods to access external resources such as queues or databases.
+
+Azure implements [role-based access control (RBAC)][azure-rbac] differently than AWS. In Azure, role assignments are **associated with a security principal** (user, group, managed identity, or service principal), and that security principal is associated with a resource.
+
+## Authentication between services
+
+The AWS EDW workload uses service-to-service authentication to connect with a queue and a database. AWS EKS uses `AssumeRole`, a feature of IAM, to delegate permissions to AWS services and resources. This implementation allows services to assume an IAM role that grants specific access rights, ensuring secure and limited interactions between services.
+
+For Amazon Simple Queue Service (SQS) and DynamoDB database access using service-to-service authentication, the AWS workflow uses `AssumeRole` with EKS, which is an implementation of Kubernetes [service account token volume projection][service-account-volume-projection]. In AWS, when an entity assumes an IAM role, the entity temporarily gains some extra permissions. This way, the entity can perform actions and access resources granted by the assumed role, without changing its own permissions permanently. After the assumed role's session token expires, the entity loses the extra permissions. An IAM policy is deployed that permits code running in an EKS pod to authenticate to DynamoDB as described in the policy definition.
+
+With AKS, you can use [Microsoft Entra Managed Identity][entra-managed-id] with [Microsoft Entra Workload ID][entra-workload-id].
+
+A [user-assigned managed identity][uami] is created and granted access to an Azure Storage Table by assigning it the **Storage Table Data Contributor** role. The managed identity is also granted access to an Azure Storage Queue by assigning it the **Storage Queue Data Contributor** role. These role assignments are scoped to specific resources, allowing the managed identity to read messages in a specific Azure Storage Queue and write them to a specific Azure Storage Table. The managed identity is then mapped to a Kubernetes workload identity that will be associated with the identity of the pods where the app containers are deployed. For more information, see [Use Microsoft Entra Workload ID with AKS][use-entra-aks].
+
+On the client side, the Python Azure SDKs support a transparent means of leveraging the context of a workload identity, which eliminates the need for the developer to perform explicit authentication. Code running in a namespace/pod on AKS with an established workload identity can authenticate to external services using the mapped managed identity.
+
+## Resources
+
+The following resources can help you learn more about the differences between AWS and Azure for the technologies used in the EDW workload:
+
+| **Topic** | **AWS to Azure resource** |
+|||
+| Services | [AWS to Azure services comparison][aws-azure-services] |
+| Identity | [Mapping AWS IAM concepts to similar ones in Azure][aws-azure-identity] |
+| Accounts | [Azure AWS accounts and subscriptions][aws-azure-accounts] |
+| Resource management | [Resource containers][aws-azure-resources] |
+| Messaging | [AWS SQS to Azure Queue Storage][aws-azure-messaging] |
+| Kubernetes | [AKS for Amazon EKS professionals][aws-azure-kubernetes] |
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Rearchitect the workload for AKS][eks-edw-rearchitect]
+
+<!-- LINKS -->
+[azure-rbac]: ../role-based-access-control/overview.md
+[entra-workload-id]: /azure/architecture/aws-professional/eks-to-aks/workload-identity#microsoft-entra-workload-id-for-kubernetes
+[service-account-volume-projection]: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#serviceaccount-token-volume-projection
+[entra-managed-id]: /entra/identity/managed-identities-azure-resources/overview
+[uami]: /azure/templates/microsoft.managedidentity/userassignedidentities?pivots=deployment-language-bicep
+[use-entra-aks]: ./workload-identity-overview.md#how-it-works
+[aws-azure-services]: /azure/architecture/aws-professional/services
+[aws-azure-identity]: https://techcommunity.microsoft.com/t5/fasttrack-for-azure/mapping-aws-iam-concepts-to-similar-ones-in-azure/ba-p/3612216
+[aws-azure-accounts]: /azure/architecture/aws-professional/accounts
+[aws-azure-resources]: /azure/architecture/aws-professional/resources
+[aws-azure-messaging]: /azure/architecture/aws-professional/messaging#simple-queue-service
+[aws-azure-kubernetes]: /azure/architecture/aws-professional/eks-to-aks/
+[eks-edw-rearchitect]: ./eks-edw-rearchitect.md
aks Howto Deploy Java Liberty App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/howto-deploy-java-liberty-app.md
If you're interested in providing feedback or working closely on your migration
## Prerequisites
-* An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+* An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
* Install the [Azure CLI](/cli/azure/install-azure-cli). If you're running on Windows or macOS, consider running Azure CLI in a Docker container. For more information, see [How to run the Azure CLI in a Docker container](/cli/azure/run-azure-cli-docker). * Sign in to the Azure CLI by using the [az login](/cli/azure/reference-index#az-login) command. To finish the authentication process, follow the steps displayed in your terminal. For other sign-in options, see [Sign in with the Azure CLI](/cli/azure/authenticate-azure-cli). * When you're prompted, install the Azure CLI extension on first use. For more information about extensions, see [Use extensions with the Azure CLI](/cli/azure/azure-cli-extensions-overview).
aks Howto Deploy Java Quarkus App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/howto-deploy-java-quarkus-app.md
This article shows you how to quickly deploy Red Hat Quarkus on Azure Kubernetes
## Prerequisites -- [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+- [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
- Azure Cloud Shell has all of these prerequisites preinstalled. For more, see [Quickstart for Azure Cloud Shell](/azure/cloud-shell/quickstart). - If you're running the commands in this guide locally (instead of using Azure Cloud Shell), complete the following steps: - Prepare a local machine with Unix-like operating system installed (for example, Ubuntu, macOS, or Windows Subsystem for Linux).
This article shows you how to quickly deploy Red Hat Quarkus on Azure Kubernetes
- Install [cURL](https://curl.se/download.html). - Install the [Quarkus CLI](https://quarkus.io/guides/cli-tooling). - Azure CLI for Unix-like environments. This article requires only the Bash variant of Azure CLI.
- - [!INCLUDE [azure-cli-login](../../includes/azure-cli-login.md)]
+ - [!INCLUDE [azure-cli-login](~/reusable-content/ce-skilling/azure/includes/azure-cli-login.md)]
- This article requires at least version 2.31.0 of Azure CLI. If using Azure Cloud Shell, the latest version is already installed. ## Create the app project
aks Howto Deploy Java Wls App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/howto-deploy-java-wls-app.md
If you're interested in providing feedback or working closely on your migration
## Prerequisites -- [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+- [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
- Ensure the Azure identity you use to sign in and complete this article has either the [Owner](/azure/role-based-access-control/built-in-roles#owner) role in the current subscription or the [Contributor](/azure/role-based-access-control/built-in-roles#contributor) and [User Access Administrator](/azure/role-based-access-control/built-in-roles#user-access-administrator) roles in the current subscription. For an overview of Azure roles, see [What is Azure role-based access control (Azure RBAC)?](/azure/role-based-access-control/overview) For details on the specific roles required by WLS on AKS, see [Azure built-in roles](/azure/role-based-access-control/built-in-roles). > [!NOTE] > These roles must be granted at the subscription level, not the resource group level.
aks Quick Kubernetes Deploy Azd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-azd.md
Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you qui
This quickstart assumes a basic understanding of Kubernetes concepts. For more information, see [Kubernetes core concepts for Azure Kubernetes Service (AKS)][kubernetes-concepts]. -- [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
+- [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
- For ease of use, run this sample on Bash or PowerShell in the [Azure Cloud Shell](/azure/cloud-shell/overview). For more information, see [Quickstart for Azure Cloud Shell](/azure/cloud-shell/quickstart).
aks Quick Kubernetes Deploy Bicep Extensibility Kubernetes Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-bicep-extensibility-kubernetes-provider.md
Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you qui
This quickstart assumes a basic understanding of Kubernetes concepts. For more information, see [Kubernetes core concepts for Azure Kubernetes Service (AKS)][kubernetes-concepts]. -- [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
+- [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
- Make sure that the identity you use to create your cluster has the appropriate minimum permissions. For more details on access and identity for AKS, see [Access and identity options for Azure Kubernetes Service (AKS)](../concepts-identity.md). - To set up your environment for Bicep development, see [Install Bicep tools](../../azure-resource-manager/bicep/install.md). After completing the steps, you have [Visual Studio Code](https://code.visualstudio.com/) and the [Bicep extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-bicep). You also have either the latest [Azure CLI](/cli/azure/) version or the latest [Azure PowerShell module](/powershell/azure/new-azureps-module-az). - To create an AKS cluster using a Bicep file, you provide an SSH public key. If you need this resource, see the following section. Otherwise, skip to [Review the Bicep file](#review-the-bicep-file).
aks Quick Kubernetes Deploy Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-bicep.md
Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you qui
* This quickstart assumes a basic understanding of Kubernetes concepts. For more information, see [Kubernetes core concepts for Azure Kubernetes Service (AKS)][kubernetes-concepts]. * You need an Azure account with an active subscription. If you don't have one, [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). * To learn more about creating a Windows Server node pool, see [Create an AKS cluster that supports Windows Server containers](quick-windows-container-deploy-cli.md).
-* [!INCLUDE [About Bicep](../../../includes/resource-manager-quickstart-bicep-introduction.md)]
+* [!INCLUDE [About Bicep](~/reusable-content/ce-skilling/azure/includes/resource-manager-quickstart-bicep-introduction.md)]
### [Azure CLI](#tab/azure-cli)
aks Quick Kubernetes Deploy Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-cli.md
Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you qui
This quickstart assumes a basic understanding of Kubernetes concepts. For more information, see [Kubernetes core concepts for Azure Kubernetes Service (AKS)][kubernetes-concepts]. -- [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
+- [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
[!INCLUDE [azure-cli-prepare-your-environment-no-header.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment-no-header.md)]
aks Quick Kubernetes Deploy Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-portal.md
Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you qui
This quickstart assumes a basic understanding of Kubernetes concepts. For more information, see [Kubernetes core concepts for Azure Kubernetes Service (AKS)][kubernetes-concepts]. -- [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
+- [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
- If you're unfamiliar with the Azure Cloud Shell, review [Overview of Azure Cloud Shell](../../cloud-shell/overview.md). - Make sure that the identity you use to create your cluster has the appropriate minimum permissions. For more details on access and identity for AKS, see [Access and identity options for Azure Kubernetes Service (AKS)](../concepts-identity.md).
aks Quick Kubernetes Deploy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-powershell.md
Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you qui
This article assumes a basic understanding of Kubernetes concepts. For more information, see [Kubernetes core concepts for Azure Kubernetes Service (AKS)](../concepts-clusters-workloads.md). -- [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
+- [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
- For ease of use, try the PowerShell environment in [Azure Cloud Shell](/azure/cloud-shell/overview). For more information, see [Quickstart for Azure Cloud Shell](/azure/cloud-shell/quickstart). If you want to use PowerShell locally, then install the [Az PowerShell](/powershell/azure/new-azureps-module-az) module and connect to your Azure account using the [Connect-AzAccount](/powershell/module/az.accounts/Connect-AzAccount) cmdlet. Make sure that you run the commands with administrative privileges. For more information, see [Install Azure PowerShell][install-azure-powershell].
aks Quick Kubernetes Deploy Rm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-rm-template.md
Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you qui
- Deploy an AKS cluster using an Azure Resource Manager template. - Run a sample multi-container application with a group of microservices and web front ends simulating a retail scenario. > [!NOTE] > To get started with quickly provisioning an AKS cluster, this article includes steps to deploy a cluster with default settings for evaluation purposes only. Before deploying a production-ready cluster, we recommend that you familiarize yourself with our [baseline reference architecture][baseline-reference-architecture] to consider how it aligns with your business requirements.
Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you qui
This article assumes a basic understanding of Kubernetes concepts. For more information, see [Kubernetes core concepts for Azure Kubernetes Service (AKS)](../concepts-clusters-workloads.md). -- [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
+- [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
- Make sure that the identity you use to create your cluster has the appropriate minimum permissions. For more details on access and identity for AKS, see [Access and identity options for Azure Kubernetes Service (AKS)](../concepts-identity.md).
aks Quick Windows Container Deploy Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-windows-container-deploy-cli.md
Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you qui
This quickstart assumes a basic understanding of Kubernetes concepts. For more information, see [Kubernetes core concepts for Azure Kubernetes Service (AKS)](../concepts-clusters-workloads.md). -- [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
+- [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
[!INCLUDE [azure-cli-prepare-your-environment-no-header.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment-no-header.md)]
aks Quick Windows Container Deploy Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-windows-container-deploy-portal.md
Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you qui
This quickstart assumes a basic understanding of Kubernetes concepts. For more information, see [Kubernetes core concepts for Azure Kubernetes Service (AKS)](../concepts-clusters-workloads.md). -- [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
+- [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
- If you're unfamiliar with the Azure Cloud Shell, review [Overview of Azure Cloud Shell](/azure/cloud-shell/overview). - Make sure that the identity you're using to create your cluster has the appropriate minimum permissions. For more details on access and identity for AKS, see [Access and identity options for Azure Kubernetes Service (AKS)](../concepts-identity.md).
aks Quick Windows Container Deploy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-windows-container-deploy-powershell.md
Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you qui
This quickstart assumes a basic understanding of Kubernetes concepts. For more information, see [Kubernetes core concepts for Azure Kubernetes Service (AKS)](../concepts-clusters-workloads.md). -- [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
+- [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
- For ease of use, try the PowerShell environment in [Azure Cloud Shell](/azure/cloud-shell/overview). For more information, see [Quickstart for Azure Cloud Shell](/azure/cloud-shell/quickstart). If you want to use PowerShell locally, then install the [Az PowerShell](/powershell/azure/new-azureps-module-az) module and connect to your Azure account using the [Connect-AzAccount](/powershell/module/az.accounts/Connect-AzAccount) cmdlet. Make sure that you run the commands with administrative privileges. For more information, see [Install Azure PowerShell][install-azure-powershell].
aks Long Term Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/long-term-support.md
To carry out an in-place upgrade to the latest LTS version, you need to specify
```azurecli
az aks upgrade --resource-group myResourceGroup --name myAKSCluster --kubernetes-version 1.32.2
```
-> [!NOTE]
->If you use any programming or scripting logic to list and select a minor version of Kubernetes before creating clusters with the `ListKubernetesVersions` API, note that starting from Kubernetes v1.27, the API returns `SupportPlan` as `[KubernetesOfficial, AKSLongTermSupport]`. Make sure you update any such logic to exclude `AKSLongTermSupport` versions and choose `KubernetesOfficial` support plan versions to avoid any breaks. Otherwise, if LTS is indeed your path forward, first opt in to the Premium tier and select the `AKSLongTermSupport` support plan versions from the `ListKubernetesVersions` API before creating clusters.
- > [!NOTE] > The next Long Term Support version after 1.27 is to be determined. However, customers will get a minimum of six months of overlap between 1.27 LTS and the next LTS version to plan upgrades. > Kubernetes 1.32.2 is used as an example version in this article. Check the [AKS release tracker](release-tracker.md) for available Kubernetes releases.
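For the version-selection logic described in the note above, a minimal Azure CLI sketch might look like the following. The region and the JMESPath expression are assumptions to verify against your CLI version, not part of the original article.

```azurecli
# List minor versions whose support plan includes KubernetesOfficial for a given region
# (assumes the get-versions JSON exposes capabilities.supportPlan per version)
az aks get-versions --location eastus \
    --query "values[?contains(capabilities.supportPlan, 'KubernetesOfficial')].version" \
    --output tsv
```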
aks Use Kms Etcd Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-kms-etcd-encryption.md
description: Learn how to use Key Management Service (KMS) etcd encryption with
Previously updated : 06/19/2024 Last updated : 06/26/2024 # Add Key Management Service etcd encryption to an Azure Kubernetes Service cluster
After you change the key ID (including changing either the key name or the key v
> [!WARNING] > Remember to update all secrets after key rotation. If you don't update all secrets, the secrets are inaccessible if the keys that were created earlier don't exist or no longer work. >
-> After you rotate the key, the previous key (key1) is still cached and shouldn't be deleted. If you want to delete the previous key (key1) immediately, you need to rotate the key twice. Then key2 and key3 are cached, and key1 can be deleted without affecting the existing cluster.
+> KMS uses two keys at the same time. After the first key rotation, make sure that both the old and new keys are valid (not expired) until the next key rotation. After the second key rotation, the oldest key can be safely removed or expired.
```azurecli-interactive
az aks update --name myAKSCluster --resource-group MyResourceGroup --enable-azure-keyvault-kms --azure-keyvault-kms-key-vault-network-access "Public" --azure-keyvault-kms-key-id $NEW_KEY_ID
```
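A rotation typically starts by creating a new version of the key and capturing its identifier for the update above. A hedged sketch, assuming a hypothetical vault and key name:

```azurecli
# Hypothetical vault/key names; creating a key with an existing name adds a new version
NEW_KEY_ID=$(az keyvault key create --vault-name MyKMSVault --name MyKMSKey --query 'key.kid' --output tsv)
echo "$NEW_KEY_ID"
```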
aks Workload Identity Deploy Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/workload-identity-deploy-cluster.md
This article assumes you have a basic understanding of Kubernetes concepts. For
## Prerequisites
-* [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+* [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
* This article requires version 2.47.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed. * Make sure that the identity that you're using to create your cluster has the appropriate minimum permissions. For more information about access and identity for AKS, see [Access and identity options for Azure Kubernetes Service (AKS)][aks-identity-concepts]. * If you have multiple Azure subscriptions, select the appropriate subscription ID in which the resources should be billed using the [az account set][az-account-set] command.
analysis-services Analysis Services Create Bicep File https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-create-bicep-file.md
This quickstart describes how to create an Analysis Services server resource in your Azure subscription by using [Bicep](../azure-resource-manager/bicep/overview.md). ## Prerequisites
analysis-services Analysis Services Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-create-powershell.md
This quickstart describes using PowerShell from the command line to create an Az
## Prerequisites - **Azure subscription**: Visit [Azure Free Trial](https://azure.microsoft.com/offers/ms-azr-0044p/) to create an account. - **Microsoft Entra ID**: Your subscription must be associated with a Microsoft Entra tenant and you must have an account in that directory. To learn more, see [Authentication and user permissions](analysis-services-manage-users.md).
analysis-services Analysis Services Create Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-create-template.md
This quickstart describes how to create an Analysis Services server resource in your Azure subscription by using an Azure Resource Manager template (ARM template). If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
analysis-services Analysis Services Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-logging.md
This article describes how to set up, view, and manage [Azure Monitor resource l
![Resource logging to Storage, Event Hubs, or Azure Monitor logs](./media/analysis-services-logging/aas-logging-overview.png) ## What's logged?
analysis-services Analysis Services Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-powershell.md
This article describes PowerShell cmdlets used to perform Azure Analysis Service
Server resource management tasks like creating or deleting a server, suspending or resuming server operations, or changing the service level (tier) use Azure Analysis Services cmdlets. Other tasks for managing databases like adding or removing role members, processing, or partitioning use cmdlets included in the same SqlServer module as SQL Server Analysis Services. ## Permissions
analysis-services Analysis Services Scale Out https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-scale-out.md
Return status codes:
### PowerShell Before using PowerShell, [install or update the latest Azure PowerShell module](/powershell/azure/install-azure-powershell).
analysis-services Analysis Services Server Admins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-server-admins.md
If server firewall is enabled, server administrator client computer IP addresses
## PowerShell Use the [New-AzAnalysisServicesServer](/powershell/module/az.analysisservices/new-azanalysisservicesserver) cmdlet to specify the Administrator parameter when creating a new server. <br> Use the [Set-AzAnalysisServicesServer](/powershell/module/az.analysisservices/set-azanalysisservicesserver) cmdlet to modify the Administrator parameter for an existing server.
analysis-services Analysis Services Service Principal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-service-principal.md
Service principal appID and password or certificate can be used in connection st
### PowerShell #### <a name="azmodule"></a>Using Az.AnalysisServices module
api-center Set Up Api Center Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/set-up-api-center-arm-template.md
[!INCLUDE [quickstart-intro](includes/quickstart-intro.md)] If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
If your environment meets the prerequisites and you're familiar with using ARM t
[!INCLUDE [include](~/reusable-content/azure-cli/azure-cli-prepare-your-environment-no-header.md)] * For Azure PowerShell:
- [!INCLUDE [azure-powershell-requirements-no-header.md](../../includes/azure-powershell-requirements-no-header.md)]
+ [!INCLUDE [azure-powershell-requirements-no-header.md](~/reusable-content/ce-skilling/azure/includes/azure-powershell-requirements-no-header.md)]
## Review the template
api-center Set Up Api Center Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/set-up-api-center-bicep.md
[!INCLUDE [quickstart-intro](includes/quickstart-intro.md)] [!INCLUDE [quickstart-prerequisites](includes/quickstart-prerequisites.md)]
[!INCLUDE [include](~/reusable-content/azure-cli/azure-cli-prepare-your-environment-no-header.md)] * For Azure PowerShell:
- [!INCLUDE [azure-powershell-requirements-no-header.md](../../includes/azure-powershell-requirements-no-header.md)]
+ [!INCLUDE [azure-powershell-requirements-no-header.md](~/reusable-content/ce-skilling/azure/includes/azure-powershell-requirements-no-header.md)]
## Review the Bicep file
api-management Api Management Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-capacity.md
To follow the steps in this article, you must have:
+ An active Azure subscription.
- [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+ [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
+ An API Management instance. For more information, see [Create an Azure API Management instance](get-started-create-service-instance.md).
api-management Api Management Howto Ca Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-ca-certificates.md
The article shows how to manage CA certificates of an Azure API Management servi
CA certificates uploaded to API Management can only be used for certificate validation by the managed API Management gateway. If you use the [self-hosted gateway](self-hosted-gateway-overview.md), learn how to [create a custom CA for self-hosted gateway](#create-custom-ca-for-self-hosted-gateway), later in this article. ## <a name="step1"> </a>Upload a CA certificate
api-management Api Management Howto Configure Custom Domain Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-configure-custom-domain-gateway.md
To perform the steps described in this article, you must have:
- An active Azure subscription.
- [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+ [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
- An API Management instance. For more information, see [Create an Azure API Management instance](get-started-create-service-instance.md). - A self-hosted gateway. For more information, see [How to provision self-hosted gateway](api-management-howto-provision-self-hosted-gateway.md)
api-management Api Management Howto Disaster Recovery Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-disaster-recovery-backup-restore.md
This article shows how to automate backup and restore operations of your API Man
> The restore operation doesn't change the custom hostname configuration of the target service. We recommend using the same custom hostname and TLS certificate for both the active and standby services so that, after the restore operation completes, traffic can be redirected to the standby instance by a simple DNS CNAME change. ## Prerequisites
api-management Api Management Howto Integrate Internal Vnet Appgateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-integrate-internal-vnet-appgateway.md
For architectural guidance, see:
## Prerequisites To follow the steps described in this article, you must have: * An active Azure subscription
- [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+ [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
* Certificates - Personal Information Exchange (PFX) files for API Management's custom host names: gateway, developer portal, and management endpoint.
api-management Api Management Howto Mutual Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-mutual-certificates.md
Using key vault certificates is recommended because it helps improve API Managem
## Prerequisites * If you have not created an API Management service instance yet, see [Create an API Management service instance](get-started-create-service-instance.md). * You should have your backend service configured for client certificate authentication. To configure certificate authentication in the Azure App Service, refer to [this article][to configure certificate authentication in Azure WebSites refer to this article].
api-management Api Management Howto Use Managed Service Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-use-managed-service-identity.md
To set up a managed identity in the Azure portal, you'll first create an API Man
### Azure PowerShell The following steps walk you through creating an API Management instance and assigning it an identity by using Azure PowerShell.
To set up a managed identity in the portal, you'll first create an API Managemen
### Azure PowerShell The following steps walk you through creating an API Management instance and assigning it an identity by using Azure PowerShell.
api-management Api Management Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-role-based-access-control.md
Azure API Management relies on Azure role-based access control (Azure RBAC) to enable fine-grained access management for API Management services and entities (for example, APIs and policies). This article gives you an overview of the built-in and custom roles in API Management. For more information on access management in the Azure portal, see [Get started with access management in the Azure portal](../role-based-access-control/overview.md). ## Built-in service roles
api-management Api Management Using With Internal Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-using-with-internal-vnet.md
Use API Management in internal mode to:
For configurations specific to the *external* mode, where the API Management endpoints are accessible from the public internet, and backend services are located in the network, see [Deploy your Azure API Management instance to a virtual network - external mode](api-management-using-with-vnet.md). [!INCLUDE [api-management-virtual-network-prerequisites](../../includes/api-management-virtual-network-prerequisites.md)]
api-management Api Management Using With Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-using-with-vnet.md
This article explains how to set up VNet connectivity for your API Management in
For configurations specific to the *internal* mode, where the endpoints are accessible only within the VNet, see [Deploy your Azure API Management instance to a virtual network - internal mode](./api-management-using-with-internal-vnet.md). [!INCLUDE [api-management-virtual-network-prerequisites](../../includes/api-management-virtual-network-prerequisites.md)]
api-management Credentials How To User Delegated https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/credentials-how-to-user-delegated.md
In this scenario, you configure a managed [connection](credentials-overview.md)
- A running API Management instance. If you need to, [create an Azure API Management instance](get-started-create-service-instance.md). - A backend OAuth 2.0 API that you want to access on behalf of the user or group. ## Step 1: Provision Azure API Management Data Plane service principal
api-management Get Started Create Service Instance Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/get-started-create-service-instance-cli.md
This quickstart describes the steps for creating a new API Management instance b
[!INCLUDE [api-management-quickstart-intro](../../includes/api-management-quickstart-intro.md)] [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
api-management Get Started Create Service Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/get-started-create-service-instance.md
This quickstart describes the steps for creating a new API Management instance u
[!INCLUDE [api-management-quickstart-intro](../../includes/api-management-quickstart-intro.md)] ## Sign in to Azure
api-management Graphql Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/graphql-api.md
If you want to import a GraphQL schema and set up field resolvers using REST or
- Azure PowerShell
- [!INCLUDE [azure-powershell-requirements-no-header](../../includes/azure-powershell-requirements-no-header.md)]
+ [!INCLUDE [azure-powershell-requirements-no-header](~/reusable-content/ce-skilling/azure/includes/azure-powershell-requirements-no-header.md)]
## Add a GraphQL API
api-management Import Api From Oas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/import-api-from-oas.md
In this article, you learn how to:
* Azure PowerShell
- [!INCLUDE [azure-powershell-requirements-no-header](../../includes/azure-powershell-requirements-no-header.md)]
+ [!INCLUDE [azure-powershell-requirements-no-header](~/reusable-content/ce-skilling/azure/includes/azure-powershell-requirements-no-header.md)]
## <a name="create-api"> </a>Import a backend API
api-management Import Soap Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/import-soap-api.md
In this article, you learn how to:
* Azure PowerShell
- [!INCLUDE [azure-powershell-requirements-no-header](../../includes/azure-powershell-requirements-no-header.md)]
+ [!INCLUDE [azure-powershell-requirements-no-header](~/reusable-content/ce-skilling/azure/includes/azure-powershell-requirements-no-header.md)]
api-management Powershell Create Service Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/powershell-create-service-instance.md
In this quickstart, you create a new API Management instance by using Azure Powe
[!INCLUDE [api-management-quickstart-intro](../../includes/api-management-quickstart-intro.md)] ## Prerequisites ## Create resource group
api-management Quickstart Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/quickstart-arm-template.md
This quickstart describes how to use an Azure Resource Manager template (ARM tem
[!INCLUDE [api-management-quickstart-intro](../../includes/api-management-quickstart-intro.md)] If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
api-management Quickstart Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/quickstart-bicep.md
This quickstart describes how to use a Bicep file to create an Azure API Managem
[!INCLUDE [api-management-quickstart-intro](../../includes/api-management-quickstart-intro.md)] ## Prerequisites
This quickstart describes how to use a Bicep file to create an Azure API Managem
- For Azure PowerShell:
- [!INCLUDE [azure-powershell-requirements-no-header](../../includes/azure-powershell-requirements-no-header.md)]
+ [!INCLUDE [azure-powershell-requirements-no-header](~/reusable-content/ce-skilling/azure/includes/azure-powershell-requirements-no-header.md)]
## Review the Bicep file
api-management Quickstart Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/quickstart-terraform.md
In this article, you learn how to:
- For Azure PowerShell:
- [!INCLUDE [azure-powershell-requirements-no-header](../../includes/azure-powershell-requirements-no-header.md)]
+ [!INCLUDE [azure-powershell-requirements-no-header](~/reusable-content/ce-skilling/azure/includes/azure-powershell-requirements-no-header.md)]
## Implement the Terraform code
api-management V2 Service Tiers Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/v2-service-tiers-overview.md
The v2 tiers are available in the following regions:
* France Central * Germany West Central * North Europe
+* West Europe
+* UK South
+* UK West
* Central India
+* Brazil South
+* Australia Central
* Australia East * Australia Southeast * East Asia
api-management Vscode Create Service Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/vscode-create-service-instance.md
This quickstart describes the steps to create a new API Management instance usin
## Prerequisites Also, ensure you've installed the following:
app-service App Service Configure Premium Tier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-configure-premium-tier.md
az appservice plan create \
### Azure PowerShell The following command creates an App Service plan in _P1V3_. The options for `-WorkerSize` are _Small_, _Medium_, and _Large_.
app-service App Service Sql Asp Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-sql-asp-github-actions.md
In this tutorial, you learn how to:
> - Use a GitHub Actions workflow to add resources to Azure with an Azure Resource Manager template (ARM template) > - Use a GitHub Actions workflow to build an ASP.NET Core application ## Prerequisites
app-service App Service Sql Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-sql-github-actions.md
In this tutorial, you learn how to:
> - Use a GitHub Actions workflow to add resources to Azure with an Azure Resource Manager template (ARM template) > - Use a GitHub Actions workflow to build a container with the latest web app changes ## Prerequisites
app-service App Service Web App Cloning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-web-app-cloning.md
# Azure App Service App Cloning Using PowerShell With the release of Microsoft Azure PowerShell version 1.1.0, a new option has been added to `New-AzWebApp` that lets you clone an existing App Service app to a newly created app in the same or a different region. This option enables customers to deploy a number of apps across different regions quickly and easily.
app-service App Service Web Tutorial Dotnet Sqldatabase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-web-tutorial-dotnet-sqldatabase.md
In this tutorial, you learn how to:
> * Update the data model and redeploy the app > * Stream logs from Azure to your terminal ## Prerequisites
You can keep the generated web app name, or change it to another unique name (va
#### Create a resource group 1. Next to **Resource Group**, click **New**.
app-service App Service Web Tutorial Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-web-tutorial-rest-api.md
In this tutorial, you learn how to:
You can follow the steps in this tutorial on macOS, Linux, or Windows. ## Prerequisites
In this step, you set up the local ASP.NET Core project. App Service supports th
1. To stop ASP.NET Core at any time, press `Ctrl+C` in the terminal. ## Deploy app to Azure
In this step, you deploy your .NET Core application to App Service.
### Create a resource group ### Create an App Service plan
app-service Configure Custom Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-custom-container.md
When persistent storage is disabled, then writes to the `C:\home` directory aren
The only exception is the `C:\home\LogFiles` directory, which is used to store the container and application logs. This folder always persists upon app restarts if [application logging is enabled](troubleshoot-diagnostic-logs.md?#enable-application-logging-windows) with the **File System** option, regardless of whether persistent storage is enabled or disabled. In other words, enabling or disabling persistent storage doesn't affect the application logging behavior.
-By default, persistent storage is *disabled* on Windows custom containers. To enable it, set the `WEBSITES_ENABLE_APP_SERVICE_STORAGE` app setting value to `true` via the [Cloud Shell](https://shell.azure.com). In Bash:
+By default, persistent storage is *enabled* on Windows custom containers. To disable it, set the `WEBSITES_ENABLE_APP_SERVICE_STORAGE` app setting value to `false` via the [Cloud Shell](https://shell.azure.com). In Bash:
```azurecli-interactive
-az webapp config appsettings set --resource-group <group-name> --name <app-name> --settings WEBSITES_ENABLE_APP_SERVICE_STORAGE=true
+az webapp config appsettings set --resource-group <group-name> --name <app-name> --settings WEBSITES_ENABLE_APP_SERVICE_STORAGE=false
```

In PowerShell:

```azurepowershell-interactive
-Set-AzWebApp -ResourceGroupName <group-name> -Name <app-name> -AppSettings @{"WEBSITES_ENABLE_APP_SERVICE_STORAGE"=true}
+Set-AzWebApp -ResourceGroupName <group-name> -Name <app-name> -AppSettings @{"WEBSITES_ENABLE_APP_SERVICE_STORAGE"=false}
```

::: zone-end
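To confirm the change took effect, one hedged option (placeholder group and app names) is to read the setting back:

```azurecli
# Placeholder names; prints the current value of the storage setting
az webapp config appsettings list --resource-group <group-name> --name <app-name> \
    --query "[?name=='WEBSITES_ENABLE_APP_SERVICE_STORAGE'].value" --output tsv
```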
app-service Deploy Local Git https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-local-git.md
This how-to guide shows you how to deploy your app to [Azure App Service](overvi
To follow the steps in this how-to guide: -- [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+- [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
- [Install Git](https://www.git-scm.com/downloads).
app-service Deploy Zip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-zip.md
This article shows you how to deploy your code as a ZIP, WAR, JAR, or EAR packag
To complete the steps in this article, [create an App Service app](./index.yml), or use an app that you created for another tutorial. [!INCLUDE [Create a project ZIP file](../../includes/app-service-web-deploy-zip-prepare.md)]
app-service Side By Side Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/side-by-side-migrate.md
description: Learn how to migrate your App Service Environment v2 to App Service
Previously updated : 6/13/2024 Last updated : 6/26/2024 # Migration to App Service Environment v3 using the side-by-side migration feature
App Service can automate migration of your App Service Environment v1 and v2 to
The side-by-side migration feature automates your migration to App Service Environment v3. The side-by-side migration feature creates a new App Service Environment v3 with all of your apps in a different subnet. Your existing App Service Environment isn't deleted until you initiate its deletion at the end of the migration process. Because of this process, there's a rollback option if you need to cancel your migration. This migration option is best for customers who want to migrate to App Service Environment v3 with zero downtime and can support using a different subnet for their new environment. If you need to use the same subnet and can support about one hour of application downtime, see the [in-place migration feature](migrate.md). For manual migration options that allow you to migrate at your own pace, see [manual migration options](migration-alternatives.md). > [!IMPORTANT]
-> It is recommended to use this feature for dev environments first before migrating any production environments to ensure there are no unexpected issues. Please provide any feedback related to this article or the feature using the buttons at the bottom of the page.
+> If you fail to complete all steps described in this tutorial, you'll experience downtime. For example, if you don't update all dependent resources with the new IP addresses, or if you don't allow access to or from your new subnet (as is the case for your custom domain suffix key vault), you'll experience downtime until that's addressed.
+>
+> It's recommended that you use this feature on dev environments first, before migrating any production environments, to rehearse the process and ensure there are no unexpected issues. Please provide any feedback related to this article or the feature using the buttons at the bottom of the page.
> ## Supported scenarios
For related commands to check if your subscription or resource group has locks,
If your existing App Service Environment uses a custom domain suffix, you need to [configure one for your new App Service Environment v3 resource during the migration process](#add-a-custom-domain-suffix-optional). Migration fails if you don't configure a custom domain suffix and are using one currently. For more information on App Service Environment v3 custom domain suffixes, including requirements, step-by-step instructions, and best practices, see [Custom domain suffix for App Service Environments](./how-to-custom-domain-suffix.md). > [!NOTE]
-> If you're configuring a custom domain suffix, when you're adding the network permissions on your Azure key vault, be sure that your key vault allows access from your App Service Environment v3's new subnet. If you're accessing your key vault using a private endpoint, ensure you've configured private access correctly with the new subnet.
+> If you're configuring a custom domain suffix, when you're adding the network permissions on your Azure key vault, be sure that your key vault allows access from your App Service Environment v3's new subnet. If you're accessing your key vault using a private endpoint, ensure you've configured private access correctly with the new subnet. You'll experience downtime if you fail to set this access correctly before migration.
> You can make your new App Service Environment v3 zone redundant if your existing environment is in a [region that supports zone redundancy](./overview.md#regions). Zone redundancy can be configured by setting the `zoneRedundant` property to `true`. Zone redundancy is an optional configuration. This configuration can only be set during the creation of your new App Service Environment v3 and can't be removed at a later time.
app-service Using https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/using.md
Title: Use an App Service Environment
description: Learn how to use your App Service Environment to host isolated applications. Previously updated : 03/27/2023 Last updated : 06/26/2024
Every App Service app runs in an App Service plan. App Service Environments hold
When you scale an App Service plan, the needed infrastructure is added automatically. Be aware that there's a time delay to scale operations while the infrastructure is being added. For example, when you scale an App Service plan, and you have another scale operation of the same operating system and size running, there might be a delay of a few minutes until the requested scale starts.
-A scale operation on one size and operating system won't affect scaling of the other combinations of size and operating system. For example, if you are scaling a Windows I2v2 App Service plan, a scale operation to a Windows I3v2 App Service plan starts immediately. Scaling normally takes less than 15 minutes.
+A scale operation on one size and operating system won't affect scaling of the other combinations of size and operating system. For example, if you are scaling a Windows I2v2 App Service plan, a scale operation to a Windows I3v2 App Service plan starts immediately. Scaling normally takes less than 15 minutes but can take up to 45 minutes.
In a multi-tenant App Service, scaling is immediate, because a pool of shared resources is readily available to support it. App Service Environment is a single-tenant service, so there's no shared buffer, and resources are allocated based on need.
app-service Manage Scale Per App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/manage-scale-per-app.md
# High-density hosting on Azure App Service using per-app scaling When using App Service, you can scale your apps by scaling the [App Service plan](overview-hosting-plans.md) they run on. When multiple apps are running in the same App Service plan, each scaled-out instance runs all the apps in the plan.
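Per-app scaling changes this behavior so that individual apps can run on fewer workers than the plan is scaled to. As a sketch under assumed names (plan, resource group, and SKU are hypothetical), per-app scaling is enabled on the plan at creation time:

```azurecli
# Hypothetical names; create a plan with per-app (per-site) scaling enabled
az appservice plan create --name MyPlan --resource-group MyResourceGroup --sku S1 --per-site-scaling
```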
app-service Provision Resource Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/provision-resource-bicep.md
Bicep is a domain-specific language (DSL) that uses declarative syntax to deploy
## Prerequisites To effectively create resources with Bicep, you'll need to set up a Bicep [development environment](../azure-resource-manager/bicep/install.md). The Bicep extension for [Visual Studio Code](https://code.visualstudio.com/) provides language support and resource autocompletion. The extension helps you create and validate Bicep files and is recommended for developers who will create resources by using Bicep after completing this quickstart.
app-service Quickstart Multi Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-multi-container.md
![Sample multi-container app on Web App for Containers][1] [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
cd multicontainerwordpress
## Create a resource group In the Cloud Shell, create a resource group with the [`az group create`](/cli/azure/group#az-group-create) command. The following example creates a resource group named *myResourceGroup* in the *South Central US* location. To see all supported locations for App Service on Linux in **Standard** tier, run the [`az appservice list-locations --sku S1 --linux-workers-enabled`](/cli/azure/appservice#az-appservice-list-locations) command.
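A sketch of the described command, using the names given above:

```azurecli
# Creates a resource group named myResourceGroup in the South Central US region
az group create --name myResourceGroup --location southcentralus
```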
app-service Reference App Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/reference-app-settings.md
For more information on custom containers, see [Run a custom container in Azure]
| Setting name| Description | Example | |-|-|-|
-| `WEBSITES_ENABLE_APP_SERVICE_STORAGE` | Set to `true` to enable the `/home` directory to be shared across scaled instances. The default is `false` for custom containers. ||
+| `WEBSITES_ENABLE_APP_SERVICE_STORAGE` | For Linux custom containers: set to `true` to enable the `/home` directory to be shared across scaled instances. The default is `false` for Linux custom containers.<br/><br/>For Windows containers: set to `true` to enable the `c:\home` directory to be shared across scaled instances. The default is `true` for Windows containers.||
| `WEBSITES_CONTAINER_START_TIME_LIMIT` | Amount of time in seconds to wait for the container to complete start-up before restarting the container. Default is `230`. You can increase it up to the maximum of `1800`. || | `WEBSITES_CONTAINER_STOP_TIME_LIMIT` | Amount of time in seconds to wait for the container to terminate gracefully. Default is `5`. You can increase to a maximum of `120` || | `DOCKER_REGISTRY_SERVER_URL` | URL of the registry server, when running a custom container in App Service. For security, this variable isn't passed on to the container. | `https://<server-name>.azurecr.io` |
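These container settings are ordinary app settings, so a hedged sketch (placeholder group and app names) of raising the start-time limit looks like:

```azurecli
# Placeholder names; give a slow-starting custom container up to 10 minutes before restart
az webapp config appsettings set --resource-group <group-name> --name <app-name> \
    --settings WEBSITES_CONTAINER_START_TIME_LIMIT=600
```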
app-service Cli Continuous Deployment Vsts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/cli-continuous-deployment-vsts.md
This sample script creates an app in App Service with its related resources, and
* An Azure DevOps repository with application code, that you have administrative permissions for. * A [Personal Access Token (PAT)](/azure/devops/organizations/accounts/use-personal-access-tokens-to-authenticate) for your Azure DevOps organization. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)] ## Sample script ### To create the web app
az webapp deployment source config --name $webapp --resource-group $resourceGrou
## Clean up resources

```azurecli
az group delete --name $resourceGroup
```
app-service Cli Linux Acr Aspnetcore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/cli-linux-acr-aspnetcore.md
This sample script creates a resource group, a Linux App Service plan, and an app. It then deploys an ASP.NET Core application using a Docker Container from the Azure Container Registry. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
This sample script creates a resource group, a Linux App Service plan, and an ap
## Clean up resources

```azurecli
az group delete --name $resourceGroup
```
app-service Powershell Backup Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/powershell-backup-delete.md
To run this script, you need an existing backup for a web app. To create one, se
## Sample script [!code-azurepowershell-interactive[main](../../../powershell_scripts/app-service/backup-delete/backup-delete.ps1?highlight=1-2,11 "Delete a backup for a web app")]
app-service Powershell Backup Onetime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/powershell-backup-onetime.md
If needed, install the Azure PowerShell using the instruction found in the [Azur
## Sample script [!code-azurepowershell-interactive[main](../../../powershell_scripts/app-service/backup-onetime/backup-onetime.ps1?highlight=1-5 "Back up a web app")]
app-service Powershell Backup Restore Diff Sub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/powershell-backup-restore-diff-sub.md
If needed, install the Azure PowerShell using the instruction found in the [Azur
## Sample script [!code-azurepowershell-interactive[main](../../../powershell_scripts/app-service/backup-restore-diff-sub/backup-restore-diff-sub.ps1?highlight=1-6 "Restore a web app from a backup in another subscription")]
app-service Powershell Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/powershell-backup-restore.md
If needed, install the Azure PowerShell using the instruction found in the [Azur
## Sample script [!code-azurepowershell-interactive[main](../../../powershell_scripts/app-service/backup-restore/backup-restore.ps1?highlight=1-2 "Restore a web app from a backup")]
app-service Powershell Backup Scheduled https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/powershell-backup-scheduled.md
If needed, install the Azure PowerShell using the instruction found in the [Azur
## Sample script [!code-azurepowershell-interactive[main](../../../powershell_scripts/app-service/backup-scheduled/backup-scheduled.ps1?highlight=1-4 "Create a scheduled backup for a web app")]
app-service Powershell Configure Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/powershell-configure-custom-domain.md
If needed, install the Azure PowerShell using the instruction found in the [Azur
## Sample script [!code-azurepowershell-interactive[main](../../../powershell_scripts/app-service/map-custom-domain/map-custom-domain.ps1?highlight=1 "Assign a custom domain to a web app")]
app-service Powershell Configure Ssl Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/powershell-configure-ssl-certificate.md
If needed, install the Azure PowerShell using the instruction found in the [Azur
## Sample script [!code-azurepowershell-interactive[main](../../../powershell_scripts/app-service/configure-ssl-certificate/configure-ssl-certificate.ps1?highlight=1-3 "Bind a custom TLS/SSL certificate to a web app")]
app-service Powershell Connect To Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/powershell-connect-to-sql.md
If needed, install the Azure PowerShell using the instruction found in the [Azur
## Sample script [!code-azurepowershell-interactive[main](../../../powershell_scripts/app-service/connect-to-sql/connect-to-sql.ps1?highlight=13 "Connect an app to SQL Database")]
app-service Powershell Connect To Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/powershell-connect-to-storage.md
If needed, install the Azure PowerShell using the instruction found in the [Azur
## Sample script [!code-azurepowershell-interactive[main](../../../powershell_scripts/app-service/connect-to-storage/connect-to-storage.ps1 "Connect an app to a storage account")]
app-service Powershell Continuous Deployment Github https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/powershell-continuous-deployment-github.md
If needed, install the Azure PowerShell using the instruction found in the [Azur
## Sample script [!code-azurepowershell-interactive[main](../../../powershell_scripts/app-service/deploy-github-continuous/deploy-github-continuous.ps1?highlight=1-2 "Create a web app with continuous deployment from GitHub")]
app-service Powershell Deploy Ftp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/powershell-deploy-ftp.md
If needed, install the Azure PowerShell using the instruction found in the [Azur
## Sample script [!code-azurepowershell-interactive[main](../../../powershell_scripts/app-service/deploy-ftp/deploy-ftp.ps1?highlight=1 "Upload files to a web app using FTP")]
app-service Powershell Deploy Github https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/powershell-deploy-github.md
If needed, install the Azure PowerShell using the instruction found in the [Azur
## Sample script [!code-azurepowershell-interactive[main](../../../powershell_scripts/app-service/deploy-github/deploy-github.ps1?highlight=1-2 "Create a web app and deploy code from GitHub")]
app-service Powershell Deploy Local Git https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/powershell-deploy-local-git.md
If needed, update to the latest Azure PowerShell using the instruction found in
## Sample script [!code-azurepowershell-interactive[main](../../../powershell_scripts/app-service/deploy-local-git/deploy-local-git.ps1?highlight=1 "Create a web app and deploy code from a local Git repository")]
app-service Powershell Deploy Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/powershell-deploy-private-endpoint.md
This sample script creates an app in App Service with its related resources, and then deploys a Private Endpoint. ## Sample script
app-service Powershell Deploy Staging Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/powershell-deploy-staging-environment.md
If needed, install the Azure PowerShell using the instruction found in the [Azur
## Sample script [!code-azurepowershell-interactive[main](../../../powershell_scripts/app-service/deploy-deployment-slot/deploy-deployment-slot.ps1?highlight=1 "Create a web app and deploy code to a staging environment")]
app-service Powershell Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/powershell-monitor.md
If needed, install the Azure PowerShell using the instruction found in the [Azur
## Sample script [!code-azurepowershell-interactive[main](../../../powershell_scripts/app-service/monitor-with-logs/monitor-with-logs.ps1 "Monitor a web app with web server logs")]
app-service Powershell Scale High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/powershell-scale-high-availability.md
If needed, install the Azure PowerShell using the instruction found in the [Azur
## Sample script [!code-azurepowershell-interactive[main](../../../powershell_scripts/app-service/scale-geographic/scale-geographic.ps1 "Scale a web app worldwide with a high-availability architecture")]
app-service Powershell Scale Manual https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/powershell-scale-manual.md
If needed, install the Azure PowerShell using the instruction found in the [Azur
## Sample script [!code-azurepowershell-interactive[main](../../../powershell_scripts/app-service/scale-manual/scale-manual.ps1 "Scale a web app manually")]
app-service Template Deploy Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/template-deploy-private-endpoint.md
In this quickstart, you use an Azure Resource Manager (ARM) template to create a web app and expose it with a private endpoint. ## Prerequisite
app-service Troubleshoot Domain Ssl Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/troubleshoot-domain-ssl-certificates.md
When you set up a domain or TLS/SSL certificate for your web apps in Azure App S
At any point in this article, you can get more help by contacting Azure experts on the [Microsoft Q & A and Stack Overflow forums](https://azure.microsoft.com/support/forums/). Alternatively, to file an Azure support incident, go to the [Azure Support site](https://azure.microsoft.com/support/options/), and select **Get Support**. ## Certificate problems
app-service Tutorial Auth Aad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-auth-aad.md
Before your source code is executed on the frontend, the App Service injects the
## Prerequisites - [Node.js (LTS)](https://nodejs.org/download/) [!INCLUDE [azure-cli-prepare-your-environment-no-header.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment-no-header.md)]
app-service Tutorial Connect App Access Sql Database As User Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-app-access-sql-database-as-user-dotnet.md
What you will learn:
> [!NOTE] >Microsoft Entra authentication is _different_ from [Integrated Windows authentication](/previous-versions/windows/it-pro/windows-server-2003/cc758557(v=ws.10)) in on-premises Active Directory (AD DS). AD DS and Microsoft Entra ID use completely different authentication protocols. For more information, see [Microsoft Entra Domain Services documentation](../active-directory-domain-services/index.yml). ## Prerequisites
If you haven't already, follow one of the two tutorials first. Alternatively, yo
Prepare your environment for the Azure CLI. <a name='1-configure-database-server-with-azure-ad-authentication'></a>
app-service Tutorial Connect Msi Azure Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-msi-azure-database.md
What you will learn:
> * Connect to the Azure database from your code (.NET Framework 4.8, .NET 6, Node.js, Python, Java) using a managed identity. > * Connect to the Azure database from your development environment using the Microsoft Entra user. ## Prerequisites
app-service Tutorial Connect Msi Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-msi-sql-database.md
What you will learn:
> [!NOTE] >Microsoft Entra authentication is _different_ from [Integrated Windows authentication](/previous-versions/windows/it-pro/windows-server-2003/cc758557(v=ws.10)) in on-premises Active Directory (AD DS). AD DS and Microsoft Entra ID use completely different authentication protocols. For more information, see [Microsoft Entra Domain Services documentation](../active-directory-domain-services/index.yml). ## Prerequisites
app-service Tutorial Custom Container Sidecar https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-custom-container-sidecar.md
For more information about sidecars, see [Sidecar pattern](/azure/architecture/p
> [!NOTE] > For the preview period, sidecar support must be enabled at app creation. There's currently no way to enable sidecar support for an existing app. ## 1. Set up the needed resources
app-service Tutorial Java Spring Cosmosdb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-java-spring-cosmosdb.md
In this tutorial, you learn how to:
> * Stream diagnostic logs from App Service > * Add additional instances to scale out the sample app ## Prerequisites
app-service Tutorial Java Tomcat Connect Managed Identity Postgresql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-java-tomcat-connect-managed-identity-postgresql-database.md
> * Configure a Tomcat web application to use Microsoft Entra authentication with PostgreSQL Database. > * Connect to PostgreSQL Database with Managed Identity using Service Connector. ## Prerequisites
app-service Tutorial Multi Container App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-multi-container-app.md
In this tutorial, you learn how to:
> * Connect to Azure Database for MySQL > * Troubleshoot errors ## Prerequisites
cd multicontainerwordpress
## Create a resource group In Cloud Shell, create a resource group with the [`az group create`](/cli/azure/group#az-group-create) command. The following example creates a resource group named *myResourceGroup* in the *South Central US* location. To see all supported locations for App Service on Linux in **Standard** tier, run the [`az appservice list-locations --sku S1 --linux-workers-enabled`](/cli/azure/appservice#az-appservice-list-locations) command.
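The location-listing command quoted in this paragraph can be run as-is; table output is an assumption added for readability:

```azurecli
# Lists regions where Standard-tier Linux workers are available
az appservice list-locations --sku S1 --linux-workers-enabled --output table
```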
app-service Tutorial Multi Region App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-multi-region-app.md
What you'll learn:
## Prerequisites To complete this tutorial:
app-service Tutorial Php Mysql App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-php-mysql-app.md
This tutorial shows how to create a secure PHP app in Azure App Service that's c
:::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-browse-app-2.png" alt-text="Screenshot of the Azure app example titled Task List showing new tasks added."::: ## Sample application
app-service Tutorial Secure Ntier App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-secure-ntier-app.md
What you'll learn:
The tutorial uses two sample Node.js apps that are hosted on GitHub. If you don't already have a GitHub account, [create an account for free](https://github.com/). To complete this tutorial:
app-service Tutorial Troubleshoot Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-troubleshoot-monitor.md
In this tutorial, you learn how to:
You can follow the steps in this tutorial on macOS, Linux, or Windows. ## Prerequisites
application-gateway Application Gateway Configure Ssl Policy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-configure-ssl-policy-powershell.md
Learn how to configure TLS/SSL policy versions and cipher suites on Application Gateway. You can select from a list of predefined policies that contain different configurations of TLS policy versions and enabled cipher suites. You can also define a [custom TLS policy](#configure-a-custom-tls-policy) based on your requirements. > [!NOTE] > We recommend using TLS 1.2 as your minimum TLS protocol version for better security on your Application Gateway.
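As a minimal sketch (gateway and resource group names are placeholders, and the predefined policy name should be verified against the current list), applying a predefined policy that enforces a TLS 1.2 minimum might look like:

```azurecli
# Placeholder names; apply a predefined TLS policy with a TLS 1.2 minimum
az network application-gateway ssl-policy set --gateway-name MyAppGateway --resource-group MyResourceGroup \
    --policy-type Predefined --name AppGwSslPolicy20170401S
```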
application-gateway Application Gateway Create Probe Ps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-create-probe-ps.md
In this article, you add a custom probe to an existing application gateway with PowerShell. Custom probes are useful for applications that have a specific health check page or for applications that do not provide a successful response on the default web application. [!INCLUDE [azure-ps-prerequisites-include.md](../../includes/azure-ps-prerequisites-include.md)]
application-gateway Application Gateway End To End Ssl Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-end-to-end-ssl-powershell.md
This scenario will:
## Before you begin To configure end-to-end TLS with an application gateway, a certificate is required for the gateway and certificates are required for the backend servers. The gateway certificate is used to derive a symmetric key as per the TLS protocol specification. The symmetric key is then used to encrypt and decrypt the traffic sent to the gateway. The gateway certificate needs to be in Personal Information Exchange (PFX) format. This file format allows you to export the private key that is required by the application gateway to perform the encryption and decryption of traffic.
application-gateway Application Gateway Ilb Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-ilb-arm.md
This article walks you through the steps to configure a Standard v1 Application
## Before you begin 1. Install the latest version of the Azure PowerShell module by following the [install instructions](/powershell/azure/install-azure-powershell). 2. Create a virtual network and a subnet for Application Gateway. Make sure that no virtual machines or cloud deployments are using the subnet. Application Gateway must be by itself in a virtual network subnet.
application-gateway Application Gateway Troubleshooting 502 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-troubleshooting-502.md
Learn how to troubleshoot bad gateway (502) errors received when using Azure Application Gateway. ## Overview
application-gateway Certificates For Backend Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/certificates-for-backend-authentication.md
Previously updated : 12/27/2022 Last updated : 06/27/2024
From your TLS/SSL certificate, export the public key .cer file (not the private
1. To obtain a .cer file from the certificate, open **Manage user certificates**. Locate the certificate, typically in 'Certificates - Current User\Personal\Certificates', and right-click it. Click **All Tasks**, and then click **Export**. This opens the **Certificate Export Wizard**. To open Certificate Manager in current user scope using PowerShell, type *certmgr* in the console window.
-> [!NOTE]
-> If you can't find the certificate under Current User\Personal\Certificates, you may have accidentally opened "Certificates - Local Computer", rather than "Certificates - Current User").
+ > [!NOTE]
+ > If you can't find the certificate under Current User\Personal\Certificates, you may have accidentally opened "Certificates - Local Computer" rather than "Certificates - Current User".
![Screenshot shows the Certificate Manager with Certificates selected and a contextual menu with All tasks, then Export selected.](./media/certificates-for-backend-authentication/export.png)
-1. In the Wizard, click **Next**.
+2. In the Wizard, click **Next**.
![Export certificate](./media/certificates-for-backend-authentication/exportwizard.png)
-1. Select **No, do not export the private key**, and then click **Next**.
+3. Select **No, do not export the private key**, and then click **Next**.
![Do not export the private key](./media/certificates-for-backend-authentication/notprivatekey.png)
-1. On the **Export File Format** page, select **Base-64 encoded X.509 (.CER).**, and then click **Next**.
+4. On the **Export File Format** page, select **Base-64 encoded X.509 (.CER).**, and then click **Next**.
![Base-64 encoded](./media/certificates-for-backend-authentication/base64.png)
-1. For **File to Export**, **Browse** to the location to which you want to export the certificate. For **File name**, name the certificate file. Then, click **Next**.
+5. For **File to Export**, **Browse** to the location to which you want to export the certificate. For **File name**, name the certificate file. Then, click **Next**.
![Screenshot shows the Certificate Export Wizard where you specify a file to export.](./media/certificates-for-backend-authentication/browse.png)
-1. Click **Finish** to export the certificate.
+6. Click **Finish** to export the certificate.
![Screenshot shows the Certificate Export Wizard after you complete the file export.](./media/certificates-for-backend-authentication/finish-screen.png)
-1. Your certificate is successfully exported.
+7. Your certificate is successfully exported.
![Screenshot shows the Certificate Export Wizard with a success message.](./media/certificates-for-backend-authentication/success.png)
From your TLS/SSL certificate, export the public key .cer file (not the private
![Screenshot shows a certificate symbol.](./media/certificates-for-backend-authentication/exported.png)
-1. If you open the exported certificate using Notepad, you see something similar to this example. The section in blue contains the information that is uploaded to application gateway. If you open your certificate with Notepad and it doesn't look similar to this, typically this means you didn't export it using the Base-64 encoded X.509(.CER) format. Additionally, if you want to use a different text editor, understand that some editors can introduce unintended formatting in the background. This can create problems when uploaded the text from this certificate to Azure.
+8. If you open the exported certificate using Notepad, you see something similar to this example. The section in blue contains the information that is uploaded to application gateway. If you open your certificate with Notepad and it doesn't look similar to this, typically this means you didn't export it using the Base-64 encoded X.509(.CER) format. Additionally, if you want to use a different text editor, understand that some editors can introduce unintended formatting in the background. This can create problems when uploading the text from this certificate to Azure.
![Open with Notepad](./media/certificates-for-backend-authentication/format.png)
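Once exported, the .cer file is uploaded to the application gateway for backend authentication. A minimal Azure CLI sketch for a v2 SKU gateway (resource names assumed):

```azurecli
# Sketch: upload the exported Base-64 .cer as a trusted root certificate (v2 SKU; names assumed).
az network application-gateway root-cert create \
  --resource-group MyResourceGroup \
  --gateway-name MyAppGateway \
  --name BackendTrustedRootCert \
  --cert-file ./backend-cert.cer
```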
application-gateway Classic To Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/classic-to-resource-manager.md
Previously updated : 02/10/2022 Last updated : 06/27/2024
For more information on how to set up an Application Gateway resource after VNet
The word "classic" in classic networking service refers to networking resources managed by Azure Service Manager (ASM). Azure Service Manager (ASM) is the old control plane of Azure responsible for creating, managing, deleting VMs and performing other control plane operations.
+> [!NOTE]
+> To view all the classic resources in your subscription, open the **All Resources** blade and look for a **(Classic)** suffix after the resource name.
+ ### What is Azure Resource Manager? Azure Resource Manager is the latest control plane of Azure responsible for creating, managing, deleting VMs and performing other control plane operations.
application-gateway Configuration Frontend Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configuration-frontend-ip.md
Previously updated : 09/14/2023 Last updated : 06/27/2024
A frontend IP address is associated to a *listener*, which checks for incoming r
You can create private and public listeners with the same port number. However, be aware of any network security group (NSG) associated with the Application Gateway subnet. Depending on your NSG's configuration, you might need an allow-inbound rule with **Destination IP addresses** as your application gateway's public and private frontend IPs. When you use the same port, your application gateway changes the **Destination** of the inbound flow to the frontend IPs of your gateway.
+> [!NOTE]
+> Currently, the use of the same port number for public and private TCP/TLS protocol or IPv6 listeners is not supported.
+ **Inbound rule**: - **Source**: According to your requirement
application-gateway Configuration Listeners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configuration-listeners.md
# Application Gateway listener configuration A listener is a logical entity that checks for incoming connection requests by using the port, protocol, host, and IP address. When you configure the listener, you must enter values for these that match the corresponding values in the incoming request on the gateway.
application-gateway Configure Application Gateway With Private Frontend Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configure-application-gateway-with-private-frontend-ip.md
Configuring the gateway using a frontend private IP address is useful for intern
This article guides you through the steps to configure a Standard v2 Application Gateway with an ILB using the Azure portal. ## Sign in to Azure
application-gateway Create Multiple Sites Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/create-multiple-sites-portal.md
In this tutorial, you learn how to:
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. ## Prerequisites
application-gateway Create Ssl Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/create-ssl-portal.md
In this tutorial, you learn how to:
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. ## Prerequisites
application-gateway Create Url Route Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/create-url-route-portal.md
In this article, you learn how to:
:::image type="content" source="./media/application-gateway-create-url-route-portal/scenario.png" alt-text="Diagram of application gateway URL routing example." lightbox="./media/application-gateway-create-url-route-portal/scenario.png"::: If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
application-gateway How To Troubleshoot Application Gateway Session Affinity Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/how-to-troubleshoot-application-gateway-session-affinity-issues.md
Learn how to diagnose and resolve session affinity issues with Azure Application Gateway. ## Overview
application-gateway Ipv6 Application Gateway Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ipv6-application-gateway-arm-template.md
# Deploy an Azure Application Gateway with an IPv6 frontend - ARM template If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template opens in the Azure portal.
application-gateway Migrate V1 V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/migrate-v1-v2.md
This article primarily helps with the configuration migration. Client traffic mi
* If a public IP address is provided, ensure that it's in a succeeded state. If it's not provided and AppGWResourceGroupName is provided, ensure that a public IP resource named AppGWV2Name-IP doesn't exist in a resource group with the name AppGWResourceGroupName in the V1 subscription. * Ensure that no other operation is planned on the V1 gateway or any associated resources during migration. > [!IMPORTANT] >Run the `Set-AzContext -Subscription <V1 application gateway SubscriptionId>` cmdlet every time before running the migration script. This is necessary to set the active Azure context to the correct subscription, because the migration script might clean up the existing resource group if it doesn't exist in the current subscription context. This is not a mandatory step for version 1.0.11 & above of the migration script.
application-gateway Mutual Authentication Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/mutual-authentication-powershell.md
This article describes how to use PowerShell to configure mutual authenticat
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. This article requires the Azure PowerShell module version 1.0.0 or later. Run `Get-Module -ListAvailable Az` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Login-AzAccount` to create a connection with Azure.
application-gateway Overview V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/overview-v2.md
The following table displays a comparison between Basic and Standard_v2.
| Feature | Capabilities | Basic SKU (preview)| Standard SKU | | :: | : | :: | :: | | Reliability | SLA | 99.9 | 99.95 |
-| Functionality - basic | HTTP/HTTP2/HTTPS<br>Websocket<br>Public/Private IP<br>Cookie Affinity<br>Path-based affinity<br>Wildcard<br>Multisite<br>KeyVault<br>AKS (via AGIC)<br>Zone<br>Header rewrite | &#x2713;<br>&#x2713;<br>&#x2713;<br>&#x2713;<br>&#x2713;<br>&#x2713;<br>&#x2713;<br>&#x2713;<br>&#x2713;<br>&#x2713;<br> | &#x2713;<br>&#x2713;<br>&#x2713;<br>&#x2713;<br>&#x2713;<br>&#x2713;<br>&#x2713;<br>&#x2713;<br>&#x2713;<br>&#x2713;<br>&#x2713; |
+| Functionality - basic | HTTP/HTTP2/HTTPS<br>Websocket<br>Public/Private IP<br>Cookie Affinity<br>Path-based affinity<br>Wildcard<br>Multisite<br>KeyVault<br>AKS (via AGIC)<br>Zone<br>Header rewrite | &#x2713;<br>&#x2713;<br>&#x2713;<br>&#x2713;<br>&#x2713;<br>&#x2713;<br>&#x2713;<br>&#x2713;<br>&#x2713;<br>&#x2713;<br>&#x2713; | &#x2713;<br>&#x2713;<br>&#x2713;<br>&#x2713;<br>&#x2713;<br>&#x2713;<br>&#x2713;<br>&#x2713;<br>&#x2713;<br>&#x2713;<br>&#x2713; |
| Functionality - advanced | URL rewrite<br>mTLS<br>Private Link<br>Private-only<sup>1</sup><br>TCP/TLS Proxy | | &#x2713;<br>&#x2713;<br>&#x2713;<br>&#x2713;<br>&#x2713; | | Scale | Max. connections per second<br>Number of listeners<br>Number of backend pools<br>Number of backend servers per pool<br>Number of rules | 200<sup>1</sup><br>5<br>5<br>5<br>5 | 62500<sup>1</sup><br>100<br>100<br>1200<br>400 | | Capacity Unit | Connections per second per compute unit<br>Throughput<br>Persistent new connections | 10<br>2.22 Mbps<br>2500 | 50<br>2.22 Mbps<br>2500 |
application-gateway Quick Create Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/quick-create-bicep.md
In this quickstart, you use Bicep to create an Azure Application Gateway. Then you test the application gateway to make sure it works correctly. The Standard v2 SKU is used in this example. :::image type="content" source="./media/quick-create-portal/application-gateway-qs-resources.png" alt-text="Conceptual diagram of the quickstart setup." lightbox="./media/quick-create-portal/application-gateway-qs-resources.png":::
application-gateway Quick Create Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/quick-create-cli.md
The application gateway directs application web traffic to specific resources in
You can also complete this quickstart using [Azure PowerShell](quick-create-powershell.md) or the [Azure portal](quick-create-portal.md). [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
application-gateway Quick Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/quick-create-powershell.md
You can also complete this quickstart using [Azure CLI](quick-create-cli.md) or
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - [Azure PowerShell version 1.0.0 or later](/powershell/azure/install-azure-powershell) (if you run Azure PowerShell locally). ## Connect to Azure
application-gateway Quick Create Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/quick-create-template.md
In this quickstart, you use an Azure Resource Manager template (ARM template) to
:::image type="content" source="./media/quick-create-portal/application-gateway-qs-resources.png" alt-text="Conceptual diagram of the quickstart setup." lightbox="./media/quick-create-portal/application-gateway-qs-resources.png"::: You can also complete this quickstart using the [Azure portal](quick-create-portal.md), [Azure PowerShell](quick-create-powershell.md), or [Azure CLI](quick-create-cli.md).
application-gateway Redirect External Site Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/redirect-external-site-cli.md
In this article, you learn how to:
* Create a listener and redirection rule * Create an application gateway [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
application-gateway Redirect External Site Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/redirect-external-site-powershell.md
In this article, you learn how to:
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. If you choose to install and use the PowerShell locally, this tutorial requires the Azure PowerShell module version 1.0.0 or later. To find the version, run `Get-Module -ListAvailable Az` . If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Login-AzAccount` to create a connection with Azure.
application-gateway Redirect Http To Https Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/redirect-http-to-https-cli.md
In this article, you learn how to:
* Add a listener and redirection rule * Create a Virtual Machine Scale Set with the default backend pool [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
application-gateway Redirect Http To Https Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/redirect-http-to-https-portal.md
In this article, you learn how to:
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. This tutorial requires the Azure PowerShell module version 1.0.0 or later to create a certificate and install IIS. Run `Get-Module -ListAvailable Az` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). To run the commands in this tutorial, you also need to run `Login-AzAccount` to create a connection with Azure.
application-gateway Redirect Http To Https Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/redirect-http-to-https-powershell.md
In this article, you learn how to:
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. This tutorial requires the Azure PowerShell module version 1.0.0 or later. Run `Get-Module -ListAvailable Az` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). To run the commands in this tutorial, you also need to run `Login-AzAccount` to create a connection with Azure.
application-gateway Redirect Internal Site Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/redirect-internal-site-cli.md
In this article, you learn how to:
* Create a virtual machine scale set with the backend pool * Create a CNAME record in your domain [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
application-gateway Redirect Internal Site Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/redirect-internal-site-powershell.md
In this article, you learn how to:
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. If you choose to install and use the PowerShell locally, this article requires the Azure PowerShell module version 1.0.0 or later. To find the version, run `Get-Module -ListAvailable Az` . If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Login-AzAccount` to create a connection with Azure.
application-gateway Renew Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/renew-certificates.md
Upload your new PFX certificate, give it a name, type the password, and then sel
### Azure PowerShell To renew your certificate using Azure PowerShell, use the following script:
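The PowerShell script itself isn't included in this excerpt. As an illustrative Azure CLI analogue for swapping in a renewed certificate (names and password assumed):

```azurecli
# Sketch: replace an existing gateway TLS certificate with a renewed PFX (names assumed).
az network application-gateway ssl-cert update \
  --resource-group MyResourceGroup \
  --gateway-name MyAppGateway \
  --name MyGatewayCert \
  --cert-file ./renewed-cert.pfx \
  --cert-password "<certificate password>"
```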
application-gateway Create Vmss Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/scripts/create-vmss-cli.md
This script creates an application gateway that uses a virtual machine scale set
[!INCLUDE [sample-cli-install](../../../includes/sample-cli-install.md)] ## Sample script
application-gateway Create Vmss Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/scripts/create-vmss-powershell.md
This script creates an application gateway that uses a virtual machine scale set
[!INCLUDE [sample-powershell-install](../../../includes/sample-powershell-install-no-ssh-az.md)] ## Sample script
application-gateway Waf Custom Rules Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/scripts/waf-custom-rules-powershell.md
If you choose to install and use Azure PowerShell locally, this script requires
1. To find the version, run `Get-Module -ListAvailable Az`. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). 2. To create a connection with Azure, run `Connect-AzAccount`. ## Sample script
application-gateway Tutorial Autoscale Ps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-autoscale-ps.md
If you don't have an Azure subscription, create a [free account](https://azure.m
## Prerequisites This tutorial requires that you run an administrative Azure PowerShell session locally. You must have Azure PowerShell module version 1.0.0 or later installed. Run `Get-Module -ListAvailable Az` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). After you verify the PowerShell version, run `Connect-AzAccount` to create a connection with Azure.
application-gateway Tutorial Ingress Controller Add On Existing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-ingress-controller-add-on-existing.md
In this tutorial, you learn how to:
> * Deploy a sample application using AGIC for ingress on the AKS cluster > * Check that the application is reachable through application gateway [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
application-gateway Tutorial Ingress Controller Add On New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-ingress-controller-add-on-new.md
In this tutorial, you learn how to:
> * Deploy a sample application by using AGIC for ingress on the AKS cluster. > * Check that the application is reachable through application gateway. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
application-gateway Tutorial Manage Web Traffic Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-manage-web-traffic-cli.md
In this article, you learn how to:
If you prefer, you can complete this procedure using [Azure PowerShell](tutorial-manage-web-traffic-powershell.md). [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
application-gateway Tutorial Manage Web Traffic Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-manage-web-traffic-powershell.md
If you prefer, you can complete this procedure using [Azure CLI](tutorial-manage
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 1.0.0 or later. To find the version, run `Get-Module -ListAvailable Az`. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Login-AzAccount` to create a connection with Azure.
application-gateway Tutorial Multiple Sites Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-multiple-sites-cli.md
In this article, you learn how to:
If you prefer, you can complete this procedure using [Azure PowerShell](tutorial-multiple-sites-powershell.md). [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
application-gateway Tutorial Multiple Sites Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-multiple-sites-powershell.md
In this article, you learn how to:
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. If you choose to install and use the PowerShell locally, this article requires the Azure PowerShell module version 1.0.0 or later. To find the version, run `Get-Module -ListAvailable Az` . If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Login-AzAccount` to create a connection with Azure.
application-gateway Tutorial Ssl Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-ssl-cli.md
In this article, you learn how to:
If you prefer, you can complete this procedure using [Azure PowerShell](tutorial-ssl-powershell.md). [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
application-gateway Tutorial Ssl Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-ssl-powershell.md
In this article, you learn how to:
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. This article requires the Azure PowerShell module version 1.0.0 or later. Run `Get-Module -ListAvailable Az` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Login-AzAccount` to create a connection with Azure.
application-gateway Tutorial Url Redirect Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-url-redirect-cli.md
The following example shows site traffic coming from both ports 8080 and 8081 an
If you prefer, you can complete this tutorial using [Azure PowerShell](tutorial-url-redirect-powershell.md). [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
application-gateway Tutorial Url Redirect Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-url-redirect-powershell.md
If you prefer, you can complete this procedure using [Azure CLI](tutorial-url-re
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. If you choose to install and use the PowerShell locally, this procedure requires the Azure PowerShell module version 1.0.0 or later. To find the version, run `Get-Module -ListAvailable Az` . If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
application-gateway Tutorial Url Route Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-url-route-cli.md
In this article, you learn how to:
If you prefer, you can complete this procedure using [Azure PowerShell](tutorial-url-route-powershell.md) or the [Azure portal](create-url-route-portal.md). [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
application-gateway Tutorial Url Route Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-url-route-powershell.md
If you prefer, you can complete this procedure using [Azure CLI](tutorial-url-ro
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. If you choose to install and use the PowerShell locally, this article requires the Azure PowerShell module version 1.0.0 or later. To find the version, run `Get-Module -ListAvailable Az` . If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Login-AzAccount` to create a connection with Azure.
attestation Quickstart Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/quickstart-Bicep.md
Last updated 03/08/2022
[Microsoft Azure Attestation](overview.md) is a solution for attesting Trusted Execution Environments (TEEs). This quickstart focuses on the process of deploying a Bicep file to create a Microsoft Azure Attestation policy. ## Prerequisites
attestation Quickstart Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/quickstart-template.md
Last updated 01/30/2024
[Microsoft Azure Attestation](overview.md) is a solution for attesting Trusted Execution Environments (TEEs). This quickstart focuses on the process of deploying an Azure Resource Manager template (ARM template) to create a Microsoft Azure Attestation policy. If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
automation Automation Alert Metric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-alert-metric.md
Title: Monitor Azure Automation runbooks with metric alerts
description: This article describes how to set up a metric alert based on runbook completion status. Last updated 08/10/2020-+ # Monitor runbooks with metric alerts
automation Automation Dsc Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-dsc-diagnostics.md
Azure Monitor Logs provides greater operational visibility to your Automation St
- Correlate compliance status across Automation accounts. - Use custom views and search queries to visualize your runbook results, runbook job status, and other related key indicators or metrics. ## Prerequisites
automation Automation Runbook Execution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-runbook-execution.md
Title: Runbook execution in Azure Automation
description: This article provides an overview of the processing of runbooks in Azure Automation. Previously updated : 12/28/2022 Last updated : 06/27/2024
The following diagram shows the lifecycle of a runbook job for [PowerShell runbo
![Job Statuses - PowerShell Workflow](./media/automation-runbook-execution/job-statuses.png) ## Runbook execution environment
The following table describes the statuses that are possible for a job. You can
| Stopping |The system is stopping the job. | | Suspended |Applies to [graphical and PowerShell Workflow runbooks](automation-runbook-types.md) only. The job was suspended by the user, by the system, or by a command in the runbook. If a runbook doesn't have a checkpoint, it starts from the beginning. If it has a checkpoint, it can start again and resume from its last checkpoint. The system only suspends the runbook when an exception occurs. By default, the `ErrorActionPreference` variable is set to Continue, indicating that the job keeps running on an error. If the preference variable is set to Stop, the job suspends on an error. | | Suspending |Applies to [graphical and PowerShell Workflow runbooks](automation-runbook-types.md) only. The system is trying to suspend the job at the request of the user. The runbook must reach its next checkpoint before it can be suspended. If it has already passed its last checkpoint, it completes before it can be suspended. |
+| New | The job has been submitted recently but is not yet activated.|
## Activity logging
automation Automation Tutorial Installed Software https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-tutorial-installed-software.md
First you need to enable Change tracking and Inventory for this tutorial. If you
2. Choose the [Log Analytics](../azure-monitor/logs/log-query-overview.md) workspace. This workspace collects data that is generated by features such as Change Tracking and Inventory. The workspace provides a single location to review and analyze data from multiple sources. 3. Select the Automation account to use.
automation Manage Inventory Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/change-tracking/manage-inventory-vms.md
The following sections provide information about each property that can be confi
Inventory allows you to create and view machine groups in Azure Monitor logs. Machine groups are collections of machines defined by a query in Azure Monitor logs. To view your machine groups, select the **Machine groups** tab on the Inventory page.
automation Automation Region Dns Records https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/how-to/automation-region-dns-records.md
Title: Azure Datacenter DNS records used by Azure Automation | Microsoft Docs
description: This article provides the DNS records required by Azure Automation features when restricting communication to a specific Azure region hosting that Automation account. Previously updated : 06/29/2021 Last updated : 06/28/2024
To support [Private Link](../../private-link/private-link-overview.md) in Azure
| China East 2 |`https://<accountId>.webhook.sha2.azure-automation.cn`<br>`https://<accountId>.agentsvc.sha2.azure-automation.cn`<br>`https://<accountId>.jrds.sha2.azure-automation.cn` | | China North |`https://<accountId>.webhook.bjb.azure-automation.cn`<br>`https://<accountId>.agentsvc.bjb.azure-automation.cn`<br>`https://<accountId>.jrds.bjb.azure-automation.cn` | | China North 2 |`https://<accountId>.webhook.bjs2.azure-automation.cn`<br>`https://<accountId>.agentsvc.bjs2.azure-automation.cn`<br>`https://<accountId>.jrds.bjs2.azure-automation.cn` |
+| China North 3 |`https://<accountId>.webhook.cnn3.azure-automation.cn`<br>`https://<accountId>.agentsvc.cnn3.azure-automation.cn`<br>`https://<accountId>.jrds.cnn3.azure-automation.cn` |
| West Europe |`https://<accountId>.webhook.we.azure-automation.net`<br>`https://<accountId>.agentsvc.we.azure-automation.net`<br>`https://<accountId>.jrds.we.azure-automation.net` | | North Europe |`https://<accountId>.webhook.ne.azure-automation.net`<br>`https://<accountId>.agentsvc.ne.azure-automation.net`<br>`https://<accountId>.jrds.ne.azure-automation.net` | | France Central |`https://<accountId>.webhook.fc.azure-automation.net`<br>`https://<accountId>.agentsvc.fc.azure-automation.net`<br>`https://<accountId>.jrds.fc.azure-automation.net` |
automation Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/overview.md
These Azure services can work with Automation job and runbook resources using an
* [Azure Event Grid](../event-grid/handler-webhooks.md) * [Azure Power Automate](/connectors/azureautomation) ## Pricing for Azure Automation
automation Quickstart Create Automation Account Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/quickstart-create-automation-account-template.md
Azure Automation delivers a cloud-based automation and configuration service that supports consistent management across your Azure and non-Azure environments. This article shows you how to deploy an Azure Resource Manager template (ARM template) that creates an Automation account. Using an ARM template takes fewer steps compared to other deployment methods. The JSON template specifies default values for parameters that would likely be used as a standard configuration in your environment. You can store the template in an Azure storage account for shared access in your organization. For more information about working with templates, see [Deploy resources with ARM templates and the Azure CLI](../azure-resource-manager/templates/deploy-cli.md). The sample template does the following steps:
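As a minimal sketch of deploying such a template with Azure CLI (resource group, template file, and parameter names are assumed):

```azurecli
# Sketch: deploy an ARM template that creates an Automation account (file/parameter names assumed).
az deployment group create \
  --resource-group MyResourceGroup \
  --template-file automation-account-template.json \
  --parameters automationAccountName=MyAutomationAccount
```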
automation Dsc Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/quickstarts/dsc-configuration.md
> [!CAUTION] > This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> [!NOTE]
+> Before you enable Azure Automation DSC, we would like you to know that a newer version of DSC is now generally available, managed by a feature of Azure Policy named [Azure Machine Configuration](../../governance/machine-configuration/overview.md). The Azure Machine Configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Azure Machine Configuration also includes hybrid machine support through [Arc-enabled servers](../../azure-arc/servers/overview.md).
+ By enabling Azure Automation State Configuration, you can manage and monitor the configurations of your Windows and Linux servers using Desired State Configuration (DSC). Configurations that drift from a desired configuration can be identified or auto-corrected. This quickstart steps through enabling an Azure Linux VM and deploying a LAMP stack using Azure Automation State Configuration. ## Prerequisites
automation Credentials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/shared-resources/credentials.md
An Automation credential asset holds an object that contains security credential
>[!NOTE] >Secure assets in Azure Automation include credentials, certificates, connections, and encrypted variables. These assets are encrypted and stored in Azure Automation using a unique key that is generated for each Automation account. Azure Automation stores the key in the system-managed Key Vault. Before storing a secure asset, Automation loads the key from Key Vault and then uses it to encrypt the asset. ## PowerShell cmdlets used to access credentials
automation Onboarding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/troubleshoot/onboarding.md
After you remove the feature resources, you can unlink your workspace. It's impo
## <a name="mma-extension-failures"></a>Log Analytics for Windows extension failures An installation of the Log Analytics agent for Windows extension can fail for a variety of reasons. The following section describes feature deployment issues that can cause failures during deployment of the Log Analytics agent for Windows extension.
azure-app-configuration Enable Dynamic Configuration Dotnet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/enable-dynamic-configuration-dotnet-core.md
In this tutorial, you learn how to:
## Prerequisites Finish the quickstart [Create a .NET app with App Configuration](./quickstart-dotnet-core-app.md).
azure-app-configuration Enable Dynamic Configuration Java Spring Push Refresh https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/enable-dynamic-configuration-java-spring-push-refresh.md
In this tutorial, you learn how to:
- [Apache Maven](https://maven.apache.org/download.cgi) version 3.0 or above. - An existing Azure App Configuration Store. ## Set up Push Refresh
azure-app-configuration Howto App Configuration Event https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-app-configuration-event.md
In this article, you learn how to set up Azure App Configuration event subscript
- Azure subscription - [create one for free](https://azure.microsoft.com/free/). You can optionally use the Azure Cloud Shell. If you choose to install and use the CLI locally, this article requires that you're running the latest version of Azure CLI (2.0.70 or later). To find the version, run `az --version`. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
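As a hedged sketch of creating such an event subscription with Azure CLI (store name and webhook endpoint URL are assumed):

```azurecli
# Sketch: subscribe a webhook endpoint to events from an App Configuration store (names/URL assumed).
appConfigId=$(az appconfig show --name MyAppConfigStore --query id --output tsv)
az eventgrid event-subscription create \
  --name MyEventSubscription \
  --source-resource-id "$appConfigId" \
  --endpoint "https://contoso.example.com/api/updates"
```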
azure-app-configuration Howto Integrate Azure Managed Service Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-integrate-azure-managed-service-identity.md
To complete this tutorial, you must have:
:::zone-end ## Add a managed identity
azure-app-configuration Howto Leverage Json Content Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-leverage-json-content-type.md
In this tutorial, you'll learn how to:
> * Export JSON key-values to a JSON file. > * Consume JSON key-values in your applications. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
azure-app-configuration Quickstart Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-bicep.md
This quickstart describes how you can use Bicep to:
- Create key-values in an App Configuration store. - Read key-values in an App Configuration store. ## Prerequisites
azure-app-configuration Quickstart Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-resource-manager.md
This quickstart describes how to:
> [!TIP] > Feature flags and Key Vault references are special types of key-values. Check out the [Next steps](#next-steps) for examples of creating them using the ARM template. If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
azure-app-configuration Cli Create Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/scripts/cli-create-service.md
This sample script creates a new instance of Azure App Configuration using the Azure CLI in a new resource group. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
azure-app-configuration Cli Delete Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/scripts/cli-delete-service.md
This sample script deletes an instance of Azure App Configuration using the Azure CLI. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
azure-app-configuration Cli Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/scripts/cli-export.md
This sample script exports key-values from an Azure App Configuration store. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
azure-app-configuration Cli Import https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/scripts/cli-import.md
This sample script imports key-value settings to an Azure App Configuration store. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
azure-app-configuration Cli Work With Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/scripts/cli-work-with-keys.md
This sample script shows how to:
* Update the value of a newly created key * Delete the new key-value pair [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
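A hedged sketch of those operations with Azure CLI (store and key names assumed):

```azurecli
# Sketch: create, update, then delete a key-value in an App Configuration store.
az appconfig kv set --name MyAppConfigStore --key TestApp:Settings:Color --value red --yes
az appconfig kv set --name MyAppConfigStore --key TestApp:Settings:Color --value blue --yes
az appconfig kv delete --name MyAppConfigStore --key TestApp:Settings:Color --yes
```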
azure-app-configuration Powershell Create Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/scripts/powershell-create-service.md
This sample script creates a new instance of Azure App Configuration in a new resource group using PowerShell. To execute the sample scripts, you need a functional setup of [Azure PowerShell](/powershell/azure/).
azure-app-configuration Powershell Delete Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/scripts/powershell-delete-service.md
This sample script deletes an instance of Azure App Configuration using PowerShell. To execute this sample script, you need a functional setup of [Azure PowerShell](/powershell/azure/).
azure-app-configuration Use Key Vault References Dotnet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/use-key-vault-references-dotnet-core.md
In this tutorial, you learn how to:
Before you start this tutorial, install the [.NET SDK 6.0 or later](https://dotnet.microsoft.com/download). ## Create a vault
azure-arc Extensions Release https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/extensions-release.md
For more information, see [Tutorial: Deploy applications using GitOps with Flux
The most recent version of the Flux v2 extension and the two previous versions (N-2) are supported. We generally recommend that you use the most recent version of the extension. > [!IMPORTANT]
-> Eventually, a major version update (v2.x.x) for the `microsoft.flux` extension will be released. When this happens, clusters won't be auto-upgraded to this version, since [auto-upgrade is only supported for minor version releases](extensions.md#upgrade-extension-instance). If you're still using an older API version when the next major version is released, you'll need to update your manifests to the latest API versions, perform any necessary testing, then upgrade your extension manually. For more information about the new API versions (breaking changes) and how to update your manifests, see the [Flux v2 release notes](https://github.com/fluxcd/flux2/releases/tag/v2.0.0).
+> The [Flux v2.3.0 release](https://fluxcd.io/blog/2024/05/flux-v2.3.0/) includes API changes to the HelmRelease and HelmChart APIs, with deprecated fields removed. An upcoming minor version update of Microsoft's Flux extension will include these changes, consistent with the upstream OSS Flux project.
+>
+> The [HelmRelease](https://fluxcd.io/flux/components/helm/helmreleases/) kind will be promoted from `v2beta1` to `v2` (GA). The `v2` API is backwards compatible with `v2beta1`, with the exception of these deprecated fields, which will be removed:
+>
+> - `.spec.chart.spec.valuesFile`: replaced by `.spec.chart.spec.valuesFiles`
+> - `.spec.postRenderers.kustomize.patchesJson6902`: replaced by `.spec.postRenderers.kustomize.patches`
+> - `.spec.postRenderers.kustomize.patchesStrategicMerge`: replaced by `.spec.postRenderers.kustomize.patches`
+> - `.status.lastAppliedRevision`: replaced by `.status.history.chartVersion`
+>
+> The [HelmChart](https://fluxcd.io/flux/components/source/helmcharts/) kind will be promoted from `v1beta2` to `v1` (GA). The `v1` API is backwards compatible with `v1beta2`, with the exception of the `.spec.valuesFile` field, which will be replaced by `.spec.valuesFiles`.
+>
+> To avoid issues due to breaking changes, we recommend updating your deployments by July 22, 2024, so that they stop using the fields that will be removed and use the replacement fields instead. These new fields are already available in the current version of the APIs.
> [!NOTE] > When a new version of the `microsoft.flux` extension is released, it may take several days for the new version to become available in all regions.
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/overview.md
Once your Kubernetes clusters are connected to Azure, at scale you can:
- [Open Service Mesh](tutorial-arc-enabled-open-service-mesh.md) * Deploy and manage Kubernetes applications targeted for Azure Arc-Enabled Kubernetes clusters from Azure Marketplace.
- [!INCLUDE [azure-lighthouse-supported-service](../../../includes/azure-lighthouse-supported-service.md)]
+ [!INCLUDE [azure-lighthouse-supported-service](~/reusable-content/ce-skilling/azure/includes/azure-lighthouse-supported-service.md)]
## Next steps
azure-arc Tutorial Gitops Ci Cd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-gitops-ci-cd.md
In this tutorial, you'll set up a CI/CD solution using GitOps with Azure Arc-ena
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. ## Before you begin
azure-arc Tutorial Gitops Flux2 Ci Cd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-gitops-flux2-ci-cd.md
In this tutorial, you'll set up a CI/CD solution using GitOps with Flux v2 and A
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. ## Prerequisites
azure-arc Arc Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/arc-gateway.md
+
+ Title: How to simplify network configuration requirements through Azure Arc gateway (Limited preview)
+description: Learn how to simplify network configuration requirements through Azure Arc gateway (Limited preview).
Last updated : 06/26/2024+++
+# Simplify network configuration requirements through Azure Arc gateway (Limited preview)
+
+> [!NOTE]
+> **This is a Limited Public Preview, so customer subscriptions must be allowed by Microsoft to use the feature. To participate, complete the [Azure Arc gateway Limited Public Preview Sign-up form](https://forms.office.com/r/bfTkU2i0Qw).**
+>
+
+If you use enterprise firewalls or proxies to manage outbound traffic, the Azure Arc gateway lets you onboard infrastructure to Azure Arc using only seven (7) endpoints. With Azure Arc gateway, you can:
+
+- Connect to Azure Arc by opening public network access to only seven Fully Qualified Domain Names (FQDNs).
+- View and audit all traffic an Azure Connected Machine agent sends to Azure via the Arc gateway.
+
+This article explains how to set up and use an Arc gateway Resource.
+
+> [!IMPORTANT]
+> The Arc gateway feature for [Azure Arc-enabled servers](overview.md) is currently in Limited preview in all regions where Azure Arc-enabled servers is present. See the Supplemental Terms of Use for Microsoft Azure Limited previews for legal terms that apply to Azure features that are in beta, limited preview, or otherwise not yet released into general availability.
+>
+
+## Supported scenarios
+
+Azure Arc gateway supports the following scenarios:
+
+- Azure Monitor (Azure Monitor Agent + Dependency Agent) <sup>1</sup>
+- Microsoft Defender for Cloud <sup>2</sup>
+- Windows Admin Center
+- SSH
+- Microsoft Sentinel
+- Azure Update Management
+- Azure Extension for SQL Server
+
+<sup>1</sup> Traffic to Log Analytics workspaces isn't covered by Arc gateway, so the FQDNs for your Log Analytics workspaces must still be allowed in your firewalls or enterprise proxies.
+
+<sup>2</sup> To send Microsoft Defender traffic via Arc gateway, you must configure the extension's proxy settings.
+
+## How it works
+
+Azure Arc gateway consists of two main components:
+
+**The Arc gateway resource:** An Azure resource that serves as a common front-end for Azure traffic. This gateway resource is served on a specific domain. Once the Arc gateway resource is created, the domain is returned to you in the success response.
+
+**The Arc Proxy:** A new component added to Arc agentry. This component runs as a service called "Azure Arc Proxy" and acts as a forward proxy used by the Azure Arc agents and extensions. No configuration is required on your part for the gateway router. This router is part of Arc core agentry and runs within the context of an Arc-enabled resource.
+
+When the gateway is in place, traffic flows via the following hops: **Arc agentry → Arc Proxy → Enterprise proxy → Arc gateway → Target service**
++
+## Restrictions and limitations
+
+The Arc gateway object has limits you should consider when planning your setup. These limitations apply only to the Limited public preview and might not apply once the Arc gateway feature is generally available.
+
+- TLS Terminating Proxies aren't supported.
+- ExpressRoute/Site-to-Site VPN used with the Arc gateway (Limited preview) isn't supported.
+- The Arc gateway (Limited preview) is only supported for Azure Arc-enabled servers.
+- There's a limit of five Arc gateway (Limited preview) resources per Azure subscription.
+
+## How to use the Arc gateway (Limited preview)
+
+After completing the [Azure Arc gateway Limited Public Preview Sign-up form](https://forms.office.com/r/bfTkU2i0Qw), your subscription will be allowed to use the feature within 1 business day. You'll receive an email when the Arc gateway (Limited preview) feature has been allowed on the subscription you submitted.
+
+There are six main steps to use the feature:
+
+1. Download the az connectedmachine.whl file and use it to install the az connectedmachine extension.
+1. Create an Arc gateway resource.
+1. Ensure the required URLs are allowed in your environment.
+1. Associate new or existing Azure Arc resources with your Arc gateway resource.
+1. Verify that the setup succeeded.
+1. Ensure other scenarios use the Arc gateway (Linux only).
+
+### Step 1: Download the az connectedmachine.whl file
+
+1. Select the link to [download the az connectedmachine.whl file](https://aka.ms/ArcGatewayWhl).
+
+ This file contains the az connectedmachine commands required to create and manage your gateway resource.
+
+1. Install the [Azure CLI](/cli/azure/install-azure-cli) (if you haven't already).
+
+1. Execute the following command to add the connectedmachine extension:
+
+ `az extension add --allow-preview true --source [whl file path]`
+
+### Step 2: Create an Arc gateway resource
+
+On a machine with access to Azure, run the following commands to create your Arc gateway resource:
+
+```azurecli
+az login --use-device-code
+az account set --subscription [subscription name or id]
+az connectedmachine gateway create --name [Your gateway's Name] --resource-group [Your Resource Group] --location [Location] --gateway-type public --allowed-features * --subscription [subscription name or id]
+```
+The gateway creation process takes 9-10 minutes to complete.
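+
+To confirm that creation succeeded, you can list the gateway resources in your subscription, as in this sketch (note the "id" field, which you need in later steps):
+
+```azurecli
+# Sketch: list gateway resources and note the "id" field (the full ARM resource ID).
+az connectedmachine gateway list --output table
+```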
+
+### Step 3: Ensure the required URLs are allowed in your environment
+
+When the resource is created, the success response includes the Arc gateway URL. Ensure your Arc gateway URL and all URLs in the following table are allowed in the environment where your Arc resources live:
+
+|URL |Purpose |
+|||
+|[Your URL Prefix].gw.arc.azure.com |Your gateway URL (This URL can be obtained by running `az connectedmachine gateway list` after you create your gateway Resource) |
+|management.azure.com |Azure Resource Manager Endpoint, required for Azure Resource Manager control channel |
+|login.microsoftonline.com |Microsoft Entra ID's endpoint, for acquiring Identity access tokens |
+|gbl.his.arc.azure.com |The cloud service endpoint for communicating with Azure Arc agents |
+|\<region\>.his.arc.azure.com |Used for Arc's core control channel |
+|packages.microsoft.com |Required to acquire Linux based Arc agentry payload, only needed to connect Linux servers to Arc |
+|download.microsoft.com |Used to download the Windows installation package |
+
+### Step 4: Associate new or existing Azure Arc resources with your gateway resource
+
+**To onboard a new server with Arc gateway**, generate an installation script, then edit the script to specify your gateway resource:
+
+1. Generate the installation script.
+ Follow the instructions at [Quickstart: Connect hybrid machines with Azure Arc-enabled servers](learn/quick-enable-hybrid-vm.md) to create a script that automates the downloading and installation of the Azure Connected Machine agent and establishes the connection with Azure Arc.
+
+1. Edit the installation script.
+ Your gateway resource must be specified in the installation script. To accomplish this, add a new parameter called `--gateway-id` to the connect command.
+
+ **For Linux servers:**
+
+ 1. Obtain your gateway's Resource ID by running the `az connectedmachine gateway list` command. Note the "id" parameter in the output (that is, the full ARM resource ID).
+ 1. In the installation script, add the "id" found in the previous step as the following parameter: `--gateway-id "[Your-gateway's-Resource-ID]"` (see the capture sketch after the example script below).
+
+ Linux server onboarding script example:
+
+ This script template includes parameters for you to specify your enterprise proxy server.
+
+ ```
+ export subscriptionId="SubscriptionId";
+ export resourceGroup="ResourceGroup";
+ export tenantId="TenantID";
+ export location="Region";
+ export authType="AuthType";
+ export cloud="AzureCloud";
+ export gatewayID="gatewayResourceID";
+ export correlationId="CorrelationId"; # referenced by the commands below; placeholder value assumed
+
+ # Download the installation package
+ output=$(wget https://aka.ms/azcmagent -e use_proxy=yes -e https_proxy="[Your Proxy URL]" -O /tmp/install_linux_azcmagent.sh 2>&1);
+ if [ $? != 0 ]; then wget -qO- -e use_proxy=yes -e https_proxy="[Your Proxy URL]" --method=PUT --body-data="{\"subscriptionId\":\"$subscriptionId\",\"resourceGroup\":\"$resourceGroup\",\"tenantId\":\"$tenantId\",\"location\":\"$location\",\"correlationId\":\"$correlationId\",\"authType\":\"$authType\",\"operation\":\"onboarding\",\"messageType\":\"DownloadScriptFailed\",\"message\":\"$output\"}" "https://gbl.his.arc.azure.com/log" &> /dev/null || true; fi;
+ echo "$output";
+
+ # Install the hybrid agent
+ bash /tmp/install_linux_azcmagent.sh --proxy "[Your Proxy URL]";
+
+ # Run connect command
+ sudo azcmagent connect --resource-group "$resourceGroup" --tenant-id "$tenantId" --location "$location" --subscription-id "$subscriptionId" --cloud "$cloud" --correlation-id "$correlationId" --gateway-id "$gatewayID";
+ ```
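+
+ Rather than pasting the Resource ID by hand, you can capture it with a JMESPath query, as in this sketch (assumes a single gateway resource in the subscription):
+
+ ```azurecli
+ # Sketch: capture the first gateway's resource ID for use as --gateway-id.
+ export gatewayID=$(az connectedmachine gateway list --query "[0].id" --output tsv)
+ ```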
+
+ **For Windows servers:**
+
+ 1. Obtain your gateway's Resource ID by running the `az connectedmachine gateway list` command. This command outputs information about all the gateway resources in your subscription. Note the ID parameter in the output (that is, the full ARM resource ID).
+ 1. In the **try section** of the installation script, add the ID found in the previous step as the following parameter: `--gateway-id "[Your-gateway's-Resource-ID]"`
+ 1. In the **catch section** of the installation script, add the ID found in the previous step as the following parameter: `gateway-id="[Your-gateway's-Resource-ID]"`
+
+ Windows server onboarding script example:
+
+ This script template includes parameters for you to specify your enterprise proxy server.
+
+    ```powershell
+    $global:scriptPath = $myinvocation.mycommand.definition
+
+    function Restart-AsAdmin {
+        $pwshCommand = "powershell"
+        if ($PSVersionTable.PSVersion.Major -ge 6) {
+            $pwshCommand = "pwsh"
+        }
+
+        try {
+            Write-Host "This script requires administrator permissions to install the Azure Connected Machine Agent. Attempting to restart script with elevated permissions..."
+            $arguments = "-NoExit -Command `"& '$scriptPath'`""
+            Start-Process $pwshCommand -Verb runAs -ArgumentList $arguments
+            exit 0
+        } catch {
+            throw "Failed to elevate permissions. Please run this script as Administrator."
+        }
+    }
+
+    try {
+        if (-not ([Security.Principal.WindowsPrincipal] [Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole]::Administrator)) {
+            if ([System.Environment]::UserInteractive) {
+                Restart-AsAdmin
+            } else {
+                throw "This script requires administrator permissions to install the Azure Connected Machine Agent. Please run this script as Administrator."
+            }
+        }
+
+        $env:SUBSCRIPTION_ID = "SubscriptionId";
+        $env:RESOURCE_GROUP = "ResourceGroup";
+        $env:TENANT_ID = "TenantID";
+        $env:LOCATION = "Region";
+        $env:AUTH_TYPE = "AuthType";
+        $env:CLOUD = "AzureCloud";
+        $env:GATEWAY_ID = "gatewayResourceID";
+
+        [Net.ServicePointManager]::SecurityProtocol = [Net.ServicePointManager]::SecurityProtocol -bor 3072;
+
+        # Download the installation package
+        Invoke-WebRequest -UseBasicParsing -Uri "https://aka.ms/azcmagent-windows" -TimeoutSec 30 -OutFile "$env:TEMP\install_windows_azcmagent.ps1" -proxy "[Your Proxy URL]";
+
+        # Install the hybrid agent
+        & "$env:TEMP\install_windows_azcmagent.ps1" -proxy "[Your Proxy URL]";
+        if ($LASTEXITCODE -ne 0) { exit 1; }
+
+        # Run connect command
+        & "$env:ProgramW6432\AzureConnectedMachineAgent\azcmagent.exe" connect --resource-group "$env:RESOURCE_GROUP" --tenant-id "$env:TENANT_ID" --location "$env:LOCATION" --subscription-id "$env:SUBSCRIPTION_ID" --cloud "$env:CLOUD" --gateway-id "$env:GATEWAY_ID";
+    }
+    catch {
+        $logBody = @{subscriptionId="$env:SUBSCRIPTION_ID";resourceGroup="$env:RESOURCE_GROUP";tenantId="$env:TENANT_ID";location="$env:LOCATION";authType="$env:AUTH_TYPE";gatewayId="$env:GATEWAY_ID";operation="onboarding";messageType=$_.FullyQualifiedErrorId;message="$_";};
+        Invoke-WebRequest -UseBasicParsing -Uri "https://gbl.his.arc.azure.com/log" -Method "PUT" -Body ($logBody | ConvertTo-Json) -proxy "[Your Proxy URL]" | out-null;
+        Write-Host -ForegroundColor red $_.Exception;
+    }
+    ```
+
+1. Run the installation script to onboard your servers to Azure Arc.
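+
+    For example, on a Linux server (a sketch; the file name is whatever you saved the edited script as):
+
+    ```bash
+    # Run the edited onboarding script with root privileges
+    sudo bash ./OnboardingScript.sh
+    ```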
+
+To configure an existing machine to use Arc gateway, follow these steps:
+
+> [!NOTE]
+> The existing machine must be using the Arc-enabled servers connected machine agent version 1.43 or higher to use the Arc gateway Limited Public preview.
+
+1. Associate your existing machine with your Arc gateway resource:
+
+ ```azurecli
+    az connectedmachine setting update --resource-group [res-group] --subscription [subscription name] --base-provider Microsoft.HybridCompute --base-resource-type machines --base-resource-name [Arc-server's resource name] --settings-resource-name default --gateway-resource-id [Full ARM resource ID]
+ ```
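+
+    For example, with hypothetical names (every value below is illustrative; substitute your own subscription, resource group, server, and gateway names):
+
+    ```azurecli
+    az connectedmachine setting update \
+      --resource-group myResourceGroup \
+      --subscription mySubscription \
+      --base-provider Microsoft.HybridCompute \
+      --base-resource-type machines \
+      --base-resource-name myArcServer \
+      --settings-resource-name default \
+      --gateway-resource-id "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup/providers/Microsoft.HybridCompute/gateways/myArcGateway"
+    ```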
+
+1. Update the machine to use the Arc gateway resource.
+    Run the following command on the Arc-enabled server to set it to use Arc gateway (a verification sketch follows these steps):
+
+    ```bash
+ azcmagent config set connection.type gateway
+ ```
+1. Await reconciliation.
+
+ Once your machines have been updated to use the Arc gateway, some Azure Arc endpoints that were previously allowed in your enterprise proxy or firewalls won't be needed. However, there's a transition period, so allow **1 hour** before removing unneeded endpoints from your firewall/enterprise proxy.
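+
+To confirm that the agent picked up the new connection type, a quick hedged check (this assumes the `azcmagent config get` subcommand is available in your agent version):
+
+```bash
+# Prints the current connection type; expect "gateway" after the update
+azcmagent config get connection.type
+```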
+
+### Step 5: Verify that the setup succeeded
+
+On the onboarded server, run the following command: `azcmagent show`
+
+The result should indicate the following values:
+
+- **Agent Status** should show as **Connected**.
+- **Using HTTPS Proxy** should show as **http://localhost:40343**.
+- **Upstream Proxy** should show as your enterprise proxy (if you set one).
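+
+A trimmed sketch of what to look for (field names come from the list above; the actual output contains more detail):
+
+```bash
+azcmagent show
+# Expect, among other fields:
+#   Agent Status      : Connected
+#   Using HTTPS Proxy : http://localhost:40343
+#   Upstream Proxy    : <your enterprise proxy, if configured>
+```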
+
+Additionally, to verify successful setup, you can run the following command: `azcmagent check`
+The result should indicate that the `connection.type` is set to gateway, and the **Reachable** column should indicate **true** for all URLs.
+
+### Step 6: Ensure additional scenarios use the Arc gateway (Linux only)
+
+On Linux, to use Azure Monitor or Microsoft Defender for Endpoint, you must run additional commands so that these services work with the Azure Arc gateway (Limited preview).
+
+For **Azure Monitor**, explicit proxy settings should be provided when deploying Azure Monitor Agent. From Azure Cloud Shell, execute the following commands:
+
+```azurepowershell
+$settings = @{"proxy" = @{mode = "application"; address = "http://127.0.0.1:40343"; auth = "false"}}
+
+New-AzConnectedMachineExtension -Name AzureMonitorLinuxAgent -ExtensionType AzureMonitorLinuxAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -MachineName <arc-server-name> -Location <arc-server-location> -Setting $settings
+```
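+
+If you prefer the Azure CLI, an equivalent sketch (a hedged translation of the settings above, using the `az connectedmachine extension` command group):
+
+```azurecli
+az connectedmachine extension create \
+  --name AzureMonitorLinuxAgent \
+  --machine-name <arc-server-name> \
+  --resource-group <resource-group-name> \
+  --location <arc-server-location> \
+  --publisher Microsoft.Azure.Monitor \
+  --type AzureMonitorLinuxAgent \
+  --settings '{"proxy":{"mode":"application","address":"http://127.0.0.1:40343","auth":"false"}}'
+```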
+
+If you're deploying Azure Monitor through the Azure portal, be sure to select the **Use Proxy** setting and set the **Proxy Address** to `http://127.0.0.1:40343`.
+
+For **Microsoft Defender for Endpoint**, run the following command:
+
+`mdatp config proxy set --value http://127.0.0.1:40343`
+
+## Cleanup instructions
+
+To clean up your gateway, detach the gateway resource from the applicable server(s); the resource can then be deleted safely:
+
+1. Set the connection type of the Azure Arc-enabled server to "direct" instead of "gateway":
+
+ `azcmagent config set connection.type direct`
+
+1. Run the following command to delete the resource:
+
+    `az connectedmachine gateway delete --resource-group [resource group name] --gateway-name [gateway resource name]`
+
+    This operation can take a couple of minutes.
+
+## Troubleshooting
+
+You can audit your Arc gateway's traffic by viewing the gateway router's logs.
+
+To view gateway router logs on **Windows**:
+1. Run `azcmagent logs` in PowerShell.
+1. In the resulting .zip file, the logs are located in the `C:\ProgramData\Microsoft\ArcGatewayRouter` folder.
+
+To view gateway router logs on **Linux**:
+1. Run `sudo azcmagent logs`.
+1. In the resulting log file, the logs are located in the `/usr/local/arcrtr/logs/` folder.
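+
+For example, to list the collected router log files on Linux (path from the step above):
+
+```bash
+sudo ls -l /usr/local/arcrtr/logs/
+```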
+
+## Known issues
+
+It's not yet possible to use the Azure CLI to disassociate a gateway resource from an Arc-enabled server. To make an Arc-enabled server stop using an Arc gateway, use the `azcmagent config set connection.type direct` command. This command configures the Arc-enabled resource to use the direct route instead of the Arc gateway.
+
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/overview.md
To connect hybrid machines to Azure, you install the [Azure Connected Machine ag
You can install the Connected Machine agent manually, or on multiple machines at scale, using the [deployment method](deployment-options.md) that works best for your scenario. > [!NOTE] > For additional guidance regarding the different services Azure Arc offers, see [Choosing the right Azure Arc service for machines](../choose-service.md).
azure-arc Prepare Extended Security Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/prepare-extended-security-updates.md
Other Azure services through Azure Arc-enabled servers are available as well, wi
* [Microsoft Defender for Cloud](../../defender-for-cloud/defender-for-cloud-introduction.md) - As part of the cloud security posture management (CSPM) pillar, it provides server protections through [Microsoft Defender for Servers](../../defender-for-cloud/plan-defender-for-servers.md) to help protect you from various cyber threats and vulnerabilities. * [Microsoft Sentinel](scenario-onboard-azure-sentinel.md) - Collect security-related events and correlate them with other data sources.-
- >[!NOTE]
- >Activation of ESU is planned for the third quarter of 2023. Using Azure services such as Azure Update Manager and Azure Policy to support managing ESU-eligible Windows Server 2012/2012 R2 machines are also planned for the third quarter.
-
+
## Prepare delivery of ESUs Plan and prepare to onboard your machines to Azure Arc-enabled servers through the installation of the [Azure Connected Machine agent](agent-overview.md) (version 1.34 or higher) to establish a connection to Azure. Windows Server 2012 Extended Security Updates supports Windows Server 2012 and R2 Standard and Datacenter editions. Windows Server 2012 Storage is not supported.
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/overview.md
Title: Overview of the Azure Connected System Center Virtual Machine Manager description: This article provides a detailed overview of the Azure Arc-enabled System Center Virtual Machine Manager. Previously updated : 04/12/2024 Last updated : 06/27/2024 ms.
Arc-enabled System Center VMM allows you to:
- Empower developers and application teams to self-serve VM operations on demand using [Azure role-based access control (RBAC)](/azure/role-based-access-control/overview). - Browse your VMM resources (VMs, templates, VM networks, and storage) in Azure, providing you with a single pane view for your infrastructure across both environments. - Discover and onboard existing SCVMM managed VMs to Azure.-- Install the Arc-connected machine agents at scale on SCVMM VMs to [govern, protect, configure, and monitor them](../servers/overview.md#supported-cloud-operations).
+- Install the Azure connected machine agent at scale on SCVMM VMs to [govern, protect, configure, and monitor them](../servers/overview.md#supported-cloud-operations).
> [!NOTE] > For more information regarding the different services Azure Arc offers, see [Choosing the right Azure Arc service for machines](../choose-service.md).
The following image shows the architecture for the Arc-enabled SCVMM:
- Azure Arc-enabled servers interact on the guest operating system level, with no awareness of the underlying infrastructure fabric and the virtualization platform that they're running on. Since Arc-enabled servers also support bare-metal machines, there might, in fact, not even be a host hypervisor in some cases. - Azure Arc-enabled SCVMM is a superset of Arc-enabled servers that extends management capabilities beyond the guest operating system to the VM itself. This provides lifecycle management and CRUD (Create, Read, Update, and Delete) operations on an SCVMM VM. These lifecycle management capabilities are exposed in the Azure portal and look and feel just like a regular Azure VM. Azure Arc-enabled SCVMM also provides guest operating system management, in fact, it uses the same components as Azure Arc-enabled servers.
-You have the flexibility to start with either option, or incorporate the other one later without any disruption. With both options, you'll enjoy the same consistent experience.
+You have the flexibility to start with either option, and incorporate the other one later without any disruption. With both options, you'll enjoy the same consistent experience.
### Supported scenarios
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/overview.md
Title: What is Azure Arc-enabled VMware vSphere? description: Azure Arc-enabled VMware vSphere extends Azure governance and management capabilities to VMware vSphere infrastructure and delivers a consistent management experience across both platforms. Previously updated : 04/12/2024 Last updated : 06/27/2024
Arc-enabled VMware vSphere allows you to:
- Empower developers and application teams to self-serve VM operations on-demand using [Azure role-based access control](../../role-based-access-control/overview.md) (RBAC). -- Install the Arc-connected machine agent at scale on VMware VMs to [govern, protect, configure, and monitor](../servers/overview.md#supported-cloud-operations) them.
+- Install the Azure connected machine agent at scale on VMware VMs to [govern, protect, configure, and monitor](../servers/overview.md#supported-cloud-operations) them.
- Browse your VMware vSphere resources (VMs, templates, networks, and storage) in Azure, providing you with a single pane view for your infrastructure across both environments.
azure-cache-for-redis Cache Configure Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-configure-role-based-access-control.md
Managing access to your Azure Cache for Redis instance is critical to ensure tha
Azure Cache for Redis now integrates this ACL functionality with Microsoft Entra ID to allow you to configure your Data Access Policies for your application's service principal and managed identity.
-Azure Cache for Redis offers three built-in access policies: _Owner_, _Contributor_, and _Reader_. If the built-in access policies don't satisfy your data protection and isolation requirements, you can create and use your own custom data access policy as described in [Configure custom data access policy](#configure-a-custom-data-access-policy-for-your-application).
+Azure Cache for Redis offers three built-in access policies: _Data Owner_, _Data Contributor_, and _Data Reader_. If the built-in access policies don't satisfy your data protection and isolation requirements, you can create and use your own custom data access policy as described in [Configure custom data access policy](#configure-a-custom-data-access-policy-for-your-application).
## Scope of availability
azure-cache-for-redis Cache Event Grid Quickstart Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-event-grid-quickstart-cli.md
Azure Event Grid is an eventing service for the cloud. In this quickstart, you'l
Typically, you send events to an endpoint that processes the event data and takes actions. However, to simplify this quickstart, you'll send events to a web app that will collect and display the messages. When you complete the steps described in this quickstart, you'll see that the event data has been sent to the web app. If you choose to install and use the CLI locally, this quickstart requires that you're running the latest version of Azure CLI (2.0.70 or later). To find the version, run `az --version`. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
azure-cache-for-redis Cache Event Grid Quickstart Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-event-grid-quickstart-portal.md
Azure Event Grid is an eventing service for the cloud. In this quickstart, you'll use the Azure portal to create an Azure Cache for Redis instance, subscribe to events for that instance, trigger an event, and view the results. Typically, you send events to an endpoint that processes the event data and takes actions. However, to simplify this quickstart, you'll send events to a web app that will collect and display the messages. When you're finished, you'll see that the event data has been sent to the web app.
azure-cache-for-redis Cache Nodejs Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-nodejs-get-started.md
The latest builds of [node_redis](https://github.com/mranney/node_redis) provide
// Connection configuration const cacheConnection = redis.createClient({
- // rediss for TLS
- url: `rediss://${cacheHostName}:6380`,
+ // redis for TLS
+ url: `redis://${cacheHostName}:6380`,
password: cachePassword });
azure-cache-for-redis Cache Redis Cache Arm Provision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-redis-cache-arm-provision.md
Last updated 04/10/2024
Learn how to create an Azure Resource Manager template (ARM template) that deploys an Azure Cache for Redis. The cache can be used with an existing storage account to keep diagnostic data. You also learn how to define which resources are deployed and how to define parameters that are specified when the deployment is executed. You can use this template for your own deployments, or customize it to meet your requirements. Currently, diagnostic settings are shared for all caches in the same region for a subscription. Updating one cache in the region affects all other caches in the region. If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
azure-cache-for-redis Cache Redis Cache Bicep Provision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-redis-cache-bicep-provision.md
Last updated 04/10/2024
Learn how to use Bicep to deploy a cache using Azure Cache for Redis. After you deploy the cache, use it with an existing storage account to keep diagnostic data. Learn how to define which resources are deployed and how to define parameters that are specified when the deployment is executed. You can use this Bicep file for your own deployments, or customize it to meet your requirements. ## Prerequisites
azure-cache-for-redis Cache Web App Bicep With Redis Cache Provision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-web-app-bicep-with-redis-cache-provision.md
In this article, you use Bicep to deploy an Azure Web App that uses Azure Cache for Redis, as well as an App Service plan. You can use this Bicep file for your own deployments. The Bicep file provides unique names for the Azure Web App, the App Service plan, and the Azure Cache for Redis. If you'd like, you can customize the Bicep file after you save it to your local device to meet your requirements.
azure-cache-for-redis Cache Web App Cache Aside Leaderboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-web-app-cache-aside-leaderboard.md
In this tutorial, you learn how to:
> * Provision the Azure resources for the application using a Resource Manager template. > * Publish the application to Azure using Visual Studio. ## Prerequisites
azure-cache-for-redis Create Manage Cache https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/scripts/create-manage-cache.md
In this scenario, you learn how to create an Azure Cache for Redis. You then learn to get details of an Azure Cache for Redis instance, including provisioning status, the hostname, ports, and keys for an Azure Cache for Redis instance. Finally, you learn to delete the cache. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)] ## Sample script ### Run the script
az group delete --resource-group $resourceGroup -y
## Clean up resources ```azurecli az group delete --name $resourceGroup
azure-cache-for-redis Create Manage Premium Cache Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/scripts/create-manage-premium-cache-cluster.md
In this scenario, you learn how to create a 6 GB Premium tier Azure Cache for Redis with clustering enabled and two shards. You then learn to get details of an Azure Cache for Redis instance, including provisioning status, the hostname, ports, and keys for an Azure Cache for Redis instance. Finally, you learn to delete the cache. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)] ## Sample script ### Run the script
In this scenario, you learn how to create a 6 GB Premium tier Azure Cache for Re
## Clean up resources ```azurecli az group delete --name $resourceGroup
azure-compute-fleet Quickstart Create Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-compute-fleet/quickstart-create-rest-api.md
This article steps through using an ARM template to create an Azure Compute Fleet. ## Prerequisites
For more information on assigning roles, see [assign Azure roles using the Azu
## ARM template ARM templates let you deploy groups of related resources. In a single template, you can create the Virtual Machine Scale Set, install applications, and configure autoscale rules. With the use of variables and parameters, this template can be reused to update existing, or create extra scale sets. You can deploy templates through the Azure portal, Azure CLI, or Azure PowerShell, or from continuous integration / continuous delivery (CI/CD) pipelines.
azure-functions Create Resources Azure Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-resources-azure-powershell.md
This article contains the following examples:
[!INCLUDE [azure-powershell-requirements](../../includes/azure-powershell-requirements.md)] ## Create a serverless function app for C#
azure-functions Durable Functions Create First Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-create-first-csharp.md
To complete this tutorial:
* Make sure that you have version 3.1 or a later version of the [.NET Core SDK](https://dotnet.microsoft.com/download) installed. ## <a name="create-an-azure-functions-project"></a>Create your local project
To complete this tutorial:
* Verify that you have the [Azurite Emulator](../../storage/common//storage-use-azurite.md) installed and running. ## Create a function app project
azure-functions Durable Functions Isolated Create First Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-isolated-create-first-csharp.md
To complete this tutorial:
* Make sure that you have version 3.1 or a later version of the [.NET Core SDK](https://dotnet.microsoft.com/download) installed. ## <a name="create-an-azure-functions-project"></a>Create your local project
To complete this tutorial:
* Verify that you have the [Azurite Emulator](../../storage/common/storage-use-azurite.md) installed and running. ## Create a function app project
azure-functions Quickstart Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/quickstart-java.md
To complete this tutorial, you need:
- An Azure Storage account, which requires that you have an Azure subscription. ::: zone pivot="create-option-manual-setup"
azure-functions Quickstart Js Vscode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/quickstart-js-vscode.md
To complete this tutorial:
* Make sure that you have version 18.x+ of [Node.js](https://nodejs.org/) installed. ::: zone-end ## <a name="create-an-azure-functions-project"></a>Create your local project
azure-functions Quickstart Powershell Vscode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/quickstart-powershell-vscode.md
To complete this tutorial:
* Durable Functions require an Azure storage account. You need an Azure subscription. ## <a name="create-an-azure-functions-project"></a>Create your local project
azure-functions Quickstart Python Vscode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/quickstart-python-vscode.md
To complete this tutorial:
* Make sure that you have version 3.7, 3.8, 3.9, or 3.10 of [Python](https://www.python.org/) installed. ## <a name="create-an-azure-functions-project"></a>Create your local project
azure-functions Quickstart Ts Vscode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/quickstart-ts-vscode.md
To complete this tutorial:
::: zone-end * Make sure that you have [TypeScript](https://www.typescriptlang.org/) v4.x+ installed. ## <a name="create-an-azure-functions-project"></a>Create your local project
azure-functions Functions App Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-app-settings.md
The instrumentation key for Application Insights. Don't use both `APPINSIGHTS_IN
Don't use both `APPINSIGHTS_INSTRUMENTATIONKEY` and `APPLICATIONINSIGHTS_CONNECTION_STRING`. Use of `APPLICATIONINSIGHTS_CONNECTION_STRING` is recommended. ## APPLICATIONINSIGHTS_CONNECTION_STRING
azure-functions Functions Bindings Azure Sql Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql-input.md
description: Learn to use the Azure SQL input binding in Azure Functions.
Previously updated : 6/20/2024 Last updated : 6/26/2024 zone_pivot_groups: programming-languages-set-functions
The following example shows a SQL input binding in a function.json file and a Py
# [v2](#tab/python-v2)
+The following is sample Python code for the function_app.py file:
+ ```python import json import logging
The following example shows a SQL input binding in a Python function that is [tr
# [v2](#tab/python-v2)
+The following is sample Python code for the function_app.py file:
+ ```python import json import logging
The stored procedure `dbo.DeleteToDo` must be created on the database. In this
# [v2](#tab/python-v2)
+The following is sample Python code for the function_app.py file:
+ ```python import json import logging
azure-functions Functions Bindings Azure Sql Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql-output.md
description: Learn to use the Azure SQL output binding in Azure Functions.
Previously updated : 6/20/2024 Last updated : 6/26/2024 zone_pivot_groups: programming-languages-set-functions
The following example shows a SQL output binding in a function.json file and a P
# [v2](#tab/python-v2)
+The following is sample Python code for the function_app.py file:
+ ```python import json import logging
CREATE TABLE dbo.RequestLog (
# [v2](#tab/python-v2)
+The following is sample Python code for the function_app.py file:
+ ```python from datetime import datetime import json
azure-functions Functions Bindings Azure Sql Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql-trigger.md
- devx-track-js - devx-track-python - ignite-2023 Previously updated : 6/24/2024 Last updated : 6/26/2024 zone_pivot_groups: programming-languages-set-functions-lang-workers
The following example shows a Python function that is invoked when there are cha
# [v2](#tab/python-v2)
+The following is sample Python code for the function_app.py file:
+ ```python import json import logging
azure-functions Functions Bindings Register https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-register.md
Title: Register Azure Functions binding extensions description: Learn to register an Azure Functions binding extension based on your environment. Previously updated : 03/19/2022 Last updated : 06/26/2024 # Register Azure Functions binding extensions
The following table indicates when and how you register bindings.
## <a name="extension-bundles"></a>Extension bundles
-By default, extension bundles are used by Java, JavaScript, PowerShell, Python, C# script, and Custom Handler function apps to work with binding extensions. In cases where extension bundles can't be used, you can explicitly install binding extensions with your function app project. Extension bundles are supported for version 2.x and later version of the Functions runtime.
+By default, extension bundles provide binding support for functions in these languages:
+
++ Java
++ JavaScript
++ PowerShell
++ Python
++ C# script
++ Other (custom handlers)
+
+In rare cases where extension bundles can't be used, you can explicitly install binding extensions with your function app project. Extension bundles are supported for version 2.x and later version of the Functions runtime.
Extension bundles are a way to add a pre-defined set of compatible binding extensions to your function app. Extension bundles are versioned. Each version contains a specific set of binding extensions that are verified to work together. Select a bundle version based on the extensions that you need in your app.
The following table lists the currently available version ranges of the default
> [!NOTE]
-> Even though host.json supports custom ranges for `version`, you should use a version range value from this table, such as `[4.0.0, 5.0.0)`.
+> Even though host.json supports custom ranges for `version`, you should use a version range value from this table, such as `[4.0.0, 5.0.0)`. For a complete list of extension bundle releases and extension versions in each release, see the [extension bundles release page](https://github.com/Azure/azure-functions-extension-bundles/releases).
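+
+For reference, a minimal host.json sketch that pins a bundle version range (the `extensionBundle` id shown is the standard default bundle; the range is one of the values from the table above):
+
+```json
+{
+  "version": "2.0",
+  "extensionBundle": {
+    "id": "Microsoft.Azure.Functions.ExtensionBundle",
+    "version": "[4.0.0, 5.0.0)"
+  }
+}
+```
+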
## Explicitly install extensions
-For compiled C# class library projects ([in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md)), you install the NuGet packages for the extensions that you need as you normally would. For examples see either the [Visual Studio Code developer guide](functions-develop-vs-code.md?tabs=csharp#install-binding-extensions) or the [Visual Studio developer guide](functions-develop-vs.md#add-bindings).
+For compiled C# class library projects ([in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md)), you install the NuGet packages for the extensions that you need as you normally would. For examples, see either the [Visual Studio Code developer guide](functions-develop-vs-code.md?tabs=csharp#install-binding-extensions) or the [Visual Studio developer guide](functions-develop-vs.md#add-bindings). See the [extension bundles release page](https://github.com/Azure/azure-functions-extension-bundles/releases) to review combinations of extension versions that are verified compatible.
For non-.NET languages and C# script, when you can't use extension bundles you need to manually install required binding extensions in your local project. The easiest way is to use Azure Functions Core Tools. For more information, see [func extensions install](functions-core-tools-reference.md#func-extensions-install).
azure-functions Functions Create First Function Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-first-function-bicep.md
In this article, you use Azure Functions with Bicep to create a function app and
Completing this quickstart incurs a small cost of a few USD cents or less in your Azure account. After you create the function app, you can deploy Azure Functions project code to that app.
azure-functions Functions Create First Function Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-first-function-resource-manager.md
In this article, you use Azure Functions with an Azure Resource Manager template
Completing this quickstart incurs a small cost of a few USD cents or less in your Azure account. If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
azure-functions Functions Create First Java Gradle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-first-java-gradle.md
To develop functions using Java, you must have the following installed:
- [Azure Functions Core Tools](./functions-run-local.md#v2) version 2.6.666 or above - [Gradle](https://gradle.org/), version 6.8 and above
-You also need an active Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+You also need an active Azure subscription. [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
> [!IMPORTANT] > The JAVA_HOME environment variable must be set to the install location of the JDK to complete this quickstart.
azure-functions Functions Create First Quarkus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-first-quarkus.md
In this article, you'll develop, build, and deploy a serverless Java app to Azur
## Prerequisites * The [Azure CLI](/cli/azure/overview) installed on your own computer.
-* An [Azure account](https://azure.microsoft.com/). [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+* An [Azure account](https://azure.microsoft.com/). [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
* [Java JDK 17](/azure/developer/java/fundamentals/java-support-on-azure) with `JAVA_HOME` configured appropriately. This article was written with Java 17 in mind, but Azure Functions and Quarkus also support older versions of Java. * [Apache Maven 3.8.1+](https://maven.apache.org).
azure-functions Functions Create Function App Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-function-app-portal.md
Please review the [known issues](./recover-python-functions.md#development-issue
## Prerequisites ## Sign in to Azure
azure-functions Functions Create Maven Eclipse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-maven-eclipse.md
This article shows you how to create a [serverless](https://azure.microsoft.com/
<!-- TODO ![Access a Hello World function from the command line with cURL](media/functions-create-java-maven/hello-azure.png) --> ## Set up your development environment
azure-functions Functions Create Maven Kotlin Intellij https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-maven-kotlin-intellij.md
This article shows you how to create an HTTP-triggered Java function in an IntelliJ IDEA project, run and debug the project in the integrated development environment (IDE), and finally deploy the function project to a function app in Azure. ## Set up your development environment
azure-functions Functions Develop Vs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-develop-vs.md
Unless otherwise noted, procedures and examples shown are for Visual Studio 2022
- Visual Studio 2022, including the **Azure development** workload. - Other resources that you need, such as an Azure Storage account, are created in your subscription during the publishing process. -- [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+- [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
## Create an Azure Functions project
azure-functions Functions Event Hub Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-event-hub-cosmos-db.md
In this tutorial, you'll:
> * Create and test Java functions that interact with these resources. > * Deploy your functions to Azure and monitor them with Application Insights. ## Prerequisites
azure-functions Functions Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference.md
Title: Guidance for developing Azure Functions
description: Learn the Azure Functions concepts and techniques that you need to develop functions in Azure, across all programming languages and bindings. ms.assetid: d8efe41a-bef8-4167-ba97-f3e016fcd39e Previously updated : 09/06/2023 Last updated : 06/26/2024 zone_pivot_groups: programming-languages-set-functions
These tools integrate with [Azure Functions Core Tools](./functions-develop-loca
::: zone pivot="programming-language-javascript,programming-language-typescript" Portal editing is only supported for [Node.js version 3](functions-reference-node.md?pivots=nodejs-model-v3), which uses the function.json file. ::: zone-end
-Portal editing is only supported for [Python version 1](functions-reference-python.md?pivots=python-mode-configuration), which uses the function.json file.
## Deployment
azure-functions Functions Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-scale.md
Title: Azure Functions scale and hosting
description: Compare the various options you need to consider when choosing a hosting plan in which to run your function app in Azure Functions. ms.assetid: 5b63649c-ec7f-4564-b168-e0a74cb7e0f3 Previously updated : 05/10/2024 Last updated : 06/27/2024 # Azure Functions hosting options
This table shows operating system support for the hosting options.
| **[Dedicated plan]** | ✅ Code-only<br/>✅ Container | ✅ Code-only | | **[Container Apps]** | ✅ Container-only | ❌ Not supported |
-<sup>1</sup> Linux is the only supported operating system for the [Python runtime stack](./functions-reference-python.md).
-<sup>2</sup> Windows deployments are code-only. Functions doesn't currently support Windows containers.
+1. Linux is the only supported operating system for the [Python runtime stack](./functions-reference-python.md).
+2. Windows deployments are code-only. Functions doesn't currently support Windows containers.
[!INCLUDE [Timeout Duration section](../../includes/functions-timeout-duration.md)]
Maximum instances are given on a per-function app (Consumption) or per-plan (Pre
| **[Flex Consumption plan]** | [Per-function scaling](./flex-consumption-plan.md#per-function-scaling). Event-driven scaling decisions are calculated on a per-function basis, which provides a more deterministic way of scaling the functions in your app. With the exception of HTTP, Blob storage (Event Grid), and Durable Functions, all other function trigger types in your app scale on independent instances. All HTTP triggers in your app scale together as a group on the same instances, as do all Blob storage (Event Grid) triggers. All Durable Functions triggers also share instances and scale together. | Limited only by total memory usage of all instances across a given region. For more information, see [Instance memory](flex-consumption-plan.md#instance-memory). | | **[Premium plan]** | [Event driven](event-driven-scaling.md). Scale out automatically, even during periods of high load. Azure Functions infrastructure scales CPU and memory resources by adding more instances of the Functions host, based on the number of events that its functions are triggered on. | **Windows:** 100<br/>**Linux:** 20-100<sup>2</sup>| | **[Dedicated plan]**<sup>3</sup> | Manual/autoscale |10-30<br/>100 (ASE)|
+| **[Container Apps]** | [Event driven](event-driven-scaling.md). Scale out automatically, even during periods of high load. Azure Functions infrastructure scales CPU and memory resources by adding more instances of the Functions host, based on the number of events that its functions are triggered on. | 10-300<sup>4</sup> |
-
-<sup>1</sup> During scale-out, there's currently a limit of 500 instances per subscription per hour for Linux apps on a Consumption plan. <br/>
-<sup>2</sup> In some regions, Linux apps on a Premium plan can scale to 100 instances. For more information, see the [Premium plan article](functions-premium-plan.md#region-max-scale-out). <br/>
-<sup>3</sup> For specific limits for the various App Service plan options, see the [App Service plan limits](../azure-resource-manager/management/azure-subscription-service-limits.md#app-service-limits).
+1. During scale-out, there's currently a limit of 500 instances per subscription per hour for Linux apps on a Consumption plan. <br/>
+2. In some regions, Linux apps on a Premium plan can scale to 100 instances. For more information, see the [Premium plan article](functions-premium-plan.md#region-max-scale-out). <br/>
+3. For specific limits for the various App Service plan options, see the [App Service plan limits](../azure-resource-manager/management/azure-subscription-service-limits.md#app-service-limits).
+4. On Container Apps, you can set the [maximum number of replicas](../container-apps/scale-app.md#scale-definition), which is honored as long as there's enough cores quota available.
## Cold start behavior | Plan | Details | | -- | -- |
-| **[Consumption plan]** | Apps can scale to zero when idle, meaning some requests might have more latency at startup. The consumption plan does have some optimizations to help decrease cold start time, including pulling from prewarmed placeholder functions that already have the function host and language processes running. |
+| **[Consumption plan]** | Apps can scale to zero when idle, meaning some requests might have more latency at startup. The consumption plan does have some optimizations to help decrease cold start time, including pulling from prewarmed placeholder functions that already have the host and language processes running. |
| **[Flex Consumption plan]** | Supports [always ready instances](./flex-consumption-plan.md#always-ready-instances) to reduce the delay when provisioning new instances. | | **[Premium plan]** | Supports [always ready instances](./functions-premium-plan.md#always-ready-instances) to avoid cold starts by letting you maintain one or more _perpetually warm_ instances. | | **[Dedicated plan]** | When running in a Dedicated plan, the Functions host can run continuously on a prescribed number of instances, which means that cold start isn't really an issue. |
+| **[Container Apps]** | Depends on the [minimum number of replicas](../container-apps/scale-app.md#scale-definition):<br/> • When set to zero: apps can scale to zero when idle and some requests might have more latency at startup.<br/>• When set to one or more: the host process runs continuously, which means that cold start isn't an issue. |
## Service limits
Maximum instances are given on a per-function app (Consumption) or per-plan (Pre
| **[Flex Consumption plan]** | Billing is based on number of executions, the memory of instances when they're actively executing functions, plus the cost of any [always ready instances](./flex-consumption-plan.md#always-ready-instances). For more information, see [Flex Consumption plan billing](flex-consumption-plan.md#billing). | **[Premium plan]** | Premium plan is based on the number of core seconds and memory used across needed and prewarmed instances. At least one instance per plan must always be kept warm. This plan provides the most predictable pricing. | | **[Dedicated plan]** | You pay the same for function apps in an App Service Plan as you would for other App Service resources, like web apps.<br/><br/>For an ASE, there's a flat monthly rate that pays for the infrastructure and doesn't change with the size of the environment. There's also a cost per App Service plan vCPU. All apps hosted in an ASE are in the Isolated pricing SKU. For more information, see the [ASE overview article](../app-service/environment/overview.md#pricing). |
+| **[Container Apps]** | Billing in Azure Container Apps is based on your plan type. For more information, see [Billing in Azure Container Apps](../container-apps/billing.md).|
For a direct cost comparison between dynamic hosting plans (Consumption, Flex Consumption, and Premium), see the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/). For pricing of the various Dedicated plan options, see the [App Service pricing page](https://azure.microsoft.com/pricing/details/app-service). For pricing Container Apps hosting, see [Azure Container Apps pricing](https://azure.microsoft.com/pricing/details/container-apps/).
azure-functions Python Memory Profiler Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/python-memory-profiler-reference.md
Before you start developing a Python function app, you must meet these requireme
* An active Azure subscription. ## Memory profiling process
azure-functions Functions Cli Create App Service Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/scripts/functions-cli-create-app-service-plan.md
This Azure Functions sample script creates a function app, which is a container for your functions. The function app that is created uses a dedicated App Service plan, which means your server resources are always on. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)] ## Sample script ### Run the script
This Azure Functions sample script creates a function app, which is a container
## Clean up resources ```azurecli az group delete --name $resourceGroup
azure-functions Functions Cli Create Function App Connect To Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/scripts/functions-cli-create-function-app-connect-to-cosmos-db.md
This Azure Functions sample script creates a function app and connects the function to an Azure Cosmos DB database. It makes the connection using an Azure Cosmos DB endpoint and access key that it adds to app settings. The created app setting that contains the connection can be used with an [Azure Cosmos DB trigger or binding](../functions-bindings-cosmosdb.md). [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)] ## Sample script ### Run the script
This Azure Functions sample script creates a function app and connects the funct
## Clean up resources ```azurecli az group delete --name $resourceGroup
azure-functions Functions Cli Create Function App Connect To Storage Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/scripts/functions-cli-create-function-app-connect-to-storage-account.md
This Azure Functions sample script creates a function app and connects the function to an Azure Storage account. The created app setting that contains the storage connection string can be used with a [storage trigger or binding](../functions-bindings-storage-blob.md). [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)] ## Sample script ### Run the script
This Azure Functions sample script creates a function app and connects the funct
## Clean up resources ```azurecli az group delete --name $resourceGroup
azure-functions Functions Cli Create Function App Github Continuous https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/scripts/functions-cli-create-function-app-github-continuous.md
This Azure Functions sample script creates a function app using the [Consumption plan](../consumption-plan.md), along with its related resources. The script also configures your function code for continuous deployment from a public GitHub repository. There is also commented out code for using a private GitHub repository. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)] ## Sample script ### Run the script
This Azure Functions sample script creates a function app using the [Consumption
## Clean up resources ```azurecli az group delete --name $resourceGroup
azure-functions Functions Cli Create Premium Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/scripts/functions-cli-create-premium-plan.md
This Azure Functions sample script creates a function app, which is a container for your functions. The function app that is created uses a [scalable Premium plan](../functions-premium-plan.md). [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)] ## Sample script ### Run the script
This Azure Functions sample script creates a function app, which is a container
## Clean up resources ```azurecli az group delete --name $resourceGroup
azure-functions Functions Cli Create Serverless Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/scripts/functions-cli-create-serverless-python.md
This Azure Functions sample script creates a function app, which is a container
>[!NOTE] >The function app created runs on Python version 3.9. Python version 3.7 and 3.8 are also supported by Azure Functions. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)] ## Sample script ### Run the script
This Azure Functions sample script creates a function app, which is a container
## Clean up resources ```azurecli az group delete --name $resourceGroup
azure-functions Functions Cli Create Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/scripts/functions-cli-create-serverless.md
This Azure Functions sample script creates a function app, which is a container for your functions. The function app is created using the [Consumption plan](../consumption-plan.md), which is ideal for event-driven serverless workloads. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)] ## Sample script ### Run the script
This Azure Functions sample script creates a function app, which is a container
## Clean up resources ```azurecli az group delete --name $resourceGroup
azure-functions Functions Cli Mount Files Storage Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/scripts/functions-cli-mount-files-storage-linux.md
This Azure Functions sample script creates a function app using the [Consumption
>[!NOTE] >The function app created runs on Python version 3.9. Azure Functions also [supports Python versions 3.7 and 3.8](../functions-reference-python.md#python-version). [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)] ## Sample script ### Run the script
This Azure Functions sample script creates a function app using the [Consumption
## Clean up resources ```azurecli az group delete --name $resourceGroup
azure-government Azure Services In Fedramp Auditscope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compliance/azure-services-in-fedramp-auditscope.md
recommendations: false Previously updated : 02/05/2023 Last updated : 06/26/2024 # Azure, Dynamics 365, Microsoft 365, and Power Platform services compliance scope
For current Azure Government regions and available services, see [Products avail
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and Power Platform cloud services in scope for FedRAMP High, DoD IL2, DoD IL4, DoD IL5, and DoD IL6 authorizations across Azure, Azure Government, and Azure Government Secret cloud environments. For other authorization details in Azure Government Secret and Azure Government Top Secret, contact your Microsoft account representative. ## Azure public services by audit scope
-*Last updated: January 2024*
+*Last updated: June 2024*
### Terminology used
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Microsoft Entra Domain Services](../../active-directory-domain-services/index.yml) | &#x2705; | &#x2705; | | [Microsoft Entra provisioning service](../../active-directory/app-provisioning/how-provisioning-works.md)| &#x2705; | &#x2705; | | [Microsoft Entra multifactor authentication](../../active-directory/authentication/concept-mfa-howitworks.md) | &#x2705; | &#x2705; |
-| [Azure API for FHIR](../../healthcare-apis/azure-api-for-fhir/index.yml) | &#x2705; | &#x2705; |
+| [Azure Health Data Services](../../healthcare-apis/azure-api-for-fhir/index.yml) | &#x2705; | &#x2705; |
| **Service** | **FedRAMP High** | **DoD IL2** | | [Azure Arc-enabled servers](../../azure-arc/servers/index.yml) | &#x2705; | &#x2705; | | [Azure Arc-enabled Kubernetes](../../azure-arc/kubernetes/index.yml) | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Power BI](/power-bi/fundamentals/) | &#x2705; | &#x2705; | | [Power BI Embedded](/power-bi/developer/embedded/) | &#x2705; | &#x2705; | | [Power Data Integrator for Dataverse](/power-platform/admin/data-integrator) (formerly Dynamics 365 Integrator App) | &#x2705; | &#x2705; |
-| [Power Virtual Agents](/power-virtual-agents/) | &#x2705; | &#x2705; |
+| [Microsoft Copilot Studio](/power-virtual-agents/) | &#x2705; | &#x2705; |
| [Private Link](../../private-link/index.yml) | &#x2705; | &#x2705; | | [Public IP](../../virtual-network/ip-services/public-ip-addresses.md) | &#x2705; | &#x2705; | | [Resource Graph](../../governance/resource-graph/index.yml) | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
**&ast;&ast;** FedRAMP High authorization for Azure Databricks is applicable to limited regions in Azure. To configure Azure Databricks for FedRAMP High use, contact your Microsoft or Databricks representative. ## Azure Government services by audit scope
-*Last updated: November 2023*
+*Last updated: June 2024*
### Terminology used
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Azure Maps](../../azure-maps/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Azure Monitor](../../azure-monitor/index.yml) (incl. [Application Insights](../../azure-monitor/app/app-insights-overview.md) and [Log Analytics](../../azure-monitor/logs/data-platform-logs.md)) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Azure NetApp Files](../../azure-netapp-files/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Azure OpenAI](../../ai-services/openai/index.yml) | &#x2705; | &#x2705; | | | |
| [Azure Policy](../../governance/policy/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Azure Policy's guest configuration](../../governance/machine-configuration/overview.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Azure Red Hat OpenShift](../../openshift/index.yml) | &#x2705; | &#x2705; | &#x2705; | | |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Power BI](/power-bi/fundamentals/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Power BI Embedded](/power-bi/developer/embedded/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Power Data Integrator for Dataverse](/power-platform/admin/data-integrator) (formerly Dynamics 365 Integrator App) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
-| [Power Virtual Agents](/power-virtual-agents/) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Microsoft Copilot Studio](/power-virtual-agents/) | &#x2705; | &#x2705; | &#x2705; | | |
| [Private Link](../../private-link/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Public IP](../../virtual-network/ip-services/public-ip-addresses.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Resource Graph](../../governance/resource-graph/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
azure-government Connect With Azure Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/connect-with-azure-pipelines.md
This how-to guide helps you use Azure Pipelines to set up continuous integration
[Azure Pipelines](/azure/devops/pipelines/get-started/what-is-azure-pipelines) is used by development teams to configure continuous deployment for applications hosted in Azure subscriptions. We can use this service for applications running in Azure Government by defining [service connections](/azure/devops/pipelines/library/service-endpoints) for Azure Government. ## Prerequisites
azure-government Documentation Government Cognitiveservices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-cognitiveservices.md
This article provides developer guidance for using Computer Vision, Face API, Te
## Prerequisites - Install and Configure [Azure PowerShell](/powershell/azure/install-azure-powershell) - Connect [PowerShell with Azure Government](documentation-government-get-started-connect-with-ps.md)
azure-government Documentation Government Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-extension.md
Last updated 08/31/2021
Azure [virtual machine (VM) extensions](../virtual-machines/extensions/features-windows.md) are small applications that provide post-deployment configuration and automation tasks on Azure VMs. ## Virtual machine extensions
azure-government Documentation Government Get Started Connect With Ps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-get-started-connect-with-ps.md
Microsoft Azure Government delivers a dedicated cloud with world-class security
This quickstart shows how to use PowerShell to access and start managing resources in Azure Government. If you don't have an Azure Government subscription, create a [free account](https://azure.microsoft.com/global-infrastructure/government/request/) before you begin. ## Prerequisites
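Beyond the prerequisites, the core of this quickstart is signing in against the Azure Government environment rather than the public cloud. A minimal sketch with the Az module (assuming it's already installed):

```powershell
# Target the Azure Government cloud instead of the public Azure cloud
Connect-AzAccount -Environment AzureUSGovernment

# Confirm that the current context points at Azure Government
Get-AzContext | Select-Object -ExpandProperty Environment
```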
azure-government Documentation Government Image Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-image-gallery.md
Last updated 08/31/2021
Microsoft Azure Government Marketplace provides a similar experience as Azure Marketplace. You can choose to deploy prebuilt images from Microsoft and our partners, or upload your own VHDs. This approach gives you the flexibility to deploy your own standardized images if needed. ## Images
azure-government Documentation Government Manage Oms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-manage-oms.md
Setting up this kind of environment can be challenging. Onboarding your fleet of
## Azure Monitor logs Azure Monitor logs, now available in Azure Government, uses hyperscale log search to quickly analyze your data and expose threats in your environment. This article focuses on using Azure Monitor logs in Azure Government. Azure Monitor logs can:
azure-linux Quickstart Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-linux/quickstart-azure-cli.md
Get started with the Azure Linux Container Host by using the Azure CLI to deploy
## Prerequisites -- [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+- [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
- Use the Bash environment in [Azure Cloud Shell](/azure/cloud-shell/overview). For more information, see [Azure Cloud Shell Quickstart - Bash](/azure/cloud-shell/quickstart). :::image type="icon" source="~/reusable-content/ce-skilling/azure/media/cloud-shell/launch-cloud-shell-button.png" alt-text="Button to launch the Azure Cloud Shell." border="false" link="https://shell.azure.com":::
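The deployment step later in the quickstart reduces to a pair of CLI calls that select the Azure Linux OS SKU. A hedged sketch (the resource group and cluster names are placeholders):

```azurecli
# Create a resource group, then an AKS cluster whose nodes run Azure Linux
az group create --name myResourceGroup --location eastus
az aks create --resource-group myResourceGroup --name myAzureLinuxCluster --os-sku AzureLinux
```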
azure-linux Quickstart Azure Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-linux/quickstart-azure-powershell.md
Get started with the Azure Linux Container Host by using Azure PowerShell to dep
## Prerequisites -- [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+- [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
- Use the PowerShell environment in [Azure Cloud Shell](/azure/cloud-shell/overview). For more information, see [Azure Cloud Shell Quickstart](/azure/cloud-shell/quickstart). :::image type="icon" source="~/reusable-content/ce-skilling/azure/media/cloud-shell/launch-cloud-shell-button.png" alt-text="Button to launch the Azure Cloud Shell." border="false" link="https://shell.azure.com"::: - If you're running PowerShell locally, install the `Az PowerShell` module and connect to your Azure account using the [`Connect-AzAccount`](/powershell/module/az.accounts/Connect-AzAccount) cmdlet. For more information about installing the Az PowerShell module, see [Install Azure PowerShell][install-azure-powershell].
azure-linux Quickstart Azure Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-linux/quickstart-azure-resource-manager-template.md
Last updated 04/18/2023
Get started with the Azure Linux Container Host by using an Azure Resource Manager (ARM) template to deploy an Azure Linux Container Host cluster. After installing the prerequisites, you'll create an SSH key pair, review the template, deploy the template and validate it, and then deploy an application. ## Prerequisites -- [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+- [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
- Use the Bash environment in [Azure Cloud Shell](/azure/cloud-shell/overview). For more information, see [Azure Cloud Shell Quickstart - Bash](/azure/cloud-shell/quickstart). :::image type="icon" source="~/reusable-content/ce-skilling/azure/media/cloud-shell/launch-cloud-shell-button.png" alt-text="Button to launch the Azure Cloud Shell." border="false" link="https://shell.azure.com":::
azure-linux Quickstart Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-linux/quickstart-terraform.md
Get started with the Azure Linux Container Host using Terraform to deploy an Azu
## Prerequisites -- [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+- [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
- If you haven't already configured Terraform, you can do so using one of the following options: - [Azure Cloud Shell with Bash](/azure/developer/terraform/get-started-cloud-shell-bash?tabs=bash)
azure-linux Tutorial Azure Linux Create Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-linux/tutorial-azure-linux-create-cluster.md
In later tutorials, you'll learn how to add an Azure Linux node pool to an exist
## Prerequisites -- [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+- [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
- You need the latest version of Azure CLI. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). ## 1 - Install the Kubernetes CLI
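If you use the Azure CLI, the Kubernetes CLI can be installed directly through it. A minimal sketch:

```azurecli
# Install kubectl by using the Azure CLI
az aks install-cli
```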
azure-maps How To Create Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-create-template.md
You can create your Azure Maps account using an Azure Resource Manager (ARM) template. After you have an account, you can implement the APIs in your website or mobile application. If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template opens in the Azure portal.
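As a rough illustration of what such a template declares, here's a minimal Bicep sketch of an Azure Maps account. The API version and SKU name are assumptions for illustration; verify them against the Microsoft.Maps template reference before deploying:

```bicep
@description('The name of the Azure Maps account.')
param accountName string

@description('The location of the Azure Maps account.')
param location string = resourceGroup().location

// API version and SKU are illustrative assumptions; check the
// Microsoft.Maps/accounts template reference for current values.
resource mapsAccount 'Microsoft.Maps/accounts@2021-02-01' = {
  name: accountName
  location: location
  sku: {
    name: 'G2'
  }
}
```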
azure-maps Migrate Get Static Map https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-get-static-map.md
+
+ Title: Migrate Bing Maps Get a Static Map API to Azure Maps Get Map Static Image API
+
+description: Learn how to migrate the Bing Maps Get a Static Map API to the Azure Maps Get Map Static Image API.
++ Last updated : 06/26/2024+++++
+# Migrate Bing Maps Get a Static Map API
+
+This article explains how to migrate the Bing Maps [Get a Static Map] API to the Azure Maps [Get Map Static Image] API. Azure Maps Get Map Static Image API renders a user-defined, rectangular Road, Satellite/Aerial, or Traffic style map image.
+
+## Prerequisites
+
+- An [Azure Account]
+- An [Azure Maps account]
+- A [subscription key] or other form of [Authentication with Azure Maps]
+
+## Notable differences
+
+- Bing Maps Get a Static Map API offers Road, Satellite/Aerial, Traffic, Streetside, Birdseye, and Ordnance Survey map styles. Azure Maps Get Map Static Image API offers the same styles except for Streetside, Birdseye, and Ordnance Survey.
+- Bing Maps Get a Static Map API supports getting a static map using coordinates, street address or place name as the location input. Azure Maps Get Map Static Image API supports only coordinates as the location input.
+- Bing Maps Get a Static Map API supports getting a static map of a driving, walking, or transit route natively. Azure Maps Get Map Static Image API doesn't provide route map functionality natively.
+- Bing Maps Get a Static Map API provides static maps in PNG, JPEG and GIF image formats. Azure Maps Get Map Static Image API provides static maps in PNG and JPEG image formats.
+- Bing Maps Get a Static Map API supports XML and JSON response formats. Azure Maps Get Map Static Image API supports only JSON response format.
+- Bing Maps Get a Static Map API supports HTTP GET and POST requests. Azure Maps Get Map Static Image API supports HTTP GET requests.
+- Bing Maps Get a Static Map API uses coordinates in the latitude & longitude format. Azure Maps Get Map Static Image API uses coordinates in the longitude & latitude format, as defined in [GeoJSON].
+- Unlike Bing Maps for Enterprise, Azure Maps is a global service that supports specifying a geographic scope, which allows you to limit data residency to the European (EU) or United States (US) geographic areas (geos). All requests (including input data) are processed exclusively in the specified geographic area. For more information, see [Azure Maps service geographic scope].
+
+## Security and authentication
+
+Bing Maps for Enterprise only supports API key authentication. Azure Maps supports multiple ways to authenticate your API calls, such as a [subscription key](azure-maps-authentication.md#shared-key-authentication), [Microsoft Entra ID], and [Shared Access Signature (SAS) Token]. For more information on security and authentication in Azure Maps, see [Authentication with Azure Maps] and the [Security] section in the Azure Maps Get Map Static Image documentation.
+
+## Request parameters
+
+The following table lists the Bing Maps _Get a Static Map_ request parameters and the Azure Maps equivalent:
+
+| Bing Maps request parameter | Parameter alias | Azure Maps request parameter | Required in Azure Maps | Azure Maps data type | Description |
+|--|--|--|--|--|--|
+| centerPoint | | center | True (if not using bbox) | number[] | Bing Maps Get a Static Map API requires coordinates be in latitude & longitude format, whereas Azure Maps Get Map Static Image API requires longitude & latitude format, as defined in the [GeoJSON] format. <br><br>Longitude ranges from -180 to 180; latitude ranges from -90 to 90. Note: Either `center` or `bbox` is required. They're mutually exclusive. |
+| culture | c | language | FALSE | String | In Azure Maps Get Map Static Image API, this is the language in which search results should be returned and is specified in the Azure Maps [request header]. For more information, see [Supported Languages]. |
+| declutterPins | dcl | Not supported   | Not supported | Not supported | |
+| dpi | dir | Not supported | Not supported | Not supported | |
+| drawCurve | dv | path | FALSE | String | |
+| fieldOfView | fov | Not supported | Not supported | Not supported | In Bing Maps, this parameter is used for `imagerySet` Birdseye, `BirdseyeWithLabels`, `BirdseyeV2`, `BirdseyeV2WithLabels`, `OrdnanceSurvey`, `Streetside`. Azure Maps doesn't support these maps styles. |
+| format | fmt | format | TRUE | String | Bing Maps Get a Static Map API provides static maps in PNG, JPEG and GIF image formats. Azure Maps Get Map Static Image API provides static maps in PNG and JPEG image formats. |
+| heading | | Not supported | Not supported | Not supported | In Bing Maps, this parameter is used for imagerySet Birdseye, BirdseyeWithLabels, BirdseyeV2, BirdseyeV2WithLabels, OrdnanceSurvey, Streetside. Azure Maps doesn't support these maps styles. |
+| highlightEntity | he | Not supported | Not supported | Not supported | In Bing Maps Get a Static Map API, this parameter is used to get a polygon of the location input (entity) displayed on the map natively. Azure Maps Get a Map Static Image API doesn't support this feature, however, you can get a polygon of a location (locality) from the Azure Maps [Get Polygon] API and then display that on the static map. |
+| imagerySet | | tilesetId | TRUE | [TilesetId] | |
+| mapArea | ma | bbox | True (if not using center) | number[] | A bounding box, defined by two longitudes and two latitudes, represents the four sides of a rectangular area on the Earth, in the format of `minLon, minLat, maxLon, maxLat`. <br><br>Note: Either `center` or `bbox` are required parameters. They're mutually exclusive. `bbox` shouldn’t be used with `height` or `width`. |
+| mapLayer | ml | trafficLayer | FALSE | TrafficTilesetId | Optional. If `TrafficLayer` is provided, it returns a map image with the corresponding traffic layer. For more information, see [tilesetId]. |
+| mapSize | ms | height | TRUE | integer int32 | |
+| | | width | | | |
+| mapMetadata | mmd | Not supported | Not supported | Not supported | |
+| orientation | dir | Not supported | Not supported | Not supported | In Bing Maps Get a Static Map API, this parameter is used for 'imagerySet' Birdseye, BirdseyeWithLabels, BirdseyeV2, BirdseyeV2WithLabels, OrdnanceSurvey, Streetside. Azure Maps doesn't support these map styles. |
+| pitch | | Not supported | Not supported | Not supported | In Bing Maps Get a Static Map API, this parameter is used for 'imagerySet' Birdseye, BirdseyeWithLabels, BirdseyeV2, BirdseyeV2WithLabels, OrdnanceSurvey, Streetside. Azure Maps doesn't support these map styles. |
+| pushpin | pp | pins | FALSE | String | In Bing Maps Get a Static Map API, an HTTP GET request is limited to 18 pins and an HTTP POST request is limited to 100 pins per static map. Azure Maps Get Map Static Image API HTTP GET request doesn't have a limit on the number of pins per static map. However, the number of pins supported on the static map is based on the maximum number of characters supported in the HTTP GET request. See Azure Maps Get Map Static Image API 'pins' parameter in [URI Parameters] for more details on pushpin support. |
+| query | | Not supported | Not supported | Not supported | Azure Maps Get Map Static Image API supports only coordinates as the location input, not street address or place name. Use the Azure Maps Get Geocoding API to convert a street address or place name to coordinates. |
+| Route Parameters: avoid | None | Not supported | Not supported | Not supported | Azure Maps Get Map Static Image API doesn't provide route map functionality natively. To get a static map with a route path on it, use the Azure Maps [Get Route Directions] or [Post Route Directions] API to get route path coordinates of a given route and then use the Azure Maps [Get Map Static Image] API `path` feature to overlay the route path coordinates on the static map. |
+| Route Parameters: distanceBeforeFirstTurn | dbft | Not supported | Not supported | Not supported | Azure Maps Get Map Static Image API doesn't provide route map functionality natively. To get a static map with a route path on it, you can use the Azure Maps [Get Route Directions] or [Post Route Directions] API to get route path coordinates of a given route and then use the Azure Maps [Get Map Static Image] API `path` feature to overlay the route path coordinates on the static map. |
+| Route Parameters: dateTime | dt | Not supported | Not supported | Not supported | Azure Maps Get Map Static Image API doesn't provide route map functionality natively. To get a static map with a route path on it, you can use the Azure Maps [Get Route Directions] or [Post Route Directions] API to get route path coordinates of a given route and then use the Azure Maps [Get Map Static Image] API `path` feature to overlay the route path coordinates on the static map. |
+| Route Parameters: maxSolutions | maxSolns | Not supported | Not supported | Not supported | Azure Maps Get Map Static Image API doesn't provide route map functionality natively. To get a static map with a route path on it, you can use the Azure Maps [Get Route Directions] or [Post Route Directions] API to get route path coordinates of a given route and then use the Azure Maps [Get Map Static Image] API `path` feature to overlay the route path coordinates on the static map. |
+| Route Parameters: optimize | optmz | Not supported | Not supported | Not supported | Azure Maps Get Map Static Image API doesn't provide route map functionality natively. To get a static map with a route path on it, you can use the Azure Maps [Get Route Directions] or [Post Route Directions] API to get route path coordinates of a given route and then use the Azure Maps [Get Map Static Image] API `path` feature to overlay the route path coordinates on the static map. |
+| Route Parameters: timeType | tt | Not supported | Not supported | Not supported | Azure Maps Get Map Static Image API doesn't provide route map functionality natively. To get a static map with a route path on it, you can use the Azure Maps [Get Route Directions] or [Post Route Directions] API to get route path coordinates of a given route and then use the Azure Maps [Get Map Static Image] API `path` feature to overlay the route path coordinates on the static map. |
+| Route Parameters: travelMode | None | Not supported | Not supported | Not supported | Azure Maps Get Map Static Image API doesn't provide route map functionality natively. To get a static map with a route path on it, you can use the Azure Maps [Get Route Directions] or [Post Route Directions] API to get route path coordinates of a given route and then use the Azure Maps [Get Map Static Image] API `path` feature to overlay the route path coordinates on the static map. |
+| Route Parameters: waypoint.n | wp.n | Not supported | Not supported | Not supported | Azure Maps Get Map Static Image API doesn't provide route map functionality natively. To get a static map with a route path on it, you can use the Azure Maps [Get Route Directions] or [Post Route Directions] API to get route path coordinates of a given route and then use the Azure Maps [Get Map Static Image] API `path` feature to overlay the route path coordinates on the static map. |
+| style | st | Not supported | Not supported | Not supported | |
+| userRegion | ur | view | FALSE | String | A string that represents an [ISO 3166-1 Alpha-2 region/country code]. This alters geopolitical disputed borders and labels to align with the specified user region. By default, the View parameter is set to “Auto” even if not defined in the request. For more information, see [Supported Views]. |
+| zoomLevel | | zoom | FALSE | String | Desired zoom level of the map. Zoom value must be in the range: 0-20 (inclusive). Default value is 12. |
+
+For more information about the Azure Maps Get Map Static Image API request parameters, see [URI Parameters].
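+As a hedged illustration of how the `pins` and `path` parameters stand in for Bing Maps pushpins and route overlays, the following sketch adds one pin and a simple two-point path to a static map. Coordinates are in longitude, latitude order, and the style modifiers shown (`lc` for line color, `lw` for line width) and the second coordinate pair are placeholders; see [URI Parameters] for the full syntax.
+
+``` http
+https://atlas.microsoft.com/map/static?api-version=2024-04-01&tilesetId=microsoft.base.road&zoom=15&center=-0.113629,51.504810&pins=default||-0.113629 51.504810&path=lc0000FF|lw3||-0.113629 51.504810|-0.112000 51.506000&subscription-key={Your-Azure-Maps-Subscription-key}
+```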
+
+## Request examples
+
+Bing Maps _Get a Static Map_ API sample GET request:
+
+``` http
+https://dev.virtualearth.net/REST/v1/Imagery/Map/Road/51.504810,-0.113629/15?mapSize=500,500&pp=51.504810,-0.113629;45&key={BingMapsKey}
+```
+
+Azure Maps _Get Map Static Image_ API sample GET request:
+
+``` http
+https://atlas.microsoft.com/map/static?api-version=2024-04-01&tilesetId=microsoft.base.road&zoom=15&center=-0.113629,51.504810&subscription-key={Your-Azure-Maps-Subscription-key}
+```
+
+## Response examples
+
+The following screenshot shows what is returned in the body of the HTTP response when executing the Bing Maps _Get a Static Map_ request:
++
+The following image shows what is returned in the body of the HTTP response when executing an Azure Maps _Get Map Static Image_ request:
++
+## Transactions usage
+
+Like Bing Maps Get a Static Map API, Azure Maps Get Map Static Image API logs one billable transaction per request. For more information on Azure Maps transactions, see [Understanding Azure Maps Transactions].
+
+## Additional information
+
+- [Render custom data on a raster map]
+
+Support
+
+- [Microsoft Q&A Forum]
+
+[Authentication with Azure Maps]: azure-maps-authentication.md
+[Azure Account]: https://azure.microsoft.com/
+[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
+[Azure Maps service geographic scope]: geographic-scope.md
+[GeoJSON]: https://geojson.org
+[Get a Static Map]: /bingmaps/rest-services/imagery/get-a-static-map
+[Get Map Static Image]: /rest/api/maps/render/get-map-static-image
+[Get Polygon]: /rest/api/maps/search/get-polygon
+[Get Route Directions]: /rest/api/maps/route/get-route-directions
+[ISO 3166-1 Alpha-2 region/country code]: https://en.wikipedia.org/wiki/ISO_3166-1_alpha-2
+[Microsoft Entra ID]: azure-maps-authentication.md#microsoft-entra-authentication
+[Microsoft Q&A Forum]: /answers
+[Post Route Directions]: /rest/api/maps/route/post-route-directions
+[Render custom data on a raster map]: how-to-render-custom-data.md
+[request header]: /rest/api/maps/render/get-map-static-image?#request-headers
+[Security]: /rest/api/maps/render/get-map-static-image#security
+[Shared Access Signature (SAS) Token]: azure-maps-authentication.md#shared-access-signature-token-authentication
+[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
+[Supported Languages]: supported-languages.md
+[Supported Views]: supported-languages.md#azure-maps-supported-views
+[TilesetId]: /rest/api/maps/render/get-map-static-image#tilesetid
+[Understanding Azure Maps Transactions]: understanding-azure-maps-transactions.md
+[URI Parameters]: /rest/api/maps/render/get-map-static-image#uri-parameters
azure-monitor Agent Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agent-manage.md
Use the Windows agent.
Perform the following steps to configure the Log Analytics agent for Windows to report to a System Center Operations Manager management group. 1. Sign on to the computer with an account that has administrative rights.
Perform the following steps to configure the Log Analytics agent for Windows to
Perform the following steps to configure the Log Analytics agent for Linux to report to a System Center Operations Manager management group. 1. Edit the file `/etc/opt/omi/conf/omiserver.conf`.
azure-monitor Data Collection Text Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-text-log.md
To complete this procedure, you need:
- JSON text must be contained in a single row for proper ingestion. The JSON body (file) format is not supported. - Optionally a Data Collection Endpoint if you plan to use Azure Monitor Private Links. The data collection endpoint must be in the same region as the Log Analytics workspace. For more information, see [How to set up data collection endpoints based on your deployment](../essentials/data-collection-endpoint-overview.md#how-to-set-up-data-collection-endpoints-based-on-your-deployment).
- For more information, see [How to set up data collection endpoints based on your deployment](../essentials/data-collection-endpoint-overview.md#how-to-set-up-data-collection-endpoints-based-on-your-deployment).
-- - A Virtual Machine, Virtual Machine Scale Set, Arc-enabled server on-premises or Azure Monitoring Agent on a Windows on-premises client that writes logs to a text or JSON file.
azure-monitor Data Sources Collectd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-sources-collectd.md
A full list of available plugins can be found at [Table of Plugins](https://coll
The following CollectD configuration is included in the Log Analytics agent for Linux to route CollectD data to the Log Analytics agent for Linux. ```xml LoadPlugin write_http
azure-monitor Data Sources Json https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-sources-json.md
# Collecting custom JSON data sources with the Log Analytics agent for Linux in Azure Monitor Custom JSON data sources can be collected into [Azure Monitor](../data-platform.md) using the Log Analytics agent for Linux. These custom data sources can be simple scripts returning JSON such as [curl](https://curl.haxx.se/) or one of [FluentD's 300+ plugins](https://www.fluentd.org/plugins/all). This article describes the configuration required for this data collection.
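For a sense of the shape of such a configuration, the following is a sketch of a FluentD `exec` input that runs a command returning JSON and tags the output for the agent. The command and interval are placeholder assumptions; the `oms.api.` tag prefix is what routes the records to Azure Monitor:

```xml
<source>
  # Run a command that emits JSON and collect its output every 30 seconds
  type exec
  command 'curl localhost/json.output'
  format json
  tag oms.api.metrics
  run_interval 30s
</source>
```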
azure-monitor Vmext Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/vmext-troubleshoot.md
If the Microsoft Monitoring Agent VM extension isn't installing or reporting, pe
For more information, see [Troubleshooting Windows extensions](../../virtual-machines/extensions/oms-windows.md). ## Troubleshoot the Linux VM extension If the Log Analytics agent for Linux VM extension isn't installing or reporting, perform the following steps to troubleshoot the issue: 1. If the extension status is **Unknown**, check if the Azure VM agent is installed and working correctly by reviewing the VM agent log file `/var/log/waagent.log`.
azure-monitor Alerts Manage Alerts Previous Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-manage-alerts-previous-version.md
The current alert rule wizard is different from the earlier experience:
## Manage log search alerts using PowerShell Use the following PowerShell cmdlets to manage rules with the [Scheduled Query Rules API](/rest/api/monitor/scheduledqueryrule-2018-04-16/scheduled-query-rules):
azure-monitor Alerts Metric Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-metric-logs.md
# Create a metric alert in Azure Monitor Logs You can use metric alert capabilities on a predefined set of logs in Azure Monitor Logs. The monitored logs, which can be collected from Azure or on-premises computers, are converted to metrics and then monitored with metric alert rules, just like any other metric.
azure-monitor Alerts Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-types.md
Log search alerts can measure two different things, which can be used for differ
- **Table rows**: The number of rows returned can be used to work with events such as Windows event logs, Syslog, and application exceptions. - **Calculation of a numeric column**: Calculations based on any numeric column can be used to include any number of resources. An example is CPU percentage.
-You can configure if log search alerts are [stateful or stateless](alerts-overview.md#alerts-and-state). This feature is currently in preview.
+You can configure if log search alerts are [stateful or stateless](alerts-overview.md#alerts-and-state).
Note that stateful log search alerts have these limitations: - they can trigger up to 300 alerts per evaluation. - you can have a maximum of 5000 alerts with the `fired` alert condition.
azure-monitor Api Custom Events Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/api-custom-events-metrics.md
Insert a few lines of code in your application to find out what users are doing with it, or to help diagnose issues. You can send telemetry from device and desktop apps, web clients, and web servers. Use the [Application Insights](./app-insights-overview.md) core telemetry API to send custom events and metrics and your own versions of standard telemetry. This API is the same API that the standard Application Insights data collectors use. ## API summary
azure-monitor Application Insights Asp Net Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/application-insights-asp-net-agent.md
We've also provided manual download instructions in case you don't have internet
To get started, you need a connection string. For more information, see [Connection strings](sdk-connection-string.md). ### Run PowerShell as Admin with an elevated execution policy
This tab describes the following cmdlets, which are members of the [Az.Applicati
> - To get started, you need a connection string. For more information, see [Create a resource](create-workspace-resource.md). > - This cmdlet requires that you review and accept our license and privacy statement. > [!IMPORTANT] > This cmdlet requires a PowerShell session with Admin permissions and an elevated execution policy. For more information, see [Run PowerShell as administrator with an elevated execution policy](?tabs=detailed-instructions#run-powershell-as-admin-with-an-elevated-execution-policy).
azure-monitor Asp Net Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-core.md
We use an [MVC application](/aspnet/core/tutorials/first-mvc-app) example. If yo
An [OpenTelemetry-based .NET offering](opentelemetry-enable.md?tabs=net) is available. For more information, see [OpenTelemetry overview](opentelemetry-overview.md). > [!NOTE] > If you want to use standalone ILogger provider, use [Microsoft.Extensions.Logging.ApplicationInsight](./ilogger.md).
azure-monitor Asp Net Trace Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-trace-logs.md
Send diagnostic tracing logs for your ASP.NET/ASP.NET Core application from ILog
> > ## Install logging on your app
azure-monitor Asp Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net.md
This procedure configures your ASP.NET web app to send telemetry to the [Applica
[!INCLUDE [azure-monitor-app-insights-otel-available-notification](../includes/azure-monitor-app-insights-otel-available-notification.md)] ## Prerequisites To add Application Insights to your ASP.NET website, you need to:
azure-monitor Azure Web Apps Net Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps-net-core.md
Enabling monitoring on your ASP.NET Core-based web applications running on [Azure App Service](../../app-service/index.yml) is now easier than ever. Previously, you needed to manually instrument your app. Now, the latest extension/agent is built into the App Service image by default. This article walks you through enabling Azure Monitor Application Insights monitoring. It also provides preliminary guidance for automating the process for large-scale deployments. ## Enable autoinstrumentation monitoring
azure-monitor Azure Web Apps Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps-net.md
Enabling monitoring on your ASP.NET-based web applications running on [Azure App
If both autoinstrumentation monitoring and manual SDK-based instrumentation are detected, only the manual instrumentation settings will be honored. This arrangement prevents duplicate data from being sent. To learn more, see the [Troubleshooting section](#troubleshooting). ## Enable autoinstrumentation monitoring
azure-monitor Configuration With Applicationinsights Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/configuration-with-applicationinsights-config.md
See connection string [code samples](sdk-connection-string.md#code-samples).
## InstrumentationKey This setting determines the Application Insights resource in which your data appears. Typically, you create a separate resource, with a separate key, for each of your applications.
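In ApplicationInsights.config, the setting is a single element under the root node. A minimal sketch (the GUID is a placeholder):

```xml
<ApplicationInsights xmlns="http://schemas.microsoft.com/ApplicationInsights/2013/Settings">
  <!-- The Application Insights resource that receives this app's telemetry -->
  <InstrumentationKey>00000000-0000-0000-0000-000000000000</InstrumentationKey>
</ApplicationInsights>
```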
azure-monitor Data Model Complete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-model-complete.md
You can group requests by logical `name` and define the `source` of this request
Request telemetry supports the standard extensibility model by using custom `properties` and `measurements`. ### Name
azure-monitor Javascript Feature Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-feature-extensions.md
The following key properties are captured by default when the plug-in is enabled
Users can set up the Click Analytics Auto-Collection plug-in via JavaScript (Web) SDK Loader Script or npm and then optionally add a framework extension. ### Add the code
azure-monitor Javascript Framework Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-framework-extensions.md
npm install @microsoft/applicationinsights-angularplugin-js
### Add the extension to your code #### [React](#tab/react)
azure-monitor Live Stream https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/live-stream.md
Live metrics are currently supported for ASP.NET, ASP.NET Core, Azure Functions,
* [ASP.NET Core](opentelemetry-enable.md?tabs=aspnetcore): Enabled by default. * [Java](./opentelemetry-enable.md?tabs=java): Enabled by default. * [Node.js](opentelemetry-enable.md?tabs=nodejs): Enabled by default.
- * [Python](opentelemetry-enable.md?tabs=python): Enabled by default.
+ * [Python](opentelemetry-enable.md?tabs=python): Pass `enable_live_metrics=True` into `configure_azure_monitor`. See the [Azure Monitor OpenTelemetry Distro](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/monitor/azure-monitor-opentelemetry#usage) documentation for more information.
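    A minimal sketch of enabling live metrics with the Python distro (the connection string is a placeholder):

    ```python
    from azure.monitor.opentelemetry import configure_azure_monitor

    # Enable live metrics when configuring the Azure Monitor OpenTelemetry Distro
    configure_azure_monitor(
        connection_string="<your-connection-string>",
        enable_live_metrics=True,
    )
    ```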
# [Classic API](#tab/classic)
Live metrics are currently supported for ASP.NET, ASP.NET Core, Azure Functions,
-2. Open the Application Insigwhts resource for your application in the [Azure portal](https://portal.azure.com). Select **Live metrics**, which is listed under **Investigate** in the left hand menu.
+2. Open the Application Insights resource for your application in the [Azure portal](https://portal.azure.com). Select **Live metrics**, which is listed under **Investigate** in the left hand menu.
3. [Secure the control channel](#secure-the-control-channel) if you might use sensitive data like customer names in your filters. ## How do live metrics differ from metrics explorer and Log Analytics?
It's possible to try custom filters without having to set up an authenticated ch
| Azure Functions v2 | Supported | Supported | Supported | Supported | **Not supported** | | Java | Supported (V2.0.0+) | Supported (V2.0.0+) | **Not supported** | Supported (V3.2.0+) | **Not supported** | | Node.js | Supported (V1.3.0+) | Supported (V1.3.0+) | **Not supported** | Supported (V1.3.0+) | **Not supported** |
-| Python | **Not supported** | **Not supported** | **Not supported** | **Not supported** | **Not supported** |
+| Python | Supported (Distro Version 1.6.0+) | **Not supported** | **Not supported** | **Not supported** | **Not supported** |
Basic metrics include request, dependency, and exception rate. Performance metrics (performance counters) include memory and CPU. Sample telemetry shows a stream of detailed information for failed requests and dependencies, exceptions, events, and traces.
azure-monitor Monitor Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/monitor-functions.md
Application Insights collects log, performance, and error data and automatically
The required Application Insights instrumentation is built into Azure Functions. All you need is a valid connection string to connect your function app to an Application Insights resource. The connection string should be added to your application settings when your function app resource is created in Azure. If your function app doesn't already have a connection string, you can set it manually. For more information, see [Monitor executions in Azure Functions](../../azure-functions/functions-monitoring.md?tabs=cmd) and [Connection strings](sdk-connection-string.md). For a list of supported autoinstrumentation scenarios, see [Supported environments, languages, and resource providers](codeless-overview.md#supported-environments-languages-and-resource-providers).
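Setting the connection string manually comes down to adding the `APPLICATIONINSIGHTS_CONNECTION_STRING` application setting. A sketch using the Azure CLI (the app and resource group names are placeholders):

```azurecli
az functionapp config appsettings set \
    --name <function-app-name> \
    --resource-group <resource-group-name> \
    --settings "APPLICATIONINSIGHTS_CONNECTION_STRING=<your-connection-string>"
```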
azure-monitor Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/nodejs.md
Before you begin, make sure that you have an Azure subscription, or [get a new o
1. Sign in to the [Azure portal](https://portal.azure.com). 1. Create an [Application Insights resource](create-workspace-resource.md). ### <a name="sdk"></a> Set up the Node.js client library
azure-monitor Opentelemetry Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-enable.md
Point the Java virtual machine (JVM) to the jar file by adding `-javaagent:"path
> If you develop a Spring Boot application, you can optionally replace the JVM argument by a programmatic configuration. For more information, see [Using Azure Monitor Application Insights with Spring Boot](./java-spring-boot.md).
-##### [Java-Native](#tab/java-native)
+##### [Java Native](#tab/java-native)
Several automatic instrumentations are enabled through configuration changes; no code changes are required
azure-monitor Opentelemetry Nodejs Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-nodejs-migrate.md
This guide provides two options to upgrade from the Azure Monitor Application In
Remove all Application Insights instrumentation from your code. Delete any sections where the Application Insights client is initialized, modified, or called. 4. Enable Application Insights with the Azure Monitor OpenTelemetry Distro.-
+ > [!IMPORTANT]
+ > *Before* you import anything else, `useAzureMonitor` must be called. There might be telemetry loss if other libraries are imported first.
Follow [getting started](opentelemetry-enable.md?tabs=nodejs) to onboard to the Azure Monitor OpenTelemetry Distro. #### Azure Monitor OpenTelemetry Distro changes and limitations
-The APIs from the Application Insights SDK 2.X aren't available in the Azure Monitor OpenTelemetry Distro. You can access these APIs through a nonbreaking upgrade path in the Application Insights SDK 3.X.
+ * The APIs from the Application Insights SDK 2.X aren't available in the Azure Monitor OpenTelemetry Distro. You can access these APIs through a nonbreaking upgrade path in the Application Insights SDK 3.X.
+ * Filtering dependencies, logs, and exceptions by operation name is not yet supported.
## [Upgrade](#tab/upgrade)
azure-monitor Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/powershell.md
# Manage Application Insights resources by using PowerShell This article shows you how to automate the creation and update of [Application Insights](./app-insights-overview.md) resources by using Azure Resource Manager. You might, for example, do so as part of a build process. Along with the basic Application Insights resource, you can create [availability web tests](./availability-overview.md), set up [alerts](../alerts/alerts-log.md), set the [pricing scheme](../logs/cost-logs.md#application-insights-billing), and create other Azure resources.
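For example, creating a basic resource from a build script can be a single cmdlet call. A minimal sketch (the names and location are placeholders):

```powershell
# Create an Application Insights resource; add -WorkspaceResourceId to make it workspace-based
New-AzApplicationInsights -ResourceGroupName "myResourceGroup" -Name "myAppInsights" -Location "eastus"
```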
More properties are available via the cmdlets:
See the [detailed documentation](/powershell/module/az.applicationinsights) for the parameters for these cmdlets. ## Set the data retention
azure-monitor Sampling Classic Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sampling-classic-api.md
Insert a line like `samplingPercentage: 10,` before the instrumentation key:
appInsights.trackPageView(); </script> ``` For the sampling percentage, choose a percentage that is close to 100/N where N is an integer. Currently sampling doesn't support other values.
azure-monitor Sdk Connection String https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sdk-connection-string.md
Connection strings define where to send telemetry data.
Key-value pairs provide an easy way for users to define a prefix/suffix combination for each Application Insights service or product. ## Scenario overview
azure-monitor Statsbeat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/statsbeat.md
Statsbeat supports EU Data Boundary for Application Insights resources in the fo
|Throttle Count|Count| `Resource Provider`, `Attach Type`, `Instrumentation Key`, `Runtime Version`, `Operating System`, `Language`, `Version`, `Endpoint`, `Host`, `Status Code`| |Exception Count|Count| `Resource Provider`, `Attach Type`, `Instrumentation Key`, `Runtime Version`, `Operating System`, `Language`, `Version`, `Endpoint`, `Host`, `Exception Type`| #### Attach Statsbeat
azure-monitor Usage Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/usage-overview.md
The best experience is obtained by installing Application Insights both in your
1. **Webpage code:** Use the JavaScript SDK to collect data from webpages. See [Get started with the JavaScript SDK](./javascript-sdk.md).
- [!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)]
+ [!INCLUDE [azure-monitor-log-analytics-rebrand](~/reusable-content/ce-skilling/azure/includes/azure-monitor-instrumentation-key-deprecation.md)]
To learn more advanced configurations for monitoring websites, check out the [JavaScript SDK reference article](./javascript.md).
azure-monitor Worker Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/worker-service.md
The [Application Insights SDK for Worker Service](https://www.nuget.org/packages
You must have a valid Application Insights connection string. This string is required to send any telemetry to Application Insights. If you need to create a new Application Insights resource to get a connection string, see [Connection Strings](./sdk-connection-string.md). ## Use Application Insights SDK for Worker Service
azure-monitor Container Insights Log Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-log-query.md
Container insights support viewing metrics stored in your Log Analytics workspac
### Why are log lines larger than 16 KB split into multiple records in Log Analytics?
-The agent uses the [Docker JSON file logging driver](https://docs.docker.com/config/containers/logging/json-file/) to capture the stdout and stderr of containers. This logging driver splits log lines [larger than 16 KB](https://github.com/moby/moby/pull/22982) into multiple lines when they're copied from stdout or stderr to a file.
+The agent uses the [Docker JSON file logging driver](https://docs.docker.com/config/containers/logging/json-file/) to capture the stdout and stderr of containers. This logging driver splits log lines [larger than 16 KB](https://github.com/moby/moby/pull/22982) into multiple lines when they're copied from stdout or stderr to a file. Use [Multi-line logging](./container-insights-logs-schema.md#multi-line-logging-in-container-insights) to increase the supported log record size to 64 KB.
## Next steps
azure-monitor Collect Custom Metrics Guestos Resource Manager Vmss https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/collect-custom-metrics-guestos-resource-manager-vmss.md
Last updated 07/30/2023
# Send guest OS metrics to the Azure Monitor metric store by using an Azure Resource Manager template for a Windows virtual machine scale set By using the Azure Monitor [Azure Diagnostics extension for Windows (WAD)](../agents/diagnostics-extension-overview.md), you can collect metrics and logs from the guest operating system (guest OS) that runs as part of a virtual machine, cloud service, or Azure Service Fabric cluster. The extension can send telemetry to many different locations listed in the previously linked article.
azure-monitor Collect Custom Metrics Guestos Vm Classic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/collect-custom-metrics-guestos-vm-classic.md
Last updated 05/31/2024
# Send Guest OS metrics to the Azure Monitor metrics database for a Windows virtual machine (classic) The Azure Monitor [Diagnostics extension](../agents/diagnostics-extension-overview.md) (known as "WAD" or "Diagnostics") allows you to collect metrics and logs from the guest operating system (Guest OS) running as part of a virtual machine, cloud service, or Service Fabric cluster. The extension can send telemetry to [many different locations.](../data-platform.md?toc=%2fazure%2fazure-monitor%2ftoc.json)
azure-monitor Collect Custom Metrics Guestos Vm Cloud Service Classic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/collect-custom-metrics-guestos-vm-cloud-service-classic.md
Last updated 05/31/2024
# Send Guest OS metrics to the Azure Monitor metric store for classic Cloud Services With the Azure Monitor [Diagnostics extension](../agents/diagnostics-extension-overview.md), you can collect metrics and logs from the guest operating system (Guest OS) running as part of a virtual machine, cloud service, or Service Fabric cluster. The extension can send telemetry to [many different locations.](../data-platform.md?toc=%2fazure%2fazure-monitor%2ftoc.json)
azure-monitor Migrate To Batch Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/migrate-to-batch-api.md
description: How to migrate from the metrics API to the getBatch API
Previously updated : 03/11/2024 Last updated : 06/27/2024
In the `metrics:getBatch` error response, the error content is wrapped inside a
- Another common cause is specifying a filter that doesn't match any resources. For example, if the filter specifies a dimension value that doesn't exist on any resources in the subscription and region combination, `"timeseries": []` is returned. + Wildcard filters
- Using a wildcard filter such as `Microsoft.ResourceId eq '*'` causes the API to return a time series for every resourceId in the subscription and region. If the subscription and region combination contains no resources, an empty time series is returned. The same query without the wildcard filter would return a single time series, aggregating the requested metric over the requested dimensions, for example subscription and region. If there are no resources in the subscription and region combination, the API returns a single time series with a single data point of `0`.
-
+ Using a wildcard filter such as `Microsoft.ResourceId eq '*'` causes the API to return a time series for every resourceId in the subscription and region. If the subscription and region combination contains no resources, an empty time series is returned. The same query without the wildcard filter would return a single time series, aggregating the requested metric over the requested dimensions, for example subscription and region.
+ Custom metrics aren't currently supported. The `metrics:getBatch` API doesn't support querying custom metrics, or queries where the metric namespace name isn't a resource type. This is the case for VM Guest OS metrics that use the namespace "azure.vm.windows.guestmetrics" or "azure.vm.linux.guestmetrics".
azure-monitor Rest Api Walkthrough https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/rest-api-walkthrough.md
Title: Azure monitoring REST API walkthrough
description: How to authenticate requests and use the Azure Monitor REST API to retrieve available metric definitions, metric values, and activity logs. Previously updated : 03/11/2024 Last updated : 06/27/2024
After retrieving the metric definitions and dimension values, retrieve the metri
Use the metric's `name.value` element in the filter definitions. If no dimension filters are specified, the rolled up, aggregated metric is returned.
-To fetch multiple time series with specific dimension values, specify a filter query parameter that specifies both dimension values such as `"&$filter=ApiName eq 'ListContainers' or ApiName eq 'GetBlobServiceProperties'"`.
-
-To return a time series for every value of a given dimension, use an `*` filter such as `"&$filter=ApiName eq '*'"`. The `Top` and `OrderBy` query parameters can be used to limit and order the number of time series returned.
+### Multiple time series
+A time series is a set of data points that are ordered by time for a given combination of dimensions. A dimension is an aspect of the metric that describes the data point, such as resource ID, region, or ApiName.
++ To fetch multiple time series with specific dimension values, specify a filter query parameter that specifies both dimension values, such as `"&$filter=ApiName eq 'ListContainers' or ApiName eq 'GetBlobServiceProperties'"`. In this example, you get a time series where `ApiName` is `ListContainers` and a second time series where `ApiName` is `GetBlobServiceProperties`.
++ To return a time series for every value of a given dimension, use an `*` filter such as `"&$filter=ApiName eq '*'"`. Use the `Top` and `OrderBy` query parameters to limit and sort the number of time series returned. In this example, you get a time series for every value of `ApiName` in the result set. If no data is returned, the API returns an empty time series `"timeseries": []`.

> [!NOTE]
> To retrieve multi-dimensional metric values using the Azure Monitor REST API, use the API version "2019-07-01" or later.
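A sketch of a full request that uses such a dimension filter against a single storage account (the subscription, resource group, and account names are placeholders):

```http
GET https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroup}/providers/Microsoft.Storage/storageAccounts/{accountName}/providers/microsoft.insights/metrics?metricnames=Transactions&timespan=2023-06-25T22:20:00Z/2023-06-26T22:25:00Z&interval=PT1H&aggregation=Total&$filter=ApiName eq 'ListContainers' or ApiName eq 'GetBlobServiceProperties'&api-version=2019-07-01
```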
Below is an equivalent metrics request for multiple resources:
GET https://management.azure.com/subscriptions/12345678-abcd-98765432-abcdef012345/providers/microsoft.Insights/metrics?timespan=2023-06-25T22:20:00.000Z/2023-06-26T22:25:00.000Z&interval=PT5M&metricnames=Percentage CPU&aggregation=average&api-version=2021-05-01&region=eastus&metricNamespace=microsoft.compute/virtualmachines&$filter=Microsoft.ResourceId eq '*' ``` > [!NOTE]
-> A `Microsoft.ResourceId eq '*'` filter is added in the example for the multi resource metrics requests. The filter tells the API to return a separate time series per virtual machine resource in the subscription and region. Without the filter the API would return a single time series aggregating the average CPU for all VMs. The times series for each resource is differentiated by the `Microsoft.ResourceId` metadata value on each time series entry, as can be seen in the following sample return value. If there are no resourceIds retrieved by this query an empty time series`"timeseries": []` is returned.
+> A `Microsoft.ResourceId eq '*'` filter is added in the example for the multi resource metrics requests. The `*` filter tells the API to return a separate time series for each virtual machine resource that has data in the subscription and region. Without the filter, the API would return a single time series aggregating the average CPU for all VMs. The time series for each resource is differentiated by the `Microsoft.ResourceId` metadata value on each time series entry, as can be seen in the following sample return value. If there are no resourceIds retrieved by this query, an empty time series `"timeseries": []` is returned.
```JSON {
GET https://management.azure.com/subscriptions/12345678-abcd-98765432-abcdef0123
"resourceregion": "eastus" } ```-
+
### Troubleshooting querying metrics for multiple resources + Empty time series returned `"timeseries": []`
GET https://management.azure.com/subscriptions/12345678-abcd-98765432-abcdef0123
- Another common cause is specifying a filter that doesn't match any resources. For example, if the filter specifies a dimension value that doesn't exist on any resources in the subscription and region combination, `"timeseries": []` is returned. + Wildcard filters
- Using a wildcard filter such as `Microsoft.ResourceId eq '*'` causes the API to return a time series for every resourceId in the subscription and region. If the subscription and region combination contains no resources, an empty time series is returned. The same query without the wildcard filter would return a single time series, aggregating the requested metric over the requested dimensions, for example subscription and region. If there are no resources in the subscription and region combination, the API returns a single time series with a single data point of `0`.
+ Using a wildcard filter such as `Microsoft.ResourceId eq '*'` causes the API to return a time series for every resourceId in the subscription and region. If the subscription and region combination contains no resources, an empty time series is returned. The same query without the wildcard filter would return a single time series, aggregating the requested metric over the requested dimensions, for example subscription and region.
+ 401 authorization errors: The individual resource metrics APIs require that a user has the [Monitoring Reader](../../role-based-access-control/built-in-roles.md#monitoring-reader) permission on the resource being queried. Because the multi resource metrics APIs are subscription level APIs, users must have the [Monitoring Reader](../../role-based-access-control/built-in-roles.md#monitoring-reader) permission for the queried subscription to use the multi resource metrics APIs. Even if users have Monitoring Reader on all the resources in a subscription, the request fails if the user doesn't have Monitoring Reader on the subscription itself. - ## Next steps - Review the [overview of monitoring](../overview.md).
azure-monitor Code Optimizations Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/code-optimizations-troubleshoot.md
+
+ Title: Troubleshoot Code Optimizations (Preview)
+description: Learn how to use Application Insights Code Optimizations on Azure. View a checklist of troubleshooting steps.
++
+editor: v-jsitser
+++ Last updated : 06/25/2024+++
+# Troubleshoot Code Optimizations (Preview)
+
+This article provides troubleshooting steps and information to help you use Application Insights Code Optimizations for Microsoft Azure.
+
+## Troubleshooting checklist
+
+### Step 1: View a video about Code Optimizations setup
+
+View the following demonstration video to learn how to set up Code Optimizations correctly.
+
+> [!VIDEO https://www.youtube-nocookie.com/embed/vbi9YQgIgC8]
+
+### Step 2: Make sure that your app is connected to an Application Insights resource
+
+[Create an Application Insights resource](/azure/azure-monitor/app/create-workspace-resource) and verify that it's connected to the correct app.
+
+### Step 3: Verify that Application Insights Profiler is enabled
+
+[Enable Application Insights Profiler](/azure/azure-monitor/profiler/profiler-overview).
+
+### Step 4: Verify that Application Insights Profiler is collecting profiles
+
+To make sure that profiles are uploaded to your Application Insights resource, follow these steps:
+
+1. In the [Azure portal](https://portal.azure.com), search for and select **Application Insights**.
+1. In the list of Application Insights resources, select the name of your resource.
+1. In the navigation pane of your Application Insights resource, locate the **Investigate** heading, and then select **Performance**.
+1. On the **Performance** page of your Application Insights resource, select **Profiler**:
+
+ :::image type="content" source="./media/code-optimizations-troubleshoot/performance-page.png" alt-text="Azure portal screenshot that shows how to navigate to the Application Insights Profiler.":::
+
+1. On the **Application Insights Profiler** page, view the **Recent profiling sessions** section.
+
+ :::image type="content" source="./media/code-optimizations-troubleshoot/profiling-sessions.png" alt-text="Azure portal screenshot of the Application Insights Profiler page." lightbox="./media/code-optimizations-troubleshoot/profiling-sessions.png":::
+
+ > [!NOTE]
+ > If you don't see any profiling sessions, see [Troubleshoot Application Insights Profiler](../profiler/profiler-troubleshooting.md).
+
+### Step 5: Regularly check the Profiler
+
+After you successfully complete the previous steps, keep checking the Profiler for insights. The service continues to analyze your profiles and provides insights as soon as it detects any issues in your code. After you enable the Application Insights Profiler, several hours might be required to generate profiles and for the service to analyze them. If the service detects no issues in your code, a message appears that confirms that no insights were found.
+
+## Contact us for help
+
+If you have questions or need help, [create a support request](https://ms.portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview?DMC=troubleshoot), or ask [Azure community support](https://azure.microsoft.com/support/community). You can also submit product feedback to [Azure feedback community](https://feedback.azure.com/d365community).
azure-monitor Code Optimizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/code-optimizations.md
Previously updated : 03/08/2024 Last updated : 06/25/2024
Get started with Code Optimizations by enabling the following features on your a
- [Application Insights](../app/create-workspace-resource.md) - [Application Insights Profiler](../profiler/profiler-overview.md)
-Running into issues? Check the [Troubleshooting guide](/troubleshoot/azure/azure-monitor/app-insights/code-optimizations-troubleshooting)
+Running into issues? Check the [Troubleshooting guide](./code-optimizations-troubleshoot.md)
azure-monitor View Code Optimizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/view-code-optimizations.md
Previously updated : 03/05/2024 Last updated : 06/25/2024
You can also view a graph depicting a specific performance issue's impact and th
## Next steps > [!div class="nextstepaction"]
-> [Troubleshoot Code Optimizations](/troubleshoot/azure/azure-monitor/app-insights/code-optimizations-troubleshooting)
+> [Troubleshoot Code Optimizations](./code-optimizations-troubleshoot.md)
azure-monitor Computer Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/computer-groups.md
Last updated 03/14/2023
# Computer groups in Azure Monitor log queries Computer groups in Azure Monitor allow you to scope [log queries](./log-query-overview.md) to a particular set of computers. Each group is populated with computers using a query that you define. When the group is included in a log query, the results are limited to records that match the computers in the group. ## Permissions required
azure-monitor Logs Dedicated Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-dedicated-clusters.md
To perform cluster-related actions, you need these permissions:
For more information on Log Analytics permissions, see [Manage access to log data and workspaces in Azure Monitor](./manage-access.md).
+## Resource Manager template samples
+
+This article includes sample [Azure Resource Manager (ARM) templates](../../azure-resource-manager/templates/syntax.md) to create and configure Log Analytics clusters in Azure Monitor. Each sample includes a template file and a parameters file with sample values to provide to the template.
++
+### Template references
+
+- [Microsoft.OperationalInsights clusters](/azure/templates/microsoft.operationalinsights/2020-03-01-preview/clusters)
+ ## Create a dedicated cluster Provide the following properties when creating a new dedicated cluster:
Content-type: application/json
Should be 202 (Accepted) and a header.
+#### [ARM template (Bicep)](#tab/bicep)
+
+The following sample creates a new empty Log Analytics cluster.
+
+```bicep
+@description('Specify the name of the Log Analytics cluster.')
+param clusterName string
+
+@description('Specify the location of the resources.')
+param location string = resourceGroup().location
+
+@description('Specify the capacity reservation value.')
+@allowed([
+ 100
+ 200
+ 300
+ 400
+ 500
+ 1000
+ 2000
+ 5000
+])
+param CommitmentTier int
+
+@description('Specify the billing type settings. Can be \'Cluster\' (default) or \'Workspaces\' for proportional billing on workspaces.')
+@allowed([
+ 'Cluster'
+ 'Workspaces'
+])
+param billingType string
+
+resource cluster 'Microsoft.OperationalInsights/clusters@2021-06-01' = {
+ name: clusterName
+ location: location
+ identity: {
+ type: 'SystemAssigned'
+ }
+ sku: {
+ name: 'CapacityReservation'
+ capacity: CommitmentTier
+ }
+ properties: {
+ billingType: billingType
+ }
+}
+```
+
+**Parameter file**
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-08-01/deploymentParameters.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "clusterName": {
+ "value": "MyCluster"
+ },
+ "CommitmentTier": {
+ "value": 500
+ },
+ "billingType": {
+ "value": "Cluster"
+ }
+ }
+}
+```
+
+#### [ARM template (JSON)](#tab/json)
+
+The following sample creates a new empty Log Analytics cluster.
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "clusterName": {
+ "type": "string",
+ "metadata": {
+ "description": "Specify the name of the Log Analytics cluster."
+ }
+ },
+ "location": {
+ "type": "string",
+ "defaultValue": "[resourceGroup().location]",
+ "metadata": {
+ "description": "Specify the location of the resources."
+ }
+ },
+ "CommitmentTier": {
+ "type": "int",
+ "allowedValues": [
+ 100,
+ 200,
+ 300,
+ 400,
+ 500,
+ 1000,
+ 2000,
+ 5000
+ ],
+ "metadata": {
+ "description": "Specify the capacity reservation value."
+ }
+ },
+ "billingType": {
+ "type": "string",
+ "allowedValues": [
+ "Cluster",
+ "Workspaces"
+ ],
+ "metadata": {
+ "description": "Specify the billing type settings. Can be 'Cluster' (default) or 'Workspaces' for proportional billing on workspaces."
+ }
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.OperationalInsights/clusters",
+ "apiVersion": "2021-06-01",
+ "name": "[parameters('clusterName')]",
+ "location": "[parameters('location')]",
+ "identity": {
+ "type": "SystemAssigned"
+ },
+ "sku": {
+ "name": "CapacityReservation",
+ "capacity": "[parameters('CommitmentTier')]"
+ },
+ "properties": {
+ "billingType": "[parameters('billingType')]"
+ }
+ }
+ ]
+}
+```
+
+**Parameter file**
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-08-01/deploymentParameters.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "clusterName": {
+ "value": "MyCluster"
+ },
+ "CommitmentTier": {
+ "value": 500
+ },
+ "billingType": {
+ "value": "Cluster"
+ }
+ }
+}
+```
+ ### Check cluster provisioning status
Send a GET request on the cluster resource and look at the *provisioningState* v
The managed identity service generates the *principalId* GUID when you create the cluster.
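+Equivalently, a quick sketch of reading the provisioning state with the Azure CLI; the resource names are placeholders:
+
+```azurecli
+# Return only the cluster's provisioningState. Expect a provisioning
+# value while the cluster is being created and "Succeeded" when done.
+az monitor log-analytics cluster show \
+    --resource-group "resource-group-name" \
+    --name "cluster-name" \
+    --query provisioningState \
+    --output tsv
+```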
+#### [ARM template (Bicep)](#tab/bicep)
+
+N/A
+
+#### [ARM template (JSON)](#tab/json)
+
+N/A
+ ## Link a workspace to a cluster
Select your cluster from **Log Analytics dedicated clusters** menu in the Azure
:::image type="content" source="./media/logs-dedicated-cluster/linked-workspaces.png" alt-text="Screenshot for linking workspaces to a dedicated cluster in the Azure portal." lightbox="./media/logs-dedicated-cluster/linked-workspaces.png"::: -- #### [CLI](#tab/cli) > [!NOTE]
Content-type: application/json
202 (Accepted) and header. -
+#### [ARM template (Bicep)](#tab/bicep)
+
+N/A
+
+#### [ARM template (JSON)](#tab/json)
+
+N/A
### Check workspace link status+ The workspace link operation can take up to 90 minutes to complete. You can check the status on both the linked workspaces and the cluster. When the operation completes, the workspace resources include the `clusterResourceId` property under `features`, and the cluster lists its linked workspaces under the `associatedWorkspaces` section. When a cluster is configured with a customer-managed key, data ingested into the workspaces after the link operation completes is stored encrypted with your key. - #### [Portal](#tab/azure-portal) On the **Overview** page for your dedicated cluster, select **JSON View**. The `associatedWorkspaces` section lists the workspaces linked to the cluster. :::image type="content" source="./media/logs-dedicated-cluster/associated-workspaces.png" alt-text="Screenshot for viewing associated workspaces for a dedicated cluster in the Azure portal." lightbox="./media/logs-dedicated-cluster/associated-workspaces.png"::: - #### [CLI](#tab/cli) ```azurecli
Authorization: Bearer <token>
} ``` -
+#### [ARM template (Bicep)](#tab/bicep)
+
+N/A
+#### [ARM template (JSON)](#tab/json)
+
+N/A
++ ## Change cluster properties
After you create your cluster resource and it's fully provisioned, you can edit
- **keyVaultProperties** - Contains the key in Azure Key Vault with the following parameters: *KeyVaultUri*, *KeyName*, *KeyVersion*. See [Update cluster with Key identifier details](../logs/customer-managed-keys.md#update-cluster-with-key-identifier-details). - **Identity** - The identity used to authenticate to your Key Vault. This can be System-assigned or User-assigned. - **billingType** - Billing attribution for the cluster resource and its data. It can have one of the following values:
- - **Cluster (default)**--The costs for your cluster are attributed to the cluster resource.
- - **Workspaces**--The costs for your cluster are attributed proportionately to the workspaces in the Cluster, with the cluster resource being billed some of the usage if the total ingested data for the day is under the commitment tier. See [Log Analytics Dedicated Clusters](./cost-logs.md#dedicated-clusters) to learn more about the cluster pricing model.
-
+ - **Cluster (default)** - The costs for your cluster are attributed to the cluster resource.
+ - **Workspaces** - The costs for your cluster are attributed proportionately to the workspaces in the Cluster, with the cluster resource being billed some of the usage if the total ingested data for the day is under the commitment tier. See [Log Analytics Dedicated Clusters](./cost-logs.md#dedicated-clusters) to learn more about the cluster pricing model.
>[!IMPORTANT] >A cluster update shouldn't include both identity and key identifier details in the same operation. If you need to update both, perform the updates as two consecutive operations.
+<!--
> [!NOTE] > The *billingType* property isn't supported in CLI.
+-->
-## Get all clusters in resource group
+#### [Portal](#tab/azure-portal)
+
+N/A
+
+#### [CLI](#tab/cli)
+
+The following sample updates the billing type.
+```azurecli
+az account set --subscription "cluster-subscription-id"
+
+az monitor log-analytics cluster update --resource-group "resource-group-name" --name "cluster-name" --billing-type {Cluster, Workspaces}
+```
+
+#### [PowerShell](#tab/powershell)
+
+The following sample updates the billing type.
+
+```powershell
+Select-AzSubscription "cluster-subscription-id"
+
+Update-AzOperationalInsightsCluster -ResourceGroupName "resource-group-name" -ClusterName "cluster-name" -BillingType "Workspaces"
+```
+
+#### [REST API](#tab/restapi)
+
+The following sample updates the billing type.
+
+*Call*
+
+```rest
+PATCH https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.OperationalInsights/clusters/<cluster-name>?api-version=2022-10-01
+Authorization: Bearer <token>
+Content-type: application/json
+
+{
+ "properties": {
+ "billingType": "Workspaces"
+ },
+ "location": "region"
+}
+```
+
+#### [ARM template (Bicep)](#tab/bicep)
+
+The following sample updates a Log Analytics cluster to use a customer-managed key.
+
+```bicep
+@description('Specify the name of the Log Analytics cluster.')
+param clusterName string
+@description('Specify the location of the resources')
+param location string = resourceGroup().location
+@description('Specify the key vault name.')
+param keyVaultName string
+@description('Specify the key name.')
+param keyName string
+@description('Specify the key version. When empty, latest key version is used.')
+param keyVersion string
+var keyVaultUri = format('https://{0}{1}', keyVaultName, environment().suffixes.keyvaultDns)
+resource cluster 'Microsoft.OperationalInsights/clusters@2021-06-01' = {
+ name: clusterName
+ location: location
+ identity: {
+ type: 'SystemAssigned'
+ }
+ properties: {
+ keyVaultProperties: {
+ keyVaultUri: keyVaultUri
+ keyName: keyName
+ keyVersion: keyVersion
+ }
+ }
+}
+```
+
+**Parameter file**
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-08-01/deploymentParameters.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "clusterName": {
+ "value": "MyCluster"
+ },
+ "keyVaultUri": {
+ "value": "https://key-vault-name.vault.azure.net"
+ },
+ "keyName": {
+ "value": "MyKeyName"
+ },
+ "keyVersion": {
+ "value": ""
+ }
+ }
+}
+```
+
+#### [ARM template (JSON)](#tab/json)
+
+The following sample updates a Log Analytics cluster to use a customer-managed key.
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "clusterName": {
+ "type": "string",
+ "metadata": {
+ "description": "Specify the name of the Log Analytics cluster."
+ }
+ },
+ "location": {
+ "type": "string",
+ "defaultValue": "[resourceGroup().location]",
+ "metadata": {
+ "description": "Specify the location of the resources"
+ }
+ },
+ "keyVaultName": {
+ "type": "string",
+ "metadata": {
+ "description": "Specify the key vault name."
+ }
+ },
+ "keyName": {
+ "type": "string",
+ "metadata": {
+ "description": "Specify the key name."
+ }
+ },
+ "keyVersion": {
+ "type": "string",
+ "metadata": {
+ "description": "Specify the key version. When empty, latest key version is used."
+ }
+ }
+ },
+ "variables": {
+ "keyVaultUri": "[format('{0}{1}', parameters('keyVaultName'), environment().suffixes.keyvaultDns)]"
+ },
+ "resources": [
+ {
+ "type": "Microsoft.OperationalInsights/clusters",
+ "apiVersion": "2021-06-01",
+ "name": "[parameters('clusterName')]",
+ "location": "[parameters('location')]",
+ "identity": {
+ "type": "SystemAssigned"
+ },
+ "properties": {
+ "keyVaultProperties": {
+ "keyVaultUri": "[variables('keyVaultUri')]",
+ "keyName": "[parameters('keyName')]",
+ "keyVersion": "[parameters('keyVersion')]"
+ }
+ }
+ }
+ ]
+}
+```
+
+**Parameter file**
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-08-01/deploymentParameters.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "clusterName": {
+ "value": "MyCluster"
+ },
+ "keyVaultUri": {
+ "value": "https://key-vault-name.vault.azure.net"
+ },
+ "keyName": {
+ "value": "MyKeyName"
+ },
+ "keyVersion": {
+ "value": ""
+ }
+ }
+}
+```
+++
+## Get all clusters in resource group
#### [Portal](#tab/azure-portal)
Authorization: Bearer <token>
} ``` -
+#### [ARM template (Bicep)](#tab/bicep)
+N/A
+#### [ARM template (JSON)](#tab/json)
+
+N/A
++ ## Get all clusters in subscription
From the **Log Analytics dedicated clusters** menu in the Azure portal, select t
:::image type="content" source="./media/logs-dedicated-cluster/subscription-clusters.png" alt-text="Screenshot for viewing all dedicated clusters in a subscription in the Azure portal." lightbox="./media/logs-dedicated-cluster/subscription-clusters.png"::: -- #### [CLI](#tab/cli) ```azurecli
Authorization: Bearer <token>
The same as for 'clusters in a resource group', but in subscription scope. -
+#### [ARM template (Bicep)](#tab/bicep)
+
+N/A
+
+#### [ARM template (JSON)](#tab/json)
+
+N/A
+ ## Update commitment tier in cluster
Content-type: application/json
} ``` ---
-### Update billingType in cluster
-
-The *billingType* property determines the billing attribution for the cluster and its data:
-- *Cluster* (default) -- billing is attributed to the Cluster resource-- *Workspaces* -- billing is attributed to linked workspaces proportionally. When data volume from all linked workspaces is below Commitment Tier level, the bill for the remaining volume is attributed to the cluster-
-#### [Portal](#tab/azure-portal)
+#### [ARM template (Bicep)](#tab/bicep)
N/A
-#### [CLI](#tab/cli)
-
-```azurecli
-az account set --subscription "cluster-subscription-id"
-
-az monitor log-analytics cluster update --resource-group "resource-group-name" --name "cluster-name" --billing-type {Cluster, Workspaces}
-```
-
-#### [PowerShell](#tab/powershell)
-
-```powershell
-Select-AzSubscription "cluster-subscription-id"
-
-Update-AzOperationalInsightsCluster -ResourceGroupName "resource-group-name" -ClusterName "cluster-name" -BillingType "Workspaces"
-```
-
-#### [REST API](#tab/restapi)
-
-*Call*
-
-```rest
-PATCH https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.OperationalInsights/clusters/<cluster-name>?api-version=2022-10-01
-Authorization: Bearer <token>
-Content-type: application/json
+#### [ARM template (JSON)](#tab/json)
-{
- "properties": {
- "billingType": "Workspaces"
- },
- "location": "region"
-}
-```
+N/A
Select your cluster from **Log Analytics dedicated clusters** menu in the Azure
:::image type="content" source="./media/logs-dedicated-cluster/unlink-workspace.png" alt-text="Screenshot for unlinking a workspace from a dedicated cluster in the Azure portal." lightbox="./media/logs-dedicated-cluster/unlink-workspace.png"::: - #### [CLI](#tab/cli) ```azurecli
Remove-AzOperationalInsightsLinkedService -ResourceGroupName "resource-group-nam
DELETE https://management.azure.com/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{workspaceName}/linkedServices/{linkedServiceName}?api-version=2020-08-01 ``` -
+#### [ARM template (Bicep)](#tab/bicep)
+
+N/A
+
+#### [ARM template (JSON)](#tab/json)
+N/A
++ ## Delete cluster
Authorization: Bearer <token>
200 OK -
+#### [ARM template (Bicep)](#tab/bicep)
+
+N/A
+#### [ARM template (JSON)](#tab/json)
+N/A
++ ## Limits and constraints
Authorization: Bearer <token>
## Next steps -- Learn about [Log Analytics dedicated cluster billing](cost-logs.md#dedicated-clusters)-- Learn about [proper design of Log Analytics workspaces](../logs/workspace-design.md)
+- Learn about [Log Analytics dedicated cluster billing](cost-logs.md#dedicated-clusters).
+- Learn about [proper design of Log Analytics workspaces](../logs/workspace-design.md).
+- Get other [sample templates for Azure Monitor](../resource-manager-samples.md).
azure-monitor Personal Data Mgmt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/personal-data-mgmt.md
Log Analytics is a data store where personal data is likely to be found. Applica
In this article, _log data_ refers to data sent to a Log Analytics workspace, while _application data_ refers to data collected by Application Insights. If you're using a workspace-based Application Insights resource, the information on log data applies. If you're using a classic Application Insights resource, the application data applies. ## Strategy for personal data handling
azure-monitor Resource Manager Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/resource-manager-cluster.md
- Title: Resource Manager template samples for Log Analytics clusters
-description: Sample Azure Resource Manager templates to deploy Log Analytics clusters.
--- Previously updated : 06/13/2022--
-# Resource Manager template samples for Log Analytics clusters in Azure Monitor
-
-This article includes sample [Azure Resource Manager templates](../../azure-resource-manager/templates/syntax.md) to create and configure Log Analytics clusters in Azure Monitor. Each sample includes a template file and a parameters file with sample values to provide to the template.
--
-## Template references
--- [Microsoft.OperationalInsights clusters](/azure/templates/microsoft.operationalinsights/2020-03-01-preview/clusters)-
-## Create a Log Analytics cluster
-
-The following sample creates a new empty Log Analytics cluster.
-
-### Template file
-
-# [Bicep](#tab/bicep)
-
-```bicep
-@description('Specify the name of the Log Analytics cluster.')
-param clusterName string
-
-@description('Specify the location of the resources.')
-param location string = resourceGroup().location
-
-@description('Specify the capacity reservation value.')
-@allowed([
- 100
- 200
- 300
- 400
- 500
- 1000
- 2000
- 5000
-])
-param CommitmentTier int
-
-@description('Specify the billing type settings. Can be \'Cluster\' (default) or \'Workspaces\' for proportional billing on workspaces.')
-@allowed([
- 'Cluster'
- 'Workspaces'
-])
-param billingType string
-
-resource cluster 'Microsoft.OperationalInsights/clusters@2021-06-01' = {
- name: clusterName
- location: location
- identity: {
- type: 'SystemAssigned'
- }
- sku: {
- name: 'CapacityReservation'
- capacity: CommitmentTier
- }
- properties: {
- billingType: billingType
- }
-}
-```
-
-# [JSON](#tab/json)
-
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "clusterName": {
- "type": "string",
- "metadata": {
- "description": "Specify the name of the Log Analytics cluster."
- }
- },
- "location": {
- "type": "string",
- "defaultValue": "[resourceGroup().location]",
- "metadata": {
- "description": "Specify the location of the resources."
- }
- },
- "CommitmentTier": {
- "type": "int",
- "allowedValues": [
- 100,
- 200,
- 300,
- 400,
- 500,
- 1000,
- 2000,
- 5000
- ],
- "metadata": {
- "description": "Specify the capacity reservation value."
- }
- },
- "billingType": {
- "type": "string",
- "allowedValues": [
- "Cluster",
- "Workspaces"
- ],
- "metadata": {
- "description": "Specify the billing type settings. Can be 'Cluster' (default) or 'Workspaces' for proportional billing on workspaces."
- }
- }
- },
- "resources": [
- {
- "type": "Microsoft.OperationalInsights/clusters",
- "apiVersion": "2021-06-01",
- "name": "[parameters('clusterName')]",
- "location": "[parameters('location')]",
- "identity": {
- "type": "SystemAssigned"
- },
- "sku": {
- "name": "CapacityReservation",
- "capacity": "[parameters('CommitmentTier')]"
- },
- "properties": {
- "billingType": "[parameters('billingType')]"
- }
- }
- ]
-}
-```
---
-### Parameter file
-
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2019-08-01/deploymentParameters.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "clusterName": {
- "value": "MyCluster"
- },
- "CommitmentTier": {
- "value": 500
- },
- "billingType": {
- "value": "Cluster"
- }
- }
-}
-```
-
-## Update a Log Analytics cluster
-
-The following sample updates a Log Analytics cluster to use customer-managed key.
-
-### Template file
-
-# [Bicep](#tab/bicep)
-
-```bicep
-@description('Specify the name of the Log Analytics cluster.')
-param clusterName string
-
-@description('Specify the location of the resources')
-param location string = resourceGroup().location
-
-@description('Specify the key vault name.')
-param keyVaultName string
-
-@description('Specify the key name.')
-param keyName string
-
-@description('Specify the key version. When empty, latest key version is used.')
-param keyVersion string
-
-var keyVaultUri = format('{0}{1}', keyVaultName, environment().suffixes.keyvaultDns)
-
-resource cluster 'Microsoft.OperationalInsights/clusters@2021-06-01' = {
- name: clusterName
- location: location
- identity: {
- type: 'SystemAssigned'
- }
- properties: {
- keyVaultProperties: {
- keyVaultUri: keyVaultUri
- keyName: keyName
- keyVersion: keyVersion
- }
- }
-}
-```
-
-# [JSON](#tab/json)
-
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "clusterName": {
- "type": "string",
- "metadata": {
- "description": "Specify the name of the Log Analytics cluster."
- }
- },
- "location": {
- "type": "string",
- "defaultValue": "[resourceGroup().location]",
- "metadata": {
- "description": "Specify the location of the resources"
- }
- },
- "keyVaultName": {
- "type": "string",
- "metadata": {
- "description": "Specify the key vault name."
- }
- },
- "keyName": {
- "type": "string",
- "metadata": {
- "description": "Specify the key name."
- }
- },
- "keyVersion": {
- "type": "string",
- "metadata": {
- "description": "Specify the key version. When empty, latest key version is used."
- }
- }
- },
- "variables": {
- "keyVaultUri": "[format('{0}{1}', parameters('keyVaultName'), environment().suffixes.keyvaultDns)]"
- },
- "resources": [
- {
- "type": "Microsoft.OperationalInsights/clusters",
- "apiVersion": "2021-06-01",
- "name": "[parameters('clusterName')]",
- "location": "[parameters('location')]",
- "identity": {
- "type": "SystemAssigned"
- },
- "properties": {
- "keyVaultProperties": {
- "keyVaultUri": "[variables('keyVaultUri')]",
- "keyName": "[parameters('keyName')]",
- "keyVersion": "[parameters('keyVersion')]"
- }
- }
- }
- ]
-}
-```
---
-### Parameter file
-
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2019-08-01/deploymentParameters.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "clusterName": {
- "value": "MyCluster"
- },
- "keyVaultUri": {
- "value": "https://key-vault-name.vault.azure.net"
- },
- "keyName": {
- "value": "MyKeyName"
- },
- "keyVersion": {
- "value": ""
- }
- }
-}
-```
-
-## Next steps
--- [Get other sample templates for Azure Monitor](../resource-manager-samples.md).-- [Learn more about Log Analytics dedicated clusters](./logs-dedicated-clusters.md).-- [Learn more about agent data sources](../agents/agent-data-sources.md).
azure-monitor Summary Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/summary-rules.md
A summary rule lets you aggregate log data at a regular cadence and send the agg
This article describes how summary rules work and how to define and view summary rules, and provides some examples of the use and benefits of summary rules.
-## Permissions required
-
-| Action | Permissions required |
-| | |
-| Create or update summary rule | `Microsoft.Operationalinsights/workspaces/summarylogs/write` permissions to the Log Analytics workspace, as provided by the [Log Analytics Contributor built-in role](manage-access.md#log-analytics-contributor), for example |
-| Create or update destination table | `Microsoft.OperationalInsights/workspaces/tables/write` permissions to the Log Analytics workspace, as provided by the [Log Analytics Contributor built-in role](manage-access.md#log-analytics-contributor), for example |
-| Enable query in workspace | `Microsoft.OperationalInsights/workspaces/query/read` permissions to the Log Analytics workspace, as provided by the [Log Analytics Reader built-in role](manage-access.md#log-analytics-reader), for example |
-| Query all logs in workspace | `Microsoft.OperationalInsights/workspaces/query/*/read` permissions to the Log Analytics workspace, as provided by the [Log Analytics Reader built-in role](manage-access.md#log-analytics-reader), for example |
-| Query logs in table | `Microsoft.OperationalInsights/workspaces/query/<table>/read` permissions to the Log Analytics workspace, as provided by the [Log Analytics Reader built-in role](manage-access.md#log-analytics-reader), for example |
-| Query logs in table (table action) | `Microsoft.OperationalInsights/workspaces/tables/query/read` permissions to the Log Analytics workspace, as provided by the [Log Analytics Reader built-in role](manage-access.md#log-analytics-reader), for example |
-| Use queries encrypted in a customer-managed storage account|`Microsoft.Storage/storageAccounts/*` permissions to the storage account, as provided by the [Storage Account Contributor built-in role](/azure/role-based-access-control/built-in-roles/storage#storage-account-contributor), for example|
-- ## How summary rules work Summary rules perform batch processing directly in your Log Analytics workspace. The summary rule aggregates chunks of data, defined by bin size, based on a KQL query, and reingests the summarized results into a custom table with an [Analytics log plan](basic-logs-configure.md) in your Log Analytics workspace.
Here's the aggregated data that the summary rule sends to the destination table:
Instead of logging hundreds of similar entries within an hour, the destination table shows the count of each unique entry, as defined in the KQL query. Set the [Basic data plan](basic-logs-configure.md) on the `ContainerLogsV2` table for low-cost retention of the raw data, and use the summarized data in the destination table for your analysis needs.
+## Permissions required
+
+| Action | Permissions required |
+| | |
+| Create or update summary rule | `Microsoft.Operationalinsights/workspaces/summarylogs/write` permissions to the Log Analytics workspace, as provided by the [Log Analytics Contributor built-in role](manage-access.md#log-analytics-contributor), for example |
+| Create or update destination table | `Microsoft.OperationalInsights/workspaces/tables/write` permissions to the Log Analytics workspace, as provided by the [Log Analytics Contributor built-in role](manage-access.md#log-analytics-contributor), for example |
+| Enable query in workspace | `Microsoft.OperationalInsights/workspaces/query/read` permissions to the Log Analytics workspace, as provided by the [Log Analytics Reader built-in role](manage-access.md#log-analytics-reader), for example |
+| Query all logs in workspace | `Microsoft.OperationalInsights/workspaces/query/*/read` permissions to the Log Analytics workspace, as provided by the [Log Analytics Reader built-in role](manage-access.md#log-analytics-reader), for example |
+| Query logs in table | `Microsoft.OperationalInsights/workspaces/query/<table>/read` permissions to the Log Analytics workspace, as provided by the [Log Analytics Reader built-in role](manage-access.md#log-analytics-reader), for example |
+| Query logs in table (table action) | `Microsoft.OperationalInsights/workspaces/tables/query/read` permissions to the Log Analytics workspace, as provided by the [Log Analytics Reader built-in role](manage-access.md#log-analytics-reader), for example |
+| Use queries encrypted in a customer-managed storage account|`Microsoft.Storage/storageAccounts/*` permissions to the storage account, as provided by the [Storage Account Contributor built-in role](/azure/role-based-access-control/built-in-roles/storage#storage-account-contributor), for example|
++ ## Restrictions and limitations | Category | Limit |
Instead of logging hundreds of similar entries within an hour, the destination t
## Pricing model
-The cost you incur for summary rules consists of the cost of the query on the source table and the cost of ingesting the results to the destination table:
+There's no direct cost for using summary rules. The cost you incur consists of the cost of the query on the source table and the cost of ingesting the results into the destination table:
| Source table plan | Query cost | Query results ingestion cost | | | | |
For more information, see [Azure Monitor pricing](https://azure.microsoft.com/pr
## Create or update a summary rule
-Before you create a rule, experiment with the query in [Log Analytics](log-analytics-overview.md). Verify that the query doesn't reach or near the query limit. Check that the query produces the intended schema and expected results. If the query is close to the query limits, consider using a smaller `binSize` to process less data per bin. You can also modify the query to return fewer records or remove fields with higher volume.
+Before you create a rule, experiment with the query in [Log Analytics](log-analytics-overview.md). Verify that the query doesn't reach or come near the query limits. Check that the query produces the intended schema and expected results. If the query is close to the query limits, consider using a smaller `binSize` to process less data per bin. You can also modify the query to return fewer records or remove fields with higher volume. A CLI sketch for test-running a candidate query follows the note below.
+
+> [!NOTE]
+> Summary rules are most beneficial in terms of cost and results consumption when the query reduces the data volume significantly. For example, the results volume is 0.01% of the source volume or less.
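+As a sketch of that experimentation step from the Azure CLI, using a hypothetical aggregation in the spirit of the `ContainerLogsV2` scenario described earlier (the workspace GUID is a placeholder):
+
+```azurecli
+# Test-run a candidate summary query before creating the rule.
+# Inspect the output schema and row count to estimate how much the
+# query reduces volume relative to the source table.
+az monitor log-analytics query \
+    --workspace "00000000-0000-0000-0000-000000000000" \
+    --analytics-query "ContainerLogsV2 | summarize Count = count() by LogLevel, bin(TimeGenerated, 1h)" \
+    --timespan "P1D"
+```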
When you update a query and remove output fields from the results set, Azure Monitor doesn't automatically remove the columns from the destination table. You need to [delete columns from your table](create-custom-table.md#add-or-delete-a-custom-column) manually.
If you don't need the summary results in the destination table, delete the rule
The destination table schema is defined when you create or update a summary rule. If the query in the summary rule includes operators that allow output schema expansion based on incoming data - for example, if the query uses the `arg_max(expression, *)` function - Azure Monitor doesn't add new columns to the destination table after you create or update the summary rule, and the output data that requires these columns will be dropped. To add the new fields to the destination table, [update the summary rule](#create-or-update-a-summary-rule) or [add a column to your table manually](create-custom-table.md#add-or-delete-a-custom-column).
-### Deleted data remains in workspace, subject to retention period
+### Data for removed columns remains in workspace, subject to retention period
-When you [delete columns or a custom log table](create-custom-table.md), data remains in the workspace and is subjected to the [retention period](data-retention-archive.md) defined on the table or workspace. During the retention period, if you create a table with the same name and fields, Azure Monitor recreates the table with the old data. To delete old data, [update the table retention period](/rest/api/loganalytics/tables/update) with the minimum retention supported (four days) and then delete the table.
+When you remove columns from the query, the columns and their data remain in the destination table and are subject to the [retention period](data-retention-archive.md) defined on the table or workspace. If the removed columns aren't needed in the destination table, [update the schema and delete the columns](create-custom-table.md#add-or-delete-a-custom-column) accordingly. During the retention period, if you add columns with the same name, old data that hasn't passed the retention period shows up again.
## Related content
azure-monitor Profiler Cloudservice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-cloudservice.md
Deploy your service with the new Diagnostics configuration. Application Insights
> [!div class="nextstepaction"] > [Generate load and view Profiler traces](./profiler-data.md)
azure-monitor Profiler Servicefabric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-servicefabric.md
After you enable Application Insights, redeploy your application.
> [!div class="nextstepaction"] > [Generate load and view Profiler traces](./profiler-data.md)
azure-monitor Profiler Trackrequests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-trackrequests.md
To manually track requests:
} ``` ## Next steps
azure-monitor Profiler Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-vm.md
# Enable Profiler for web apps on an Azure virtual machine In this article, you learn how to run Application Insights Profiler on your Azure virtual machine (VM) or Azure virtual machine scale set via three different methods:
azure-monitor Roles Permissions Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/roles-permissions-security.md
New-AzRoleDefinition -Role $role
## Assign a role To assign a role, see [Assign Azure roles using Azure PowerShell](../role-based-access-control/role-assignments-powershell.md).
azure-monitor Snapshot Debugger Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/snapshot-debugger/snapshot-debugger-troubleshoot.md
If that doesn't solve the problem, then refer to the following manual troublesho
Make sure you're using the correct instrumentation key in your published application. Usually, the instrumentation key is read from the *ApplicationInsights.config* file. Verify the value is the same as the instrumentation key for the Application Insights resource that you see in the portal. ## <a id="SSL"></a>Check TLS/SSL client settings (ASP.NET)
azure-monitor Snapshot Debugger Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/snapshot-debugger/snapshot-debugger-vm.md
internal class LoggerExample
> [!NOTE] > By default, the Application Insights Logger (`ApplicationInsightsLoggerProvider`) forwards exceptions to the Snapshot Debugger via `TelemetryClient.TrackException`. This behavior is controlled via the `TrackExceptionsAsExceptionTelemetry` property on the `ApplicationInsightsLoggerOptions` class. If you set `TrackExceptionsAsExceptionTelemetry` to `false` when configuring the Application Insights Logger, then the preceding example will not trigger the Snapshot Debugger. In this case, modify your code to call `TrackException` manually. ## Next steps
azure-monitor Snapshot Debugger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/snapshot-debugger/snapshot-debugger.md
This section contains the release notes for the `Microsoft.ApplicationInsights.S
For bug reports and feedback, [open an issue on GitHub](https://github.com/microsoft/ApplicationInsights-SnapshotCollector). ### [1.4.6](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.4.6) A point release to address a regression when using .NET 8 applications.
azure-monitor Vminsights Enable Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-enable-overview.md
To provide accurate and efficient troubleshooting capabilities, the Map feature
For more information about data collection and usage, see the [Microsoft Online Services Privacy Statement](https://go.microsoft.com/fwlink/?LinkId=512132). ## Next steps
azure-netapp-files Azure Netapp Files Quickstart Set Up Account Create Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-quickstart-set-up-account-create-volumes.md
Use the Azure portal, PowerShell, or the Azure CLI to [register for NetApp Resou
# [Template](#tab/template) The following code snippet shows how to create a NetApp account in an Azure Resource Manager template (ARM template), using the [Microsoft.NetApp/netAppAccounts](/azure/templates/microsoft.netapp/netappaccounts) resource. To run the code, download the [full ARM template](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.netapp/anf-nfs-volume/azuredeploy.json) from our GitHub repo.
The following code snippet shows how to create a NetApp account in an Azure Reso
# [Template](#tab/template)
-<!-- [!INCLUDE [About Azure Resource Manager](../../includes/resource-manager-quickstart-introduction.md)] -->
+<!-- [!INCLUDE [About Azure Resource Manager](~/reusable-content/ce-skilling/azure/includes/resource-manager-quickstart-introduction.md)] -->
The following code snippet shows how to create a capacity pool in an Azure Resource Manager template (ARM template), using the [Microsoft.NetApp/netAppAccounts/capacityPools](/azure/templates/microsoft.netapp/netappaccounts/capacitypools) resource. To run the code, download the [full ARM template](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.netapp/anf-nfs-volume/azuredeploy.json) from our GitHub repo.
The following code snippet shows how to create a capacity pool in an Azure Resou
# [Template](#tab/template)
-<!-- [!INCLUDE [About Azure Resource Manager](../../includes/resource-manager-quickstart-introduction.md)] -->
+<!-- [!INCLUDE [About Azure Resource Manager](~/reusable-content/ce-skilling/azure/includes/resource-manager-quickstart-introduction.md)] -->
The following code snippets show how to set up a VNet and create an Azure NetApp Files volume in an Azure Resource Manager template (ARM template). VNet setup uses the [Microsoft.Network/virtualNetworks](/azure/templates/Microsoft.Network/virtualNetworks) resource. Volume creation uses the [Microsoft.NetApp/netAppAccounts/capacityPools/volumes](/azure/templates/microsoft.netapp/netappaccounts/capacitypools/volumes) resource. To run the code, download the [full ARM template](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.netapp/anf-nfs-volume/azuredeploy.json) from our GitHub repo.
azure-portal Quick Create Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/quick-create-bicep.md
Last updated 12/11/2023
A [dashboard](azure-portal-dashboards.md) in the Azure portal is a focused and organized view of your cloud resources. This quickstart shows how to deploy a Bicep file to create a dashboard. The example dashboard shows the performance of a virtual machine (VM), along with some static information and links. ## Prerequisites
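A minimal deployment sketch with the Azure CLI; the file name and parameter are hypothetical placeholders, not taken from this quickstart:

```azurecli
# Deploy the dashboard Bicep file to an existing resource group.
az deployment group create \
    --resource-group "my-resource-group" \
    --template-file "dashboard.bicep" \
    --parameters virtualMachineResourceId="<vm-resource-id>"
```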
azure-portal Quick Create Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/quick-create-template.md
Last updated 12/11/2023
A [dashboard](azure-portal-dashboards.md) in the Azure portal is a focused and organized view of your cloud resources. This quickstart shows how to deploy an Azure Resource Manager template (ARM template) to create a dashboard. The example dashboard shows the performance of a virtual machine (VM), along with some static information and links. If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal, where you can edit the details (such as the VM used in the dashboard) before you deploy.
azure-portal Quickstart Portal Dashboard Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/quickstart-portal-dashboard-powershell.md
A [dashboard](azure-portal-dashboards.md) in the Azure portal is a focused and o
- If you choose to use PowerShell locally, this article requires that you install the Az PowerShell module and connect to your Azure account using the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet. For more information about installing the Az PowerShell module, see [Install Azure PowerShell](/powershell/azure/install-azure-powershell). ## Choose a specific Azure subscription
azure-portal Set Preferences https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/set-preferences.md
Information about your custom settings is stored in Azure. You can delete the fo
It's a good idea to export and review your settings before you delete them, as described in the previous section. Rebuilding [dashboards](azure-portal-dashboards.md) or redoing custom settings can be time-consuming. To delete your portal settings, select **Delete all settings and private dashboards** from the top of **My information**. You'll be prompted to confirm the deletion. When you do so, all settings customizations will return to the default settings, and all of your private dashboards will be lost.
azure-resource-manager Bicep Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-config.md
Title: Bicep config file
description: Describes the configuration file for your Bicep deployments Previously updated : 06/03/2024 Last updated : 06/27/2024 # Configure your Bicep environment
The [Bicep linter](linter.md) checks Bicep files for syntax errors and best prac
You can enable experimental features by adding the following section to your `bicepconfig.json` file.
-Here's an example of enabling features 'compileTimeImports' and 'userDefinedFunctions`.
+Here's an example of enabling the 'assertions' and 'testFramework' features.
```json {
azure-resource-manager Bicep Functions Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions-deployment.md
Title: Bicep functions - deployment
description: Describes the functions to use in a Bicep file to retrieve deployment information. Previously updated : 03/20/2024 Last updated : 06/26/2024 # Deployment functions for Bicep
The preceding example returns the following object:
`environment()`
-Returns information about the Azure environment used for deployment.
+Returns information about the Azure environment used for deployment. The `environment()` function is not aware of resource configurations. It can only return a single default DNS suffix for each resource type.
Namespace: [az](bicep-functions.md#namespaces-for-functions).
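As a loose analogy only (not part of the Bicep function itself), you can inspect the comparable per-cloud values from the Azure CLI:

```azurecli
# List the name and DNS suffixes of the current Azure cloud,
# comparable to the values environment() returns at deployment time.
az cloud show --query "{name:name, suffixes:suffixes}" --output json
```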
azure-resource-manager Deployment Stacks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/deployment-stacks.md
To create a deployment stack at the management group scope:
# [PowerShell](#tab/azure-powershell) ```azurepowershell
-New-AzManagmentGroupDeploymentStack `
+New-AzManagementGroupDeploymentStack `
-Name "<deployment-stack-name>" ` -Location "<location>" ` -TemplateFile "<bicep-file-name>" `
To update a deployment stack at the management group scope:
# [PowerShell](#tab/azure-powershell) ```azurepowershell
-Set-AzManagmentGroupDeploymentStack `
+Set-AzManagementGroupDeploymentStack `
-Name "<deployment-stack-name>" ` -Location "<location>" ` -TemplateFile "<bicep-file-name>" `
To apply deny settings at the management group scope:
# [PowerShell](#tab/azure-powershell) ```azurepowershell
-New-AzManagmentGroupDeploymentStack `
+New-AzManagementGroupDeploymentStack `
-Name "<deployment-stack-name>" ` -Location "<location>" ` -TemplateFile "<bicep-file-name>" `
To export a deployment stack at the management group scope:
# [PowerShell](#tab/azure-powershell) ```azurepowershell
-Save-AzManagmentGroupDeploymentStack `
+Save-AzManagementGroupDeploymentStack `
-Name "<deployment-stack-name>" ` -ManagementGroupId "<management-group-id>" ```
azure-resource-manager Linter Rule No Deployments Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/linter-rule-no-deployments-resources.md
resource storageAccount 'Microsoft.Storage/storageAccounts@2023-01-01' = {
} ```
-Additionally, you can also refence ARM templates using the [module](./modules.md) statement.
+Additionally, you can also reference ARM templates using the [module](./modules.md) statement.
_main.bicep_:
azure-resource-manager Publish Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/publish-managed-identity.md
When you link the deployment of the managed application to existing resources, b
"type": "Microsoft.Common.TextBox", "label": "Network interface resource ID", "defaultValue": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/testRG/providers/Microsoft.Network/networkInterfaces/existingnetworkinterface",
- "toolTip": "Must represent the identity as an Azure Resource Manager resource identifer format ex. /subscriptions/sub1/resourcegroups/myGroup/providers/Microsoft.Network/networkInterfaces/networkinterface1",
+ "toolTip": "Must represent the identity as an Azure Resource Manager resource identifier format ex. /subscriptions/sub1/resourcegroups/myGroup/providers/Microsoft.Network/networkInterfaces/networkinterface1",
"visible": true }, {
When you link the deployment of the managed application to existing resources, b
"type": "Microsoft.Common.TextBox", "label": "User-assigned managed identity resource ID", "defaultValue": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/testRG/providers/Microsoft.ManagedIdentity/userassignedidentites/myuserassignedidentity",
- "toolTip": "Must represent the identity as an Azure Resource Manager resource identifer format ex. /subscriptions/sub1/resourcegroups/myGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/identity1",
+ "toolTip": "Must represent the identity as an Azure Resource Manager resource identifier format ex. /subscriptions/sub1/resourcegroups/myGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/identity1",
"visible": true } ]
azure-resource-manager Azure Subscription Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/azure-subscription-service-limits.md
Pricing tiers determine the capacity and limits of your search service. Tiers in
**Limits per subscription** **Limits per search service** To learn more about limits on a more granular level, such as document size, queries per second, keys, requests, and responses, see [Service limits in Azure AI Search](../../search/search-limits-quotas-capacity.md).
The following table details the features and limits of the Basic, Standard, and
## Key Vault limits ## Managed identity limits
azure-resource-manager Manage Resource Groups Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/manage-resource-groups-portal.md
Last updated 03/19/2024
Learn how to use the [Azure portal](https://portal.azure.com) with [Azure Resource Manager](overview.md) to manage your Azure resource groups. For managing Azure resources, see [Manage Azure resources by using the Azure portal](manage-resources-portal.md). ## What is a resource group
azure-resource-manager Manage Resources Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/manage-resources-portal.md
Last updated 03/19/2024
Learn how to use the [Azure portal](https://portal.azure.com) with [Azure Resource Manager](overview.md) to manage your Azure resources. For managing resource groups, see [Manage Azure resource groups by using the Azure portal](manage-resource-groups-portal.md). ## Deploy resources to a resource group
azure-resource-manager Manage Resources Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/manage-resources-rest.md
Learn how to use the REST API for [Azure Resource Manager](overview.md) to manag
## Obtain an access token To make a REST API call to Azure, you first need to obtain an access token. Include this access token in the headers of your Azure REST API calls by using the "Authorization" header and setting the value to "Bearer {access-token}".
-If you need to programatically retrieve new tokens as part of your application, you can obtain an access token by [Registering your client application with Microsoft Entra ID](/rest/api/azure/#register-your-client-application-with-azure-ad).
+If you need to programmatically retrieve new tokens as part of your application, you can obtain an access token by [Registering your client application with Microsoft Entra ID](/rest/api/azure/#register-your-client-application-with-azure-ad).
If you are getting started and want to test Azure REST APIs using your individual token, you can retrieve your current access token quickly with either Azure PowerShell or Azure CLI.
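For example, a small sketch with the Azure CLI; `az rest` is included because it acquires and attaches the token automatically:

```azurecli
# Print the signed-in user's bearer token for the ARM endpoint.
TOKEN=$(az account get-access-token --query accessToken --output tsv)

# Use the token in an Authorization header on a REST call ...
curl --header "Authorization: Bearer $TOKEN" \
    "https://management.azure.com/subscriptions?api-version=2020-01-01"

# ... or let az rest handle token acquisition for you.
az rest --method get --url "https://management.azure.com/subscriptions?api-version=2020-01-01"
```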
azure-resource-manager Move Support Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/move-support-resources.md
Title: Move operation support by resource type description: Lists the Azure resource types that can be moved to a new resource group, subscription, or region. Previously updated : 06/13/2024 Last updated : 06/27/2024 # Move operation support for resources
Before starting your move operation, review the [checklist](./move-resource-grou
> | trafficmanagerusermetricskeys | No | No | No | > | virtualhubs | No | No | No | > | virtualnetworkgateways | No| No | No |
-> | virtualnetworks | **Yes** | **Yes** | No |
+> | virtualnetworks | **Yes** | **Yes** | **Yes** |
> | virtualnetworktaps | No | No | No | > | virtualrouters | **Yes** | **Yes** | No | > | virtualwans | No | No |
azure-resource-manager Request Limits And Throttling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/request-limits-and-throttling.md
Title: Request limits and throttling description: Describes how to use throttling with Azure Resource Manager requests when subscription limits are reached. Previously updated : 03/15/2024 Last updated : 06/27/2024
msrest.http_logger : 'x-ms-ratelimit-remaining-subscription-writes': '1199'
## Next steps
-* For a complete PowerShell example, see [Check Resource Manager Limits for a Subscription](https://github.com/Microsoft/csa-misc-utils/tree/master/psh-GetArmLimitsViaAPI).
* For more information about limits and quotas, see [Azure subscription and service limits, quotas, and constraints](../../azure-resource-manager/management/azure-subscription-service-limits.md). * To learn about handling asynchronous REST requests, see [Track asynchronous Azure operations](async-operations.md).
azure-resource-manager Resource Manager Personal Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/resource-manager-personal-data.md
Last updated 03/19/2024
To avoid exposing sensitive information, delete any personal information you may have provided in deployments, resource groups, or tags. Azure Resource Manager provides operations that let you manage personal data you may have provided in deployments, resource groups, or tags. ## Delete personal data in deployment history
azure-resource-manager Resource Name Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/resource-name-rules.md
Title: Resource naming restrictions description: Shows the rules and restrictions for naming Azure resources. Previously updated : 05/20/2024 Last updated : 06/26/2024 # Naming rules and restrictions for Azure resources
In the following tables, the term alphanumeric refers to:
> | Entity | Scope | Length | Valid Characters | > | | | | | > | locks | scope of assignment | 1-90 | Alphanumerics, periods, underscores, hyphens, and parentheses.<br><br>Can't end in period. |
-> | policyAssignments | scope of assignment | 1-128 display name<br><br>1-64 resource name<br><br>1-24 resource name at management group scope | Display name can contain any characters.<br><br>Resource name can't use:<br>`#<>*%&:\?.+/` or control characters. <br><br>Can't end with period or space. |
-> | policyDefinitions | scope of definition | 1-128 display name<br><br>1-64 resource name | Display name can contain any characters.<br><br>Resource name can't use:<br>`#<>*%&:\?+/` or control characters. <br><br>Can't end with period or space. |
-> | policyExemptions | scope of exemption | 1-128 display name<br><br>1-64 resource name | Display name can contain any characters.<br><br>Resource name can't use:<br>`#<>*%&:\?.+/` or control characters. <br><br>Can't end with period or space. |
-> | policySetDefinitions | scope of definition | 1-128 display name<br><br>1-64 resource name | Display name can contain any characters.<br><br>Resource name can't use:<br>`#<>*%&:\?.+/` or control characters. <br><br>Can't end with period or space. |
+> | policyAssignments | scope of assignment | 1-128 display name<br><br>1-64 resource name<br><br>1-24 resource name at management group scope | Display name can contain any characters.<br><br>Resource name can't use:<br>`#<>%&:\?/` or control characters. <br><br>Can't end with period or space. |
+> | policyDefinitions | scope of definition | 1-128 display name<br><br>1-64 resource name | Display name can contain any characters.<br><br>Resource name can't use:<br>`#<>%&:\?/` or control characters. <br><br>Can't end with period or space. |
+> | policyExemptions | scope of exemption | 1-128 display name<br><br>1-64 resource name | Display name can contain any characters.<br><br>Resource name can't use:<br>`#<>%&:\?/` or control characters. <br><br>Can't end with period or space. |
+> | policySetDefinitions | scope of definition | 1-128 display name<br><br>1-64 resource name | Display name can contain any characters.<br><br>Resource name can't use:<br>`#<>%&:\?/` or control characters. <br><br>Can't end with period or space. |
> | roleAssignments | tenant | 36 | Must be a globally unique identifier (GUID). | > | roleDefinitions | tenant | 36 | Must be a globally unique identifier (GUID). |
azure-resource-manager Tag Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/tag-resources.md
Resource tags support all cost-accruing services. To ensure that cost-accruing s
> > Tag values are case-sensitive. ## Required access
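To illustrate the case sensitivity, a hypothetical sketch with the Azure CLI; the resource ID is a placeholder:

```azurecli
# Merge a tag onto a resource. Tag values are case-sensitive, so
# Env=Dev and Env=dev are distinct values in filters and cost reports.
az tag update \
    --resource-id "/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account>" \
    --operation Merge \
    --tags Env=Dev
```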
azure-resource-manager Tag Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/tag-support.md
To get the same data as a file of comma-separated values, download [tag-support.
> | registries / models / versions | No | No | > | virtualclusters | Yes | Yes | > | workspaces | Yes | Yes |
-> | workspaces / batchEndpoints | Yes | No |
+> | workspaces / batchEndpoints | Yes | Yes |
> | workspaces / batchEndpoints / deployments | Yes | Yes |
-> | workspaces / batchEndpoints / deployments / jobs | No | No |
-> | workspaces / batchEndpoints / jobs | No | No |
+> | workspaces / batchEndpoints / deployments / jobs | No | Yes |
+> | workspaces / batchEndpoints / jobs | No | Yes |
> | workspaces / codes | No | No | > | workspaces / codes / versions | No | No | > | workspaces / components | No | No |
To get the same data as a file of comma-separated values, download [tag-support.
> | workspaces / schedules | No | No | > | workspaces / services | No | No |
-> [!NOTE]
-> Workspace tags don't propagate to compute clusters and compute instances. It is not supported with tracking cost at cluster/batch endpoint level.
- ## Microsoft.Maintenance > [!div class="mx-tableFixed"]
azure-resource-manager Template Functions Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-functions-deployment.md
Title: Template functions - deployment
description: Describes the functions to use in an Azure Resource Manager template (ARM template) to retrieve deployment information. Previously updated : 08/22/2023 Last updated : 06/26/2024 # Deployment functions for ARM templates
For a subscription deployment, the following example returns a deployment object
`environment()`
-Returns information about the Azure environment used for deployment.
+Returns information about the Azure environment used for deployment. The `environment()` function is not aware of resource configurations. It can only return a single default DNS suffix for each resource type.
In Bicep, use the [environment](../bicep/bicep-functions-deployment.md#environment) function.
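For reference, here's a minimal ARM template sketch that deploys no resources and simply surfaces the `environment()` object as an output:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [],
  "outputs": {
    "environmentOutput": {
      "type": "object",
      "value": "[environment()]"
    }
  }
}
```

The output includes properties such as the portal URL and the default DNS suffixes for the target cloud, which is why per-resource configuration details aren't reflected.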
azure-resource-manager Common Deployment Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/troubleshooting/common-deployment-errors.md
If your error code isn't listed, submit a GitHub issue. On the right side of the
| MissingSubscriptionRegistration | Register your subscription with the resource provider. | [Resolve registration](error-register-resource-provider.md) | | NoRegisteredProviderFound | Check resource provider registration status. | [Resolve registration](error-register-resource-provider.md) | | NotFound | You might be attempting to deploy a dependent resource in parallel with a parent resource. Check if you need to add a dependency. | [Resolve dependencies](error-not-found.md) |
-| OperationNotAllowed | There can be several reasons for this error message.<br><br>1. The deployment is attempting an operation which is not allowed on spcecified SKU.<br><br>2. The deployment is attempting an operation that exceeds the quota for the subscription, resource group, or region. If possible, revise your deployment to stay within the quotas. Otherwise, consider requesting a change to your quotas. | [Resolve quotas](error-resource-quota.md) |
+| OperationNotAllowed | There can be several reasons for this error message.<br><br>1. The deployment is attempting an operation that isn't allowed on the specified SKU.<br><br>2. The deployment is attempting an operation that exceeds the quota for the subscription, resource group, or region. If possible, revise your deployment to stay within the quotas. Otherwise, consider requesting a change to your quotas. | [Resolve quotas](error-resource-quota.md) |
| OperationNotAllowedOnVMImageAsVMsBeingProvisioned | You might be attempting to delete an image that is currently being used to provision VMs. You cannot delete an image that is being used by any virtual machine during the deployment process. Retry the image delete operation after the deployment of the VM is complete. | | | ParentResourceNotFound | Make sure a parent resource exists before creating the child resources. | [Resolve parent resource](error-parent-resource.md) | | PasswordTooLong | You might have selected a password with too many characters, or converted your password value to a secure string before passing it as a parameter. If the template includes a **secure string** parameter, you don't need to convert the value to a secure string. Provide the password value as text. | |
azure-resource-manager Quickstart Troubleshoot Arm Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/troubleshooting/quickstart-troubleshoot-arm-deployment.md
This quickstart describes how to troubleshoot Azure Resource Manager template (ARM template) JSON deployment errors. You'll set up a template with errors and learn how to fix the errors. There are three types of errors that are related to a deployment:
azure-signalr Signalr Cli Create Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/scripts/signalr-cli-create-service.md
This sample script creates a new Azure SignalR Service resource in a new resource group with a random name. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)] ## Sample script ### Run the script
This sample script creates a new Azure SignalR Service resource in a new resourc
## Clean up resources

```azurecli
az group delete --name $resourceGroup
```
azure-signalr Signalr Cli Create With App Service Github Oauth https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/scripts/signalr-cli-create-with-app-service-github-oauth.md
This sample script creates a new Azure SignalR Service resource, which is used to push real-time content updates to clients. This script also adds a new Web App and App Service plan to host your ASP.NET Core Web App that uses the SignalR Service. The web app is configured with app settings to connect to the new SignalR service resource, and authenticate with [GitHub authentication](https://developer.github.com/v3/guides/basics-of-authentication/). The web app is also configured to use a local git repository deployment source. ## Sample scripts ### Create the SignalR service with an App service
This sample script creates a new Azure SignalR Service resource, which is used t
## Clean up resources

```azurecli
az group delete --name $resourceGroup
```
azure-signalr Signalr Cli Create With App Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/scripts/signalr-cli-create-with-app-service.md
This sample script creates a new Azure SignalR Service resource, which is used to push real-time content updates to clients. This script also adds a new Web App and App Service plan to host your ASP.NET Core Web App that uses the SignalR Service. The web app is configured with an App Setting named *AzureSignalRConnectionString* to connect to the new SignalR service resource. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)] ## Sample script ### Run the script
This sample script creates a new Azure SignalR Service resource, which is used t
## Clean up resources

```azurecli
az group delete --name $resourceGroup
```
azure-signalr Signalr Concept Authenticate Oauth https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-concept-authenticate-oauth.md
In this tutorial, you learn how to:
> - Add an authentication controller to support GitHub authentication > - Deploy your ASP.NET Core web app to Azure ## Prerequisites
azure-signalr Signalr Howto Event Grid Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-howto-event-grid-integration.md
Azure Event Grid is a fully managed event routing service that provides uniform event consumption using a pub-sub model. In this guide, you use the Azure CLI to create an Azure SignalR Service, subscribe to connection events, then deploy a sample web application to receive the events. Finally, you can connect and disconnect and see the event payload in the sample application. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
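As a hedged sketch, the SignalR Service instance used in this guide could be created with the Azure CLI; the resource name, group, and SKU below are placeholders:

```azurecli
az signalr create \
  --name mySignalRService \
  --resource-group myResourceGroup \
  --sku Free_F1 \
  --service-mode Serverless
```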
azure-signalr Signalr Quickstart Azure Signalr Service Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-quickstart-azure-signalr-service-arm-template.md
This quickstart walks you through the process of creating an Azure SignalR Service using an Azure Resource Manager (ARM) template. You can deploy the Azure SignalR Service through the Azure portal, PowerShell, or CLI. If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template opens in the Azure portal once you sign in.
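If you prefer the command line over the **Deploy to Azure** button, deploying an ARM template generally looks like this hedged Azure CLI sketch; the resource group and template file names are placeholders:

```azurecli
az deployment group create \
  --resource-group myResourceGroup \
  --template-file azuredeploy.json
```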
azure-signalr Signalr Quickstart Azure Signalr Service Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-quickstart-azure-signalr-service-bicep.md
This quickstart describes how to use Bicep to create an Azure SignalR Service using Azure CLI or PowerShell. ## Prerequisites
azure-signalr Signalr Quickstart Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-quickstart-rest-api.md
This quickstart can be run on macOS, Windows, or Linux.
* [.NET Core SDK](https://dotnet.microsoft.com/download) * A text editor or code editor of your choice. Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.md) or [let us know](https://aka.ms/asrs/qsapi).
azure-vmware Azure Vmware Solution Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/azure-vmware-solution-known-issues.md
description: This article provides details about the known issues of Azure VMwar
Previously updated : 6/7/2024 Last updated : 6/27/2024 # Known issues: Azure VMware Solution
Refer to the table to find details about resolution dates or possible workaround
| After my private cloud NSX-T Data Center upgrade to version [3.2.2](https://docs.vmware.com/en/VMware-NSX/3.2.2/rn/vmware-nsxt-data-center-322-release-notes/https://docsupdatetracker.net/index.html), the NSX-T Manager **Capacity - Maximum Capacity Threshold** alarm is raised | 2023 | The alarm is raised because there are more than four clusters in the private cloud with the medium form factor for the NSX-T Data Center Unified Appliance. The form factor needs to be scaled up to large. Microsoft should detect this issue, but you can also open a support request. | 2023 | | When I build a VMware HCX Service Mesh with the Enterprise license, the Replication Assisted vMotion Migration option isn't available. | 2023 | The default VMware HCX Compute Profile doesn't have the Replication Assisted vMotion Migration option enabled. From the Azure VMware Solution vSphere Client, select the VMware HCX option and edit the default Compute Profile to enable Replication Assisted vMotion Migration. | 2023 | | [VMSA-2023-023](https://www.vmware.com/security/advisories/VMSA-2023-0023.html) VMware vCenter Server Out-of-Bounds Write Vulnerability (CVE-2023-34048) publicized in October 2023 | October 2023 | A risk assessment of CVE-2023-34048 was conducted, and it was determined that sufficient controls are in place within Azure VMware Solution to reduce the risk of CVE-2023-34048 from a CVSS Base Score of 9.8 to an adjusted Environmental Score of [6.8](https://www.first.org/cvss/calculator/3.1#CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H/MAC:L/MPR:H/MUI:R) or lower. Adjustments from the base score were possible due to the network isolation of the Azure VMware Solution vCenter Server (ports 2012, 2014, and 2020 are not exposed via any interactive network path) and multiple levels of authentication and authorization necessary to gain interactive access to the vCenter Server network segment. Azure VMware Solution is currently rolling out [7.0U3o](https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-vcenter-server-70u3o-release-notes/https://docsupdatetracker.net/index.html) to address this issue. | March 2024 - Resolved in [ESXi 7.0U3o](https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-vcenter-server-70u3o-release-notes/https://docsupdatetracker.net/index.html) |
-| The AV64 SKU currently supports RAID-1 FTT1, RAID-5 FTT1, and RAID-1 FTT2 vSAN storage policies. For more information, see [AV64 supported RAID configuration](introduction.md#av64-supported-raid-configuration) | Nov 2023 | Use AV36, AV36P, or AV52 SKUs when RAID-6 FTT2 or RAID-1 FTT3 storage policies are needed. | N/A |
+| The AV64 SKU currently supports RAID-1 FTT1, RAID-5 FTT1, and RAID-1 FTT2 vSAN storage policies. For more information, see [AV64 supported RAID configuration](introduction.md#av64-supported-raid-configuration) | Nov 2023 | The AV64 SKU now supports 7 Fault Domains and all vSAN storage policies. For more information, see [AV64 supported Azure regions](architecture-private-clouds.md#azure-region-availability-zone-az-to-sku-mapping-table) | June 2024 |
| VMware HCX version 4.8.0 Network Extension (NE) Appliance VMs running in High Availability (HA) mode may experience intermittent Standby to Active failover. For more information, see [HCX - NE appliances in HA mode experience intermittent failover (96352)](https://kb.vmware.com/s/article/96352) | Jan 2024 | Avoid upgrading to VMware HCX 4.8.0 if you are using NE appliances in an HA configuration. | Feb 2024 - Resolved in [VMware HCX 4.8.2](https://docs.vmware.com/en/VMware-HCX/4.8.2/rn/vmware-hcx-482-release-notes/https://docsupdatetracker.net/index.html) | | [VMSA-2024-0006](https://www.vmware.com/security/advisories/VMSA-2024-0006.html) ESXi Use-after-free and Out-of-bounds write vulnerability | March 2024 | Microsoft has confirmed the applicability of the vulnerabilities and is rolling out the provided VMware updates. | March 2024 - Resolved in [vCenter Server 7.0 U3o & ESXi 7.0 U3o](architecture-private-clouds.md#vmware-software-versions) | | When I run the VMware HCX Service Mesh Diagnostic wizard, all diagnostic tests pass (green check mark), yet failed probes are reported. See [HCX - Service Mesh diagnostics test returns 2 failed probes](https://knowledge.broadcom.com/external/article?legacyId=96708) | 2024 | None, this will be fixed in 4.9+. | N/A | | [VMSA-2024-0011](https://support.broadcom.com/web/ecx/support-content-notification/-/external/content/SecurityAdvisories/0/24308) Out-of-bounds read/write vulnerability (CVE-2024-22273) | June 2024 | Microsoft has confirmed the applicability of the CVE-2024-22273 vulnerability and it will be addressed in the upcoming 8.0u2b Update. | July 2024 | | [VMSA-2024-0012](https://support.broadcom.com/web/ecx/support-content-notification/-/external/content/SecurityAdvisories/0/24453) Multiple Vulnerabilities in the DCERPC Protocol and Local Privilege Escalations | June 2024 | Microsoft, working with Broadcom, adjudicated the risk of these vulnerabilities at an adjusted Environmental Score of [6.8](https://www.first.org/cvss/calculator/3.1#CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H/MAC:L/MPR:H/MUI:R) or lower. Adjustments from the base score were possible due to the network isolation of the Azure VMware Solution vCenter Server (ports 2012, 2014, and 2020 are not exposed via any interactive network path) and multiple levels of authentication and authorization necessary to gain interactive access to the vCenter Server network segment. A plan is being put in place to address these vulnerabilities at a future date TBD. | N/A |
+| Zerto DR is not currently supported with the AV64 SKU. The AV64 SKU uses ESXi host secure boot, and Zerto DR hasn't implemented a signed VIB for the ESXi install. | 2024 | Continue using the AV36, AV36P, and AV52 SKUs for Zerto DR. | N/A |
In this article, you learned about the current known issues with the Azure VMware Solution.
azure-vmware Deploy Zerto Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/deploy-zerto-disaster-recovery.md
# Deploy Zerto disaster recovery on Azure VMware Solution
+> [!IMPORTANT]
+> **Temporary pause on new onboarding for Zerto on Azure VMware Solution**
+>
+> Due to ongoing security enhancements and development work on the Linux version of Azure VMware Solution Run Command, along with related migration activities, we're currently not onboarding new customers for Zerto on Azure VMware Solution. These efforts include transitioning to a Linux-based Run Command, meeting the security requirements to operate the Zerto Linux appliance, and migrating existing customers to the latest Zerto version. This pause is in effect until August 6, 2024.
+>
+> Please note: Existing customers continue to receive full support as usual. For more information about the timeline and future onboarding availability, reach out to your Zerto account team.
+>
+> Thank you for your understanding and cooperation.
++
+> [!IMPORTANT]
+> The AV64 node type doesn't currently support Zerto Disaster Recovery. Contact your Zerto account team for more information and an estimate of when support will be available.
++ In this article, learn how to implement disaster recovery for on-premises VMware or Azure VMware Solution-based virtual machines (VMs). The solution in this article uses [Zerto disaster recovery](https://www.zerto.com/solutions/use-cases/disaster-recovery/). Instances of Zerto are deployed at both the protected and the recovery sites. Zerto is a disaster recovery solution designed to minimize downtime of VMs should a disaster occur. Zerto's platform is built on the foundation of Continuous Data Protection (CDP) that enables minimal or close to no data loss. The platform provides the level of protection wanted for many business-critical and mission-critical enterprise applications. Zerto also automates and orchestrates failover and failback to ensure minimal downtime in a disaster. Overall, Zerto simplifies management through automation and ensures fast and highly predictable recovery times.
azure-vmware Disaster Recovery Using Vmware Site Recovery Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/disaster-recovery-using-vmware-site-recovery-manager.md
Ensure you provide the remote user the VMware VRM administrator and VMware SRM a
> [!NOTE]
-> The current version of VMware Site Recovery Manager (SRM) in Azure VMware Solution is 8.5.0.3.
-
+> The current version of VMware Site Recovery Manager (SRM) in Azure VMware Solution is 8.7.0.3.
1. From the **Disaster Recovery Solution** drop-down, select **VMware Site Recovery Manager (SRM) - vSphere Replication**. :::image type="content" source="media/VMware-srm-vsphere-replication/disaster-recovery-solution-srm-add-on.png" alt-text="Screenshot showing the Disaster recovery tab under Add-ons with VMware Site Recovery Manager (SRM) - vSphere replication selected." border="true" lightbox="media/VMware-srm-vsphere-replication/disaster-recovery-solution-srm-add-on.png":::
azure-web-pubsub Howto Develop Create Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-develop-create-instance.md
The [Azure CLI](/cli/azure) is a set of commands used to create and manage Azure
## Create a resource using Bicep template ## Review the Bicep file
azure-web-pubsub Quickstart Bicep Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/quickstart-bicep-template.md
This quickstart describes how to use Bicep to create an Azure Web PubSub service using Azure CLI or PowerShell. ## Prerequisites
azure-web-pubsub Quickstart Cli Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/quickstart-cli-create.md
ms.devlang: azurecli
The [Azure CLI](/cli/azure) is a set of commands used to create and manage Azure resources. The Azure CLI is available across Azure services and is designed to get you working quickly with Azure, with an emphasis on automation. This quickstart shows you the options to create an Azure Web PubSub instance with the Azure CLI. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
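As a hedged sketch, creating an instance with the Azure CLI looks like the following; it assumes the `webpubsub` CLI extension, and the names and SKU are placeholders:

```azurecli
# Install the Web PubSub CLI extension (one time)
az extension add --name webpubsub --upgrade

# Create a Web PubSub instance
az webpubsub create \
  --name myWebPubSub \
  --resource-group myResourceGroup \
  --sku Free_F1 \
  --unit-count 1
```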
azure-web-pubsub Quickstart Cli Try https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/quickstart-cli-try.md
ms.devlang: azurecli
This quickstart shows you how to connect to the Azure Web PubSub instance and publish messages to the connected clients using the [Azure CLI](/cli/azure). [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
azure-web-pubsub Quickstart Live Demo https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/quickstart-live-demo.md
Last updated 11/08/2021
This quickstart shows you how to get started easily with a [Pub/Sub live demo](https://aka.ms/awps/quicktry). [!INCLUDE [create-instance-portal](includes/create-instance-portal.md)]
azure-web-pubsub Quickstart Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/quickstart-serverless.md
In this tutorial, you learn how to:
[!INCLUDE [create-instance-portal](includes/create-instance-portal.md)]
azure-web-pubsub Tutorial Build Chat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/tutorial-build-chat.md
In this tutorial, you learn how to:
> * Configure event handler settings for Azure Web PubSub > * Handle events in the app server and build a real-time chat app [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
azure-web-pubsub Tutorial Serverless Iot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/tutorial-serverless-iot.md
In this tutorial, you learn how to:
* The [Azure CLI](/cli/azure) to manage Azure resources. ## Create an IoT hub
azure-web-pubsub Tutorial Serverless Notification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/tutorial-serverless-notification.md
In this tutorial, you learn how to:
[!INCLUDE [create-instance-portal](includes/create-instance-portal.md)]
azure-web-pubsub Tutorial Subprotocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/tutorial-subprotocol.md
In this tutorial, you learn how to:
> * Generate the full URL to establish the WebSocket connection > * Publish messages between WebSocket clients using subprotocol [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
backup Azure Kubernetes Service Backup Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-kubernetes-service-backup-overview.md
Backup in AKS has two types of hooks:
- Backup hooks - Restore hooks
-## Modify resource while restoring backups to AKS cluster
-
-You can use the *Resource Modification* feature to modify backed-up Kubernetes resources during restore by specifying *JSON* patches as `configmap` deployed in the AKS cluster.
-
-### Create and apply a resource modifier configmap during restore
-
-To create and apply resource modification, follow these steps:
-
-1. Create resource modifiers configmap.
-
- You need to create one configmap in your preferred namespace from a *YAML* file that defined resource modifiers.
-
- **Example for creating command**:
-
- ```json
- version: v1
- resourceModifierRules:
- - conditions:
- groupResource: persistentvolumeclaims
- resourceNameRegex: "^mysql.*$"
- namespaces:
- - bar
- - foo
- labelSelector:
- matchLabels:
- foo: bar
- patches:
- - operation: replace
- path: "/spec/storageClassName"
- value: "premium"
- - operation: remove
- path: "/metadata/labels/test"
-
- ```
-
- - The above *configmap* applies the *JSON* patch to all the Persistent Volume Copies in the *namespaces* bar and *foo* with name that starts with `mysql` and `match label foo: bar`. The JSON patch replaces the `storageClassName` with `premium` and removes the label `test` from the Persistent Volume Copies.
- - Here, the *Namespace* is the original namespace of the backed-up resource, and not the new namespace where the resource is going to be restored.
- - You can specify multiple JSON patches for a particular resource. The patches are applied as per the order specified in the *configmap*. A subsequent patch is applied in order. If multiple patches are specified for the same path, the last patch overrides the previous patches.
- - You can specify multiple `resourceModifierRules` in the *configmap*. The rules are applied as per the order specified in the *configmap*.
--
-2. Creating a resource modifier reference in the restore configuration
-
- When you perform a restore operation, provide the *ConfigMap name* and the *Namespace* where it's deployed as part of restore configuration. These details need to be provided under **Resource Modifier Rules**.
-
- :::image type="content" source="./media/azure-kubernetes-service-backup-overview/resource-modifier-rules.png" alt-text="Screenshot shows the location to provide resource details." lightbox="./media/azure-kubernetes-service-backup-overview/resource-modifier-rules.png":::
--
- Operations supported by **Resource Modifier**
-
- - **Add**
-
- :::image type="content" source="./media/azure-kubernetes-service-backup-overview/add-resource-modifier.png" alt-text="Screenshot shows the addition of resource modifier. ":::
-
- - **Remove**
-
- :::image type="content" source="./media/azure-kubernetes-service-backup-overview/remove-resource-modifier.png" alt-text="Screenshot shows the option to remove resource.":::
-
- - **Replace**
-
- :::image type="content" source="./media/azure-kubernetes-service-backup-overview/replace-resource-modifier.png" alt-text="Screenshot shows the replacement option for resource modifier.":::
-
- - **Move**
- - **Copy**
-
- :::image type="content" source="./media/azure-kubernetes-service-backup-overview/copy-resource-modifier.png" alt-text="Screenshot shows the option to copy resource modifier.":::
-
- - **Test**
-
- You can use the **Test** operation to check if a particular value is present in the resource. If the value is present, the patch is applied. If the value isn't present, the patch isn't applied.
-
- :::image type="content" source="./media/azure-kubernetes-service-backup-overview/test-resource-modifier-value-present.png" alt-text="Screenshot shows the option to test if the resource value modifier is present.":::
-
-### JSON patch
-
-This *configmap* applies the JSON patch to all the deployments in the namespaces by default and `nginx` with the name that starts with `nginxdep`. The JSON patch updates the replica count to *12* for all such deployments.
--
-```json
-resourceModifierRules:
-- conditions:
-groupResource: deployments.apps
-resourceNameRegex: "^nginxdep.*$"
-namespaces:
-- default-- nginx
-patches:
-- operation: replace
-path: "/spec/replicas"
-value: "12"
-
-```
--- **JSON Merge patch**: This config map will apply the JSON Merge Patch to all the deployments in the namespaces default and nginx with the name starting with nginxdep. The JSON Merge Patch will add/update the label "app" with the value "nginx1".-
-```json
--
-version: v1
-resourceModifierRules:
- - conditions:
- groupResource: deployments.apps
- resourceNameRegex: "^nginxdep.*$"
- namespaces:
- - default
- - nginx
- mergePatches:
- - patchData: |
- {
- "metadata" : {
- "labels" : {
- "app" : "nginx1"
- }
- }
- }
--
-```
--- **Strategic Merge patch**: This config map will apply the Strategic Merge Patch to all the pods in the namespace default with the name starting with nginx. The Strategic Merge Patch will update the image of container nginx to mcr.microsoft.com/cbl-mariner/base/nginx:1.22-
-```json
-
-version: v1
-resourceModifierRules:
-- conditions:
- groupResource: pods
- resourceNameRegex: "^nginx.*$"
- namespaces:
- - default
- strategicPatches:
- - patchData: |
- {
- "spec": {
- "containers": [
- {
- "name": "nginx",
- "image": "mcr.microsoft.com/cbl-mariner/base/nginx:1.22"
- }
- ]
- }
- }
-
-```
- ### Backup hooks In a backup hook, you can configure the commands to run the hook before any custom action processing (pre-hooks), or after all custom actions are finished and any additional items specified by custom actions are backed up (post-hooks).
spec:
Learn [how to use hooks during AKS backup](azure-kubernetes-service-cluster-backup.md#use-hooks-during-aks-backup).
+ > [!NOTE]
+ > - During restore, the backup extension waits for the containers to come up and then executes the exec commands defined in the restore hooks on them.
+ > - If you restore to the same namespace that was backed up, the restore hooks aren't executed, because the extension only looks for newly spawned containers. This applies regardless of whether the skip or patch policy is selected.
+++
+## Modify resource while restoring backups to AKS cluster
+
+You can use the *Resource Modification* feature to modify backed-up Kubernetes resources during restore by specifying *JSON* patches as a `configmap` deployed in the AKS cluster.
+
+### Create and apply a resource modifier configmap during restore
+
+To create and apply resource modification, follow these steps:
+
+1. Create a resource modifier configmap.
+
+   You need to create one configmap in your preferred namespace from a *YAML* file that defines resource modifiers.
+
+   **Example resource modifier YAML file**:
+
+   ```yaml
+ version: v1
+ resourceModifierRules:
+ - conditions:
+ groupResource: persistentvolumeclaims
+ resourceNameRegex: "^mysql.*$"
+ namespaces:
+ - bar
+ - foo
+ labelSelector:
+ matchLabels:
+ foo: bar
+ patches:
+ - operation: replace
+ path: "/spec/storageClassName"
+ value: "premium"
+ - operation: remove
+ path: "/metadata/labels/test"
+ ```
+
+   - The above *configmap* applies the *JSON* patch to all the persistent volume claims in the namespaces *bar* and *foo* with names that start with `mysql` and match the label `foo: bar`. The JSON patch replaces the `storageClassName` with `premium` and removes the label `test` from those persistent volume claims.
+ - Here, the *Namespace* is the original namespace of the backed-up resource, and not the new namespace where the resource is going to be restored.
+   - You can specify multiple JSON patches for a particular resource. The patches are applied in the order specified in the *configmap*. If multiple patches are specified for the same path, the last patch overrides the previous ones.
+   - You can specify multiple `resourceModifierRules` in the *configmap*. The rules are applied in the order specified in the *configmap*.
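+   A hedged `kubectl` sketch for creating the configmap from the YAML file above; the configmap name, file name, and namespace are placeholders:
+
+   ```bash
+   kubectl create configmap resource-modifier-config \
+     --from-file=resourcemodifier.yaml \
+     --namespace backup-config
+   ```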
++
+2. Create a resource modifier reference in the restore configuration.
+
+ When you perform a restore operation, provide the *ConfigMap name* and the *Namespace* where it's deployed as part of restore configuration. These details need to be provided under **Resource Modifier Rules**.
+
+ :::image type="content" source="./media/azure-kubernetes-service-backup-overview/resource-modifier-rules.png" alt-text="Screenshot shows the location to provide resource details." lightbox="./media/azure-kubernetes-service-backup-overview/resource-modifier-rules.png":::
++
+### Operations supported by Resource Modifier
+
+- **Add**
+
+  You can use the **Add** operation to add a new block to the resource JSON. In the example below, the operation adds new container details to the spec of a deployment.
+
+  ```yaml
+ version: v1
+ resourceModifierRules:
+ - conditions:
+ groupResource: deployments.apps
+ resourceNameRegex: "^test-.*$"
+ namespaces:
+ - bar
+ - foo
+ patches:
+ # Dealing with complex values by escaping the yaml
+ - operation: add
+ path: "/spec/template/spec/containers/0"
+ value: "{\"name\": \"nginx\", \"image\": \"nginx:1.14.2\", \"ports\": [{\"containerPort\": 80}]}"
+ ```
+
+
+- **Remove**
+
+  You can use the **Remove** operation to remove a key from the resource JSON. In the example below, the operation removes the label whose key is `test`.
+
+  ```yaml
+ version: v1
+ resourceModifierRules:
+ - conditions:
+ groupResource: persistentvolumeclaims
+ resourceNameRegex: "^mysql.*$"
+ namespaces:
+ - bar
+ - foo
+ labelSelector:
+ matchLabels:
+ foo: bar
+ patches:
+ - operation: remove
+ path: "/metadata/labels/test"
+ ```
+
+- **Replace**
+
+  You can use the **Replace** operation to replace the value at the specified path with an alternate one. In the example below, the operation replaces the `storageClassName` in the persistent volume claim with `premium`.
+
+  ```yaml
+ version: v1
+ resourceModifierRules:
+ - conditions:
+ groupResource: persistentvolumeclaims
+ resourceNameRegex: "^mysql.*$"
+ namespaces:
+ - bar
+ - foo
+ labelSelector:
+ matchLabels:
+ foo: bar
+ patches:
+ - operation: replace
+ path: "/spec/storageClassName"
+ value: "premium"
+ ```
+
+- **Copy**
+
+  You can use the **Copy** operation to copy a value from one path in the resource to another path.
+
+  ```yaml
+ version: v1
+ resourceModifierRules:
+ - conditions:
+ groupResource: deployments.apps
+ resourceNameRegex: "^test-.*$"
+ namespaces:
+ - bar
+ - foo
+ patches:
+ - operation: copy
+ from: "/spec/template/spec/containers/0"
+ path: "/spec/template/spec/containers/1"
+ ```
+
+- **Test**
+
+  You can use the **Test** operation to check whether a particular value is present in the resource. If the value is present, the patch is applied; if not, the patch isn't applied. In the example below, the operation checks whether a persistent volume claim has `premium` as its `storageClassName` and, if so, replaces it with `standard`.
+
+  ```yaml
+ version: v1
+ resourceModifierRules:
+ - conditions:
+ groupResource: persistentvolumeclaims
+ resourceNameRegex: ".*"
+ namespaces:
+ - bar
+ - foo
+ patches:
+ - operation: test
+ path: "/spec/storageClassName"
+ value: "premium"
+ - operation: replace
+ path: "/spec/storageClassName"
+ value: "standard"
+ ```
+
+- **JSON Patch**
+
+  This *configmap* applies the JSON patch to all the deployments in the namespaces `default` and `nginx` with names that start with `nginxdep`. The JSON patch updates the replica count to *12* for all such deployments.
+
+
+  ```yaml
+ version: v1
+ resourceModifierRules:
+ - conditions:
+ groupResource: deployments.apps
+ resourceNameRegex: "^nginxdep.*$"
+ namespaces:
+ - default
+ - nginx
+ patches:
+ - operation: replace
+ path: "/spec/replicas"
+ value: "12"
+ ```
+
+- **JSON Merge Patch**
+
+  This *configmap* applies the JSON merge patch to all the deployments in the namespaces `default` and `nginx` with names starting with `nginxdep`. The merge patch adds or updates the label `app` with the value `nginx1`.
+
+  ```yaml
+ version: v1
+ resourceModifierRules:
+ - conditions:
+ groupResource: deployments.apps
+ resourceNameRegex: "^nginxdep.*$"
+ namespaces:
+ - default
+ - nginx
+ mergePatches:
+ - patchData: |
+ {
+ "metadata" : {
+ "labels" : {
+ "app" : "nginx1"
+ }
+ }
+ }
+ ```
+
+- **Strategic Merge Patch**
+
+  This *configmap* applies the strategic merge patch to all the pods in the namespace `default` with names starting with `nginx`. The strategic merge patch updates the image of the `nginx` container to `mcr.microsoft.com/cbl-mariner/base/nginx:1.22`.
+
+  ```yaml
+ version: v1
+ resourceModifierRules:
+ - conditions:
+ groupResource: pods
+ resourceNameRegex: "^nginx.*$"
+ namespaces:
+ - default
+ strategicPatches:
+ - patchData: |
+ {
+ "spec": {
+ "containers": [
+ {
+ "name": "nginx",
+ "image": "mcr.microsoft.com/cbl-mariner/base/nginx:1.22"
+ }
+ ]
+ }
+ }
+ ```
+ ## Which backup storage tier does AKS backup support? Azure Backup for AKS supports two storage tiers as backup datastores:
backup Azure Kubernetes Service Backup Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-kubernetes-service-backup-troubleshoot.md
These error codes appear due to issues on the Backup Extension installed in the
**Recommended action**: Verify the health of the extension by running the command `kubectl get pods -n dataprotection.microsoft`. If the pods aren't in a running state, increase the number of nodes in the cluster by *1* or increase the compute limits. Then wait a few minutes and run the command again, which should change the state of the pods to *running*. If the issue persists, delete and reinstall the extension.
+### BackupPluginPodRestartedDuringBackupError
+
+**Cause**: The Backup Extension pod (`dataprotection-microsoft-kubernetes-agent`) in your AKS cluster is experiencing instability due to insufficient CPU/memory resources on its current node, leading to OOM (out of memory) kill incidents. This can happen because the backup extension pod requests too little compute.
+
+**Recommended action**: To address this, we recommend increasing the compute values allocated to this pod. By doing so, the pod is automatically provisioned on a different node within your AKS cluster that has ample compute resources available.
+
+The current compute values for this pod are:
+
+- `resources.requests.cpu`: 500m
+- `resources.requests.memory`: 128Mi
+
+Modify the memory allocation to 512Mi by updating the `resources.requests.memory` parameter. If the issue persists after the memory change, also increase the `resources.requests.cpu` parameter to 900m. You can increase the values for these parameters by following the steps below:
+
+1. Navigate to the AKS cluster blade in the Azure portal.
+2. Select **Extensions + applications**, and then select the **azure-aks-backup** extension.
+3. Update the configuration settings in the portal by adding the following key-value pairs:
+   - `resources.requests.cpu`: `900m`
+   - `resources.requests.memory`: `512Mi`
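+If you prefer the CLI over the portal, the same settings can be applied with a hedged `az k8s-extension update` sketch; the cluster name and resource group are placeholders:
+
+```azurecli
+az k8s-extension update \
+  --name azure-aks-backup \
+  --cluster-type managedClusters \
+  --cluster-name <cluster-name> \
+  --resource-group <resource-group> \
+  --configuration-settings resources.requests.cpu=900m resources.requests.memory=512Mi
+```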
+ ### BackupPluginDeleteBackupOperationFailed **Cause**: The Backup extension should be running to delete the backups.
These error codes appear due to issues based on the Backup extension installed i
**Recommended action**: The error appears if the extension identity doesn't have the right permissions to access the storage account. It can occur when the AKS backup extension is installed for the first time and a protection operation is configured immediately, because the granted permissions take time to propagate to the extension. As a workaround, wait an hour and retry the protection configuration. Otherwise, use the Azure portal or CLI to reassign the missing permission on the storage account.
+### UserErrorSnapshotResourceGroupHasLocks
+
+**Cause**: This error code appears when a delete or read lock is applied to the snapshot resource group provided as input for the Backup extension.
+
+**Recommended action**: If you're configuring a new backup instance, use a resource group without a delete or read lock. If the backup instance is already configured, remove the lock from the snapshot resource group.
+ ## Vaulted backup based errors
-This error code can appear while you enable AKS backup to store backups in a vault standard datastore.
+These error codes can appear while you enable AKS backup to store backups in a vault standard datastore.
### DppUserErrorVaultTierPolicyNotSupported
backup Azure Kubernetes Service Cluster Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-kubernetes-service-cluster-backup.md
To configure backups for AKS cluster:
### Backup configurations
-As part of the AKS backup capability, you can back up all cluster resources or specific cluster resources. You can use the filters that are available for backup configuration to choose the resources to back up. The defined backup configurations are referenced by the values for **Backup Instance Name**. You can use the following options to choose the **Namespaces** values to back up:
+Azure Backup for AKS allows you to define the application boundary within the AKS cluster that you want to back up. You can use the filters available within backup configurations to choose the resources to back up and also to run custom hooks. The defined backup configuration is referenced by the value for **Backup Instance Name**. The following filters are available to define your application boundary:
-- **All (including future Namespaces)**: This backs up all current and future values for **Namespaces** when the underlying cluster resources are backed up.-- **Choose from list**: Select the specific values for **Namespaces** in the AKS cluster to back up.
+1. **Select Namespaces to backup**: You can either select **All** to back up all existing and future namespaces in the cluster, or select **Choose from list** to pick specific namespaces for backup.
- To select specific cluster resources to back up, you can use labels that are attached to the resources to include the resources in the backup. Only the resources that have the labels that you enter are backed up. You can use multiple labels.
+ :::image type="content" source="./media/azure-kubernetes-service-cluster-backup/backup-instance-name.png" alt-text="Screenshot that shows how to select namespaces to include in the backup." lightbox="./media/azure-kubernetes-service-cluster-backup/backup-instance-name.png":::
+
+2. Expand **Additional Resource Settings** to see filters that you can use to choose cluster resources to back up. You can choose to back up resources based on the following categories:
+
+ - **Labels**: You can filter AKS resources by using [labels](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/) that you assign to types of resources. Enter labels in the form of key/value pairs. Combine multiple labels by using `AND` logic.
+
+ For example, if you enter the labels `env=prod;tier!=web`, the process selects resources that have a label with the `env` key and the `prod` value, and a label with the `tier` key for which the value isn't `web`.
+
+   - **API groups**: You can also include resources by providing the AKS API group and kind. For example, you can choose AKS resources like Deployments for backup. You can access the list of Kubernetes-defined API groups [here](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.30/).
+
+   - **Other options**: You can enable or disable backup for cluster-scoped resources, persistent volumes, and secrets. By default, cluster-scoped resources and persistent volumes are enabled.
+
+ :::image type="content" source="./media/azure-kubernetes-service-cluster-backup/cluster-scope-resources.png" alt-text="Screenshot that shows the Additional Resource Settings pane." lightbox="./media/azure-kubernetes-service-cluster-backup/cluster-scope-resources.png":::
+
+ > [!NOTE]
+ > All these resource settings are combined and applied via `AND` logic.
> [!NOTE] > You should add the labels to every single YAML file that is deployed and to be backed up. This includes namespace-scoped resources like persistent volume claims, and cluster-scoped resources like persistent volumes.
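For example, here's a minimal sketch of a labeled persistent volume claim manifest; the name, label, and size are illustrative:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pvc
  labels:
    env: prod
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```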
- If you also want to back up cluster-scoped resources, secrets, and persistent volumes, select the items under **Other Options**.
- :::image type="content" source="./media/azure-kubernetes-service-cluster-backup/various-backup-configurations.png" alt-text="Screenshot that shows various backup configurations."::: ## Use hooks during AKS backup
backup Backup Azure Afs Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-afs-automation.md
This article explains how to:
## Set up PowerShell > [!NOTE] > Azure PowerShell currently doesn't support backup policies with an hourly schedule. Use the Azure portal to leverage this feature. [Learn more](manage-afs-backup.md#create-a-new-policy)
backup Backup Azure Restore Key Secret https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-restore-key-secret.md
This article explains how to use Azure VM Backup to restore encrypted Azure VMs if your key and secret don't exist in the key vault. You can also use these steps if you want to maintain a separate copy of the key (Key Encryption Key) and secret (BitLocker Encryption Key) for the restored VM. ## Prerequisites
backup Backup Azure Troubleshoot Slow Backup Performance Issue https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-troubleshoot-slow-backup-performance-issue.md
updates to the Backup agent to fix various issues, add features, and improve per
We also strongly recommend that you review the [Azure Backup service FAQ](backup-azure-backup-faq.yml) to make sure you're not experiencing any of the common configuration issues. ## Cause: Backup job running in unoptimized mode
backup Backup Azure Troubleshoot Vm Backup Fails Snapshot Timeout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-troubleshoot-vm-backup-fails-snapshot-timeout.md
This article provides troubleshooting steps that can help you resolve Azure Backup errors related to communication with the VM agent and extension. ## Step-by-step guide to troubleshoot backup failures
backup Backup Azure Vms Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-vms-automation.md
Review the **Az.RecoveryServices** [cmdlet reference](/powershell/module/az.reco
## Set up and register To begin:
backup Backup Client Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-client-automation.md
This article shows you how to use PowerShell to set up Azure Backup on Windows S
## Install Azure PowerShell To get started, [install the latest PowerShell release](/powershell/azure/install-azure-powershell).
backup Backup Dpm Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-dpm-automation.md
Sample DPM scripts: Get-DPMSampleScript
## Setup and Registration To begin, [download the latest Azure PowerShell](/powershell/azure/install-azure-powershell).
backup Backup Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-support-matrix.md
Other support matrices are available:
- Support matrix for backup by using [System Center Data Protection Manager (DPM)/Microsoft Azure Backup Server (MABS)](backup-support-matrix-mabs-dpm.md) - Support matrix for backup by using the [Microsoft Azure Recovery Services (MARS) agent](backup-support-matrix-mars-agent.md) ## Vault support
backup Disk Backup Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/disk-backup-troubleshoot.md
Recommended Action: Create the resource group and provide the required permissio
Error Message: Could not perform the operation as Managed Disk no longer exists.
-Recommended Action: The backups will continue to fail as the source disk may be deleted or moved to a different location. Use the existing restore point to restore the disk if it's deleted by mistake. If the disk is moved to a different location, configure backup for the disk.
+Recommended Action: The backups are failing because the source disk may have been deleted or moved to a different location. Use the existing restore point to restore the disk if it was deleted by mistake. If the disk was moved to a different location, configure backup for the disk.
+
+### UserErrorSnapshotResourceGroupHasLocks
+
+Error Message: This error code appears when a delete or read lock is applied to the snapshot resource group provided as input for the Backup extension.
+
+Recommended Action: If you're configuring a new backup instance, use a resource group without a delete or read lock. If the backup instance is already configured, remove the lock from the snapshot resource group.
### Error Code: UserErrorNotEnoughPermissionOnDisk Error Message: Azure Backup Service requires additional permissions on the Disk to do this operation.
-Recommended Action: Grant the Backup vault's managed identity the appropriate permissions on the disk. Refer to [the documentation](backup-managed-disks.md) to understand what permissions are required by the Backup Vault managed identity and how to provide it.
+Recommended Action: Grant the Backup vault's managed identity the appropriate permissions on the disk. Refer to [the documentation](backup-managed-disks.md) to understand what permissions need to be assigned to the Backup vault managed identity and how to provide them.
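+For instance, here's a hedged Azure CLI sketch that grants the built-in **Disk Backup Reader** role to the vault's managed identity at disk scope; the principal ID and disk ID are placeholders, and the exact roles to assign are listed in the linked documentation:
+
+```azurecli
+az role assignment create \
+  --assignee-object-id <vault-managed-identity-principal-id> \
+  --assignee-principal-type ServicePrincipal \
+  --role "Disk Backup Reader" \
+  --scope <disk-resource-id>
+```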
### Error Code: UserErrorNotEnoughPermissionOnSnapshotRG Error Message: Azure Backup Service requires additional permissions on the Snapshot Data store Resource Group to do this operation.
-Recommended Action: Grant the Backup vault's managed identity the appropriate permissions on the snapshot data store resource group. The snapshot data store resource group is the location where the disk snapshots are stored. Refer to [the documentation](backup-managed-disks.md) to understand which is the resource group, what permissions are required by the Backup Vault managed identity and how to provide it.
+Recommended Action: Grant the Backup vault's managed identity the appropriate permissions on the snapshot data store resource group. The snapshot data store resource group is the location where the disk snapshots are stored. Refer to [the documentation](backup-managed-disks.md) to understand what permissions are required by the Backup Vault managed identity over the resource group and how to provide them.
### Error Code: UserErrorDiskBackupDiskOrMSIPermissionsNotPresent Error Message: Invalid disk or Azure Backup Service requires additional permissions on the Disk to do this operation
-Recommended Action: The backups will continue to fail as the source disk may be deleted or moved to a different location. Use the existing restore point to restore the disk if it's deleted by mistake. If the disk is moved to a different location, configure backup for the disk. If the disk isn't deleted or moved, grant the Backup vault's managed identity the appropriate permissions on the disk. Refer to [the documentation](backup-managed-disks.md) to understand what permissions are required by the Backup vault's managed identity and how to provide it.
+Recommended Action: The backups are failing because the source disk may have been deleted or moved to a different location. Use the existing restore point to restore the disk if it was deleted by mistake. If the disk was moved to a different location, configure backup for the disk. If the disk isn't deleted or moved, grant the Backup vault's managed identity the appropriate permissions on the disk. Refer to [the documentation](backup-managed-disks.md) to understand what permissions are to be assigned to the Backup vault's managed identity.
### Error Code: UserErrorDiskBackupSnapshotRGOrMSIPermissionsNotPresent Error Message: Could not perform the operation as Snapshot Data store Resource Group no longer exists. Or Azure Backup Service requires additional permissions on the Snapshot Data store Resource Group to do this operation.
-Recommended Action: Create a resource group and grant the Backup vault's managed identity the appropriate permissions on the snapshot data store resource group. The snapshot data store resource group is the location where the disk snapshots are stored. Refer to [the documentation](backup-managed-disks.md) to understand what is the resource group, what permissions are required by the Backup vault's managed identity and how to provide it.
+Recommended Action: Create a resource group and grant the Backup vault's managed identity the appropriate permissions on the snapshot data store resource group. The snapshot data store resource group is the location where the disk snapshots are stored. Refer to [the documentation](backup-managed-disks.md) to understand what permissions are to be assigned to the Backup vault's managed identity over the resource group.
### Error Code: UserErrorDiskBackupAuthorizationFailed Error Message: Backup Vault managed identity is missing the necessary permissions to do this operation.
-Recommended Action: Grant the Backup vault's managed identity the appropriate permissions on the disk to be backed up and on the snapshot data store resource group where the snapshots are stored. Refer to [the documentation](backup-managed-disks.md) to understand what permissions are required by the Backup vault's managed identity and how to provide it.
+Recommended Action: Grant the Backup vault's managed identity the appropriate permissions on the disk to be backed up and on the snapshot data store resource group where the snapshots are stored. Refer to [the documentation](backup-managed-disks.md) to understand what permissions are to be assigned to the Backup vault's managed identity.
### Error Code: UserErrorSnapshotRGOrMSIPermissionsNotPresent Error Message: Could not perform the operation as Snapshot Data store Resource Group no longer exists. Or, Azure Backup Service requires additional permissions on the Snapshot Data store Resource Group to do this operation.
-Recommended Action: Create the resource group and grant the Backup vault's managed identity the appropriate permissions on the snapshot data store resource group. The snapshot data store resource group is the location where the snapshots are stored. Refer to [the documentation](backup-managed-disks.md) to understand what is the resource group, what permissions are required by the Backup vault's managed identity, and how to provide it.
+Recommended Action: Create the resource group and grant the Backup vault's managed identity the appropriate permissions on the snapshot data store resource group. The snapshot data store resource group is the location where the snapshots are stored. Refer to [the documentation](backup-managed-disks.md) to understand what permissions are to be assigned to the Backup vault's managed identity over the resource group.
### Error Code: UserErrorOperationalStoreParametersNotProvided
Recommended Action: Provide a valid resource group to restore. For more informat
Error Message: Azure Backup Service requires additional permissions on the Target Resource Group to do this operation.
-Recommended Action: Grant the Backup vault's managed identity the appropriate permissions on the target resource group. The target resource group is the selected location where the disk is to be restored. Refer to the [restore documentation](restore-managed-disks.md) to understand what permissions are required by the Backup vault's managed identity, and how to provide it.
+Recommended Action: Grant the Backup vault's managed identity the appropriate permissions on the target resource group. The target resource group is the selected location where the disk is to be restored. Refer to the [restore documentation](restore-managed-disks.md) to understand what permissions are to be assigned to the Backup vault's managed identity.
### Error Code: UserErrorSubscriptionDiskQuotaLimitReached
-Error Message: Operation has failed as the Disk quota maximum limit has been reached on the subscription.
+Error Message: Operation failed as the maximum limit for disk quota is reached for the subscription.
Recommended Action: Refer to the [Azure subscription and service limits and quota documentation](../azure-resource-manager/management/azure-subscription-service-limits.md) or contact Microsoft Support for further guidance. ### Error Code: UserErrorDiskBackupRestoreRGOrMSIPermissionsNotPresent
-Error Message: Operation failed as the Target Resource Group does not exist. Or Azure Backup Service requires additional permissions on the Target Resource Group to do this operation.
+Error Message: Operation failed as the Target Resource Group doesn't exist. Or Azure Backup Service requires additional permissions on the Target Resource Group to do this operation.
-Recommended Action: Provide a valid resource group to restore, and grant the Backup vault's managed identity the appropriate permissions on the target resource group. The target resource group is the selected location where the disk is to be restored. Refer to the [restore documentation](restore-managed-disks.md) to understand what permissions are required by the Backup vault's managed identity, and how to provide it.
+Recommended Action: Provide a valid resource group to restore, and grant the Backup vault's managed identity the appropriate permissions on the target resource group. The target resource group is the selected location where the disk is to be restored. Refer to the [restore documentation](restore-managed-disks.md) to understand what permissions are required to be assigned to the Backup vault's managed identity.
### Error Code: UserErrorDESKeyVaultKeyDisabled
Recommended Action: Ensure that the key vault key used for disk encryption set i
### Error Code: UserErrorDiskSnapshotNotFound
-Error Message: The disk snapshot for this Restore point has been deleted.
+Error Message: The disk snapshot for this Restore point is not accessible.
-Recommended Action: Snapshots are stored in the snapshot data store resource group within your subscription. It's possible that the snapshot related to the selected restore point might have been deleted or moved from this resource group. Consider using another Recovery point to restore. Also, follow the recommended guidelines for choosing Snapshot resource group mentioned in the [restore documentation](restore-managed-disks.md).
+Recommended Action: Snapshots are stored in the snapshot data store resource group within your subscription. The snapshot related to the selected restore point was either deleted or moved from this resource group. Consider using another recovery point to restore. Also, follow the recommended guidelines for choosing the snapshot resource group mentioned in the [restore documentation](restore-managed-disks.md).
### Error Code: UserErrorSnapshotMetadataNotFound
-Error Message: The disk snapshot metadata for this Restore point has been deleted
+Error Message: The disk snapshot metadata for this Restore point is deleted.
Recommended Action: Consider using another recovery point to restore. For more information, see the [restore documentation](restore-managed-disks.md).
Recommended Action: Consider using another recovery point to restore. For more i
Error Message: Disk Backup is not yet available in the region of the Backup Vault under which Configure Protection is being tried.
-Recommended Action: Backup Vault must be in a supported region. For region availability see the [the support matrix](disk-backup-support-matrix.md).
+Recommended Action: Backup Vault must be in a supported region. For region availability, see the [support matrix](disk-backup-support-matrix.md).
### Error Code: UserErrorDppDatasourceAlreadyHasBackupInstance
-Error Message: The disk you are trying to configure backup is already being protected. Disk is already associated with a backup instance in a Backup vault.
+Error Message: The disk you're trying to configure backup for is already protected. Disk is already associated with a backup instance in a Backup vault.
-Recommended Action: This disk is already associated with a backup instance in a Backup vault. If you want to re-protect this disk, then delete the backup instance from the Backup vault where it's currently protected and re-protect the disk in any other vault.
+Recommended Action: This disk is already associated with a backup instance in a Backup vault. If you want to reprotect this disk, delete the backup instance from the Backup vault where it's currently protected, and then reprotect the disk in any other vault.
### Error Code: UserErrorDppDatasourceAlreadyProtected
-Error Message: The disk you are trying to configure backup is already being protected. Disk is already associated with a backup instance in a Backup vault.
+Error Message: The disk you're trying to configure backup for is already protected. Disk is already associated with a backup instance in a Backup vault.
-Recommended Action: This disk is already associated with a backup instance in a Backup vault. If you want to re-protect this disk, then delete the backup instance from the Backup vault where it is currently protected and re-protect the disk in any other vault.
+Recommended Action: This disk is already associated with a backup instance in a Backup vault. If you want to reprotect this disk, delete the backup instance from the Backup vault where it's currently protected, and then reprotect the disk in any other vault.
### Error Code: UserErrorMaxConcurrentOperationLimitReached
-Error Message: Unable to start the operation as maximum number of allowed concurrent backups has reached.
+Error Message: Unable to start the operation as the maximum number of allowed concurrent backups has been reached.
Recommended Action: Wait until the previous running backup completes. ### Error Code: UserErrorMissingSubscriptionRegistration
-Error Message: The subscription is not registered to use namespace 'Microsoft.Compute'.
+Error Message: The subscription isn't registered to use namespace 'Microsoft.Compute'.
-Recommended Action: The required resource provider hasn't been registered for your subscription. Register both the resource providers' namespace (_Microsoft.Compute_ and _Microsoft.Storage_) using the steps in [Solution 3](../azure-resource-manager/templates/error-register-resource-provider.md#solution-3azure-portal).
+Recommended Action: The required resource provider isn't registered for your subscription. Register both resource provider namespaces (_Microsoft.Compute_ and _Microsoft.Storage_) using the steps in [Solution 3](../azure-resource-manager/templates/error-register-resource-provider.md#solution-3azure-portal).
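If you prefer the CLI to the portal route in Solution 3, registration can be done along these lines:

```azurecli
# Register both required resource provider namespaces on the subscription.
az provider register --namespace Microsoft.Compute
az provider register --namespace Microsoft.Storage

# Registration is asynchronous; poll until the state reads "Registered".
az provider show --namespace Microsoft.Compute --query registrationState --output tsv
```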
## Next steps
[Azure Disk Backup support matrix](disk-backup-support-matrix.md)
backup Quick Backup Vm Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/quick-backup-vm-powershell.md
This quickstart enables backup on an existing Azure VM. If you need to create a
This quickstart requires the Azure PowerShell AZ module version 1.0.0 or later. Run `Get-Module -ListAvailable Az` to find the version. If you need to install or upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). ## Sign in and register
backup Quick Backup Vm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/quick-backup-vm-template.md
[Azure Backup](backup-overview.md) backs up on-premises machines and apps, and Azure VMs. This article shows you how to back up an Azure VM with an Azure Resource Manager template (ARM template) and Azure PowerShell. This quickstart focuses on the process of deploying an ARM template to create a Recovery Services vault. For more information on developing ARM templates, see the [Azure Resource Manager documentation](../azure-resource-manager/index.yml) and the [template reference](/azure/templates/microsoft.recoveryservices/allversions). A [Recovery Services vault](backup-azure-recovery-services-vault-overview.md) is a logical container that stores backup data for protected resources, such as Azure VMs. When a backup job runs, it creates a recovery point inside the Recovery Services vault. You can then use one of these recovery points to restore data to a given point in time. Alternatively, you can back up a VM using [Azure PowerShell](./quick-backup-vm-powershell.md), the [Azure CLI](quick-backup-vm-cli.md), or in the [Azure portal](quick-backup-vm-portal.md).
backup Backup Powershell Sample Backup Encrypted Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/scripts/backup-powershell-sample-backup-encrypted-vm.md
This script creates a Recovery Services vault with geo-redundant storage (GRS) for an encrypted Azure virtual machine. The default protection policy is applied to the vault. The policy generates a daily backup for the virtual machine, and retains each backup for 365 days. The script also triggers the initial recovery point for the virtual machine and retains that recovery point for 30 days. ## Sample script [!code-powershell[main](../../../powershell_scripts/backup/backup-encrypted-vm/backup-encrypted-vm.ps1 "Back up encrypted virtual machine")]
backup Tutorial Backup Azure Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/tutorial-backup-azure-vm.md
# Back up Azure VMs with PowerShell This tutorial describes how to deploy an [Azure Backup](backup-overview.md) Recovery Services vault to back up multiple Azure VMs using PowerShell.
bastion Bastion Create Host Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-create-host-powershell.md
You can use the following example values when creating this configuration, or yo
This section helps you create a virtual network, subnets, and deploy Azure Bastion using Azure PowerShell. > [!IMPORTANT]
-> [!INCLUDE [Pricing](../../includes/bastion-pricing.md)]
+> [!INCLUDE [Pricing](~/reusable-content/ce-skilling/azure/includes/bastion-pricing.md)]
> 1. Create a resource group, a virtual network, and a front end subnet to which you deploy the VMs that you'll connect to via Bastion. If you're running PowerShell locally, open your PowerShell console with elevated privileges and connect to Azure using the `Connect-AzAccount` command.
bastion Bastion Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-faq.md
No, Bastion connectivity to Azure Virtual Desktop isn't supported.
Review any error messages and [raise a support request in the Azure portal](../azure-portal/supportability/how-to-create-azure-support-request.md) as needed. Deployment failures can result from [Azure subscription limits, quotas, and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md). Specifically, customers might hit the limit on the number of public IP addresses allowed per subscription, which causes the Azure Bastion deployment to fail.
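One quick way to check whether you're near that limit (the region here is only an example) is to list network usage for the region:

```azurecli
# Show regional network resource usage, including public IP addresses, against limits.
az network list-usages --location eastus --output table
```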
-### <a name="dr"></a>How do I incorporate Azure Bastion in my Disaster Recovery plan?
-
-Azure Bastion is deployed within virtual networks or peered virtual networks, and is associated to an Azure region. You're responsible for deploying Azure Bastion to a Disaster Recovery (DR) site virtual network. If there's an Azure region failure, perform a failover operation for your VMs to the DR region. Then, use the Azure Bastion host that's deployed in the DR region to connect to the VMs that are now deployed there.
- ### <a name="move-virtual-network"></a>Does Bastion support moving a VNet to another resource group? No. If you move your virtual network to another resource group (even if it's in the same subscription), you'll need to first delete Bastion from virtual network, and then proceed to move the virtual network to the new resource group. Once the virtual network is in the new resource group, you can deploy Bastion to the virtual network.
-### <a name="zone-redundant"></a>Does Bastion support zone redundancies?
-Currently, by default, new Bastion deployments don't support zone redundancies. Previously deployed bastions might or might not be zone-redundant. The exceptions are Bastion deployments in Korea Central and Southeast Asia, which do support zone redundancies.
### <a name="azure-ad-guests"></a>Does Bastion support Microsoft Entra guest accounts?
bastion Create Host Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/create-host-cli.md
Verify that you have an Azure subscription. If you don't already have an Azure s
This section helps you deploy Azure Bastion using Azure CLI. > [!IMPORTANT]
-> [!INCLUDE [Pricing](../../includes/bastion-pricing.md)]
+> [!INCLUDE [Pricing](~/reusable-content/ce-skilling/azure/includes/bastion-pricing.md)]
> 1. If you don't already have a virtual network, create a resource group and a virtual network using [az group create](/cli/azure/group#az-group-create) and [az network vnet create](/cli/azure/network/vnet#az-network-vnet-create).
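For example (all names and address ranges below are placeholders; Bastion requires a subnet named AzureBastionSubnet of at least /26 and a Standard SKU public IP):

```azurecli
# Create a resource group and a virtual network with the dedicated AzureBastionSubnet.
az group create --name MyBastionRG --location eastus

az network vnet create \
    --resource-group MyBastionRG \
    --name MyVNet \
    --address-prefixes 10.0.0.0/16 \
    --subnet-name AzureBastionSubnet \
    --subnet-prefixes 10.0.1.0/26

# Bastion requires a Standard SKU public IP address.
az network public-ip create \
    --resource-group MyBastionRG \
    --name MyBastionIP \
    --sku Standard

# Deploy Bastion (requires the bastion CLI extension; this can take several minutes).
az network bastion create \
    --resource-group MyBastionRG \
    --name MyBastion \
    --vnet-name MyVNet \
    --public-ip-address MyBastionIP \
    --location eastus
```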
bastion Native Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/native-client.md
This article helps you configure your Bastion deployment to accept connections f
You can configure this feature by modifying an existing Bastion deployment, or you can deploy Bastion with the feature configuration already specified. Your capabilities on the VM when connecting via native client are dependent on what is enabled on the native client. >[!NOTE]
->[!INCLUDE [Pricing](../../includes/bastion-pricing.md)]
+>[!INCLUDE [Pricing](~/reusable-content/ce-skilling/azure/includes/bastion-pricing.md)]
## Deploy Bastion with the native client feature
bastion Private Only Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/private-only-deployment.md
You can use the following example values when creating this configuration, or yo
This section helps you deploy Bastion as private-only to your virtual network. > [!IMPORTANT]
-> [!INCLUDE [Pricing](../../includes/bastion-pricing.md)]
+> [!INCLUDE [Pricing](~/reusable-content/ce-skilling/azure/includes/bastion-pricing.md)]
1. Sign in to the [Azure portal](https://portal.azure.com) and go to your virtual network. If you don't already have one, you can [create a virtual network](../virtual-network/quick-create-portal.md). If you're creating a virtual network for this exercise, you can create the AzureBastionSubnet (from the next step) at the same time you create your virtual network.
bastion Quickstart Host Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/quickstart-host-arm-template.md
By default, this template creates a Bastion deployment with a resource group, a
## Deploy the template > [!IMPORTANT]
-> [!INCLUDE [Pricing](../../includes/bastion-pricing.md)]
+> [!INCLUDE [Pricing](~/reusable-content/ce-skilling/azure/includes/bastion-pricing.md)]
In this section, you deploy Bastion by using the Azure portal. You don't connect and sign in to your virtual machine or deploy Bastion directly from your VM.
bastion Quickstart Host Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/quickstart-host-portal.md
The steps in this article help you:
* Remove your VM's public IP address if you don't need it for anything else. > [!IMPORTANT]
-> [!INCLUDE [Pricing](../../includes/bastion-pricing.md)]
+> [!INCLUDE [Pricing](~/reusable-content/ce-skilling/azure/includes/bastion-pricing.md)]
## <a name="prereq"></a>Prerequisites
bastion Tutorial Create Host Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/tutorial-create-host-portal.md
You can use the following example values when creating this configuration, or yo
This section helps you deploy Bastion to your virtual network. After Bastion is deployed, you can connect securely to any VM in the virtual network using its private IP address. > [!IMPORTANT]
-> [!INCLUDE [Pricing](../../includes/bastion-pricing.md)]
+> [!INCLUDE [Pricing](~/reusable-content/ce-skilling/azure/includes/bastion-pricing.md)]
1. Sign in to the [Azure portal](https://portal.azure.com).
bastion Upgrade Sku https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/upgrade-sku.md
This article helps you view and upgrade your Bastion SKU. Once you upgrade, you can't revert to a lower SKU without deleting and reconfiguring Bastion. For more information about features and SKUs, see [Configuration settings](configuration-settings.md). ## View a SKU
batch Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/accounts.md
Title: Batch accounts and Azure Storage accounts description: Learn about Azure Batch accounts and how they're used from a development standpoint. Previously updated : 04/04/2024 Last updated : 06/25/2024 # Batch accounts and Azure Storage accounts
An Azure Batch account is a uniquely identified entity within the Batch service.
## Batch accounts
-All processing and resources are associated with a Batch account. When your application makes a request against the Batch service, it authenticates the request using the Azure Batch account name and the account URL. Additionally, it can use either an access key or a Microsoft Entra token.
+All processing and resources, such as tasks, jobs, and pools, are associated with a Batch account. When your application makes a request against the Batch service, it authenticates the request using the Azure Batch account name and the account URL. Additionally, it can use either an access key or a Microsoft Entra token.
You can run multiple Batch workloads in a single Batch account. You can also distribute your workloads among Batch accounts that are in the same subscription but located in different Azure regions.
batch Batch Aad Auth https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-aad-auth.md
Title: Authenticate Azure Batch services with Microsoft Entra ID description: Learn how to authenticate Azure Batch service applications with Microsoft Entra ID by using integrated authentication or a service principal. Previously updated : 04/03/2023 Last updated : 06/25/2024
Azure Batch supports authentication with [Microsoft Entra ID](/azure/active-dire
This article describes two ways to use Microsoft Entra authentication with Azure Batch: -- **Integrated authentication** authenticates a user who's interacting with an application. The application gathers a user's credentials and uses those credentials to authorize access to Batch resources.
+- **Integrated authentication** authenticates a user who's interacting with an application. The application gathers a user's credentials and uses those credentials to authenticate access to Batch resources.
- A **service principal** authenticates an unattended application. The service principal defines the policy and permissions for the application and represents the application to access Batch resources at runtime.
batch Batch Apis Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-apis-tools.md
Title: APIs and tools for developers description: Learn about the APIs and tools available for developing solutions with the Azure Batch service. Previously updated : 06/13/2024 Last updated : 06/26/2024
For example, the [Batch service API to delete a pool](/rest/api/batchservice/poo
Whereas the [Batch management API to delete a pool](/rest/api/batchmanagement/pool/delete) is targeted at the management.azure.com layer: `DELETE https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Batch/batchAccounts/{accountName}/pools/{poolName}`
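For comparison, the Azure CLI issues the service (data-plane) call once you've authenticated against the account; a minimal sketch with placeholder names:

```azurecli
# Authenticate the CLI against the Batch account (data plane).
az batch account login --resource-group myResourceGroup --name mybatchaccount

# Delete a pool through the Batch service API layer.
az batch pool delete --pool-id mypool
```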
-## Batch service APIs
+## Batch Service APIs
Your applications and services can issue direct REST API calls or use one or more of the following client libraries to run and manage your Azure Batch workloads.
The Azure Resource Manager APIs for Batch provide programmatic access to Batch a
| API | API reference | Download | Tutorial | Code samples | | | | | | | | **Batch Management REST** |[Azure REST API - Docs](/rest/api/batchmanagement/) |- |- |[GitHub](https://github.com/Azure-Samples/batch-dotnet-manage-batch-accounts) |
-| **Batch Management .NET** |[Azure SDK for .NET - Docs](/dotnet/api/overview/azure/batch/management/management-batch(deprecated)) |[NuGet](https://www.nuget.org/packages/Microsoft.Azure.Management.Batch/) | [Tutorial](batch-management-dotnet.md) |[GitHub](https://github.com/Azure-Samples/azure-batch-samples/tree/master/CSharp) |
+| **Batch Management .NET** |[Azure SDK for .NET - Docs](/dotnet/api/overview/azure/resourcemanager.batch-readme) |[NuGet](https://www.nuget.org/packages/Azure.ResourceManager.Batch/) | [Tutorial](batch-management-dotnet.md) |[GitHub](https://aka.ms/azuresdk-net-mgmt-samples) |
| **Batch Management Python** |[Azure SDK for Python - Docs](/samples/azure-samples/azure-samples-python-management/batch/) |[PyPI](https://pypi.org/project/azure-mgmt-batch/) |- |- | | **Batch Management JavaScript** |[Azure SDK for JavaScript - Docs](/javascript/api/overview/azure/arm-batch-readme) |[npm](https://www.npmjs.com/package/@azure/arm-batch) |- |- | | **Batch Management Java** |[Azure SDK for Java - Docs](/java/api/overview/azure/batch/management) |[Maven](https://search.maven.org/search?q=a:azure-batch) |- |- |
batch Batch Sig Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-sig-images.md
Title: Use the Azure Compute Gallery to create a custom image pool description: Custom image pools are an efficient way to configure compute nodes to run your Batch workloads. Previously updated : 03/20/2024 Last updated : 06/25/2024 ms.devlang: csharp # ms.devlang: csharp, python
Using a Shared Image configured for your scenario can provide several advantages
- **an Azure Compute Gallery image**. To create a Shared Image, you need to have or create a managed image resource. The image should be created from snapshots of the VM's OS disk and optionally its attached data disks. > [!NOTE]
-> If the Shared Image is not in the same subscription as the Batch account, you must [register the Microsoft.Batch resource provider](../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider) for that subscription. The two subscriptions must be in the same Microsoft Entra tenant.
+> If the Shared Image is not in the same subscription as the Batch account, you must [register the Microsoft.Batch resource provider](../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider) for the subscription that uses the Shared Image. The two subscriptions must be in the same Microsoft Entra tenant.
> > The image can be in a different region as long as it has replicas in the same region as your Batch account.
batch Managed Identity Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/managed-identity-pools.md
Title: Configure managed identities in Batch pools description: Learn how to enable user-assigned managed identities on Batch pools and how to use managed identities within the nodes. Previously updated : 06/18/2024 Last updated : 06/25/2024 ms.devlang: csharp
batch Nodes And Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/nodes-and-pools.md
Title: Nodes and pools in Azure Batch description: Learn about compute nodes and pools and how they are used in an Azure Batch workflow from a development standpoint. Previously updated : 06/13/2024 Last updated : 06/25/2024 # Nodes and pools in Azure Batch
-In an Azure Batch workflow, a *compute node* (or *node*) is a virtual machine that processes a portion of your application's workload. A *pool* is a collection of these nodes for your application to runs on. This article explains more about nodes and pools, along with considerations when creating and using them in an Azure Batch workflow.
+In an Azure Batch workflow, a *compute node* (or *node*) is a virtual machine that processes a portion of your application's workload. A *pool* is a collection of these nodes for your application to run on. This article explains more about nodes and pools, along with considerations when creating and using them in an Azure Batch workflow.
## Nodes
The pool can be created manually, or [automatically by the Batch service](#autop
- [Operating system and version](#operating-system-and-version) - [Configurations](#configurations) - [Virtual Machine Configuration](#virtual-machine-configuration)
- - [Cloud Services Configuration](#cloud-services-configuration)
- [Node Agent SKUs](#node-agent-skus) - [Custom images for Virtual Machine pools](#custom-images-for-virtual-machine-pools) - [Container support in Virtual Machine pools](#container-support-in-virtual-machine-pools)
When you create a Batch pool, you specify the Azure virtual machine configuratio
## Configurations
-There are two types of pool configurations available in Batch.
-
-> [!IMPORTANT]
-> While you can currently create pools using either configuration, new pools should be configured using Virtual Machine Configuration and not Cloud Services Configuration. All current and new Batch features will be supported by Virtual Machine Configuration pools. Cloud Services Configuration pools do not support all features and no new capabilities are planned. You won't be able to create new 'CloudServiceConfiguration' pools or add new nodes to existing pools [after February 29, 2024](https://azure.microsoft.com/updates/azure-batch-cloudserviceconfiguration-pools-will-be-retired-on-29-february-2024/).
- ### Virtual Machine Configuration The **Virtual Machine Configuration** specifies that the pool is composed of Azure virtual machines. These VMs may be created from either Linux or Windows images.
The **Virtual Machine Configuration** specifies that the pool is composed of Azu
The [Batch node agent](https://github.com/Azure/Batch/blob/master/changelogs/nodeagent/CHANGELOG.md) is a program that runs on each node in the pool and provides the command-and-control interface between the node and the Batch service. There are different implementations of the node agent, known as SKUs, for different operating systems. When you create a pool based on the Virtual Machine Configuration, you must specify not only the size of the nodes and the source of the images used to create them, but also the **virtual machine image reference** and the Batch **node agent SKU** to be installed on the nodes. For more information about specifying these pool properties, see [Provision Linux compute nodes in Azure Batch pools](batch-linux-nodes.md). You can optionally attach one or more empty data disks to pool VMs created from Marketplace images, or include data disks in custom images used to create the VMs. When including data disks, you need to mount and format the disks from within a VM to use them.
-### Cloud Services Configuration
-
-> [!WARNING]
-> Cloud Services Configuration pools are [deprecated](https://azure.microsoft.com/updates/azure-batch-cloudserviceconfiguration-pools-will-be-retired-on-29-february-2024/). Please use Virtual Machine Configuration pools instead. For more information, see [Migrate Batch pool configuration from Cloud Services to Virtual Machine](batch-pool-cloud-service-to-virtual-machine-configuration.md).
-
-The **Cloud Services Configuration** specifies that the pool is composed of Azure Cloud Services nodes. Cloud Services provides only Windows compute nodes.
-
-Available operating systems for Cloud Services Configuration pools are listed in the [Azure Guest OS releases and SDK compatibility matrix](../cloud-services/cloud-services-guestos-update-matrix.md), and available compute node sizes are listed in [Sizes for Cloud Services](../cloud-services/cloud-services-sizes-specs.md). When you create a pool that contains Cloud Services nodes, you specify the node size and its *OS Family* (which determines which versions of .NET are installed with the OS). Cloud Services is deployed to Azure more quickly than virtual machines running Windows. If you want pools of Windows compute nodes, you may find that Cloud Services provide a performance benefit in terms of deployment time.
-
-As with worker roles within Cloud Services, you can specify an *OS Version*. We recommend that you specify `Latest (*)` for the *OS Version* so that the nodes are automatically upgraded, and there is no work required to cater to newly released versions. The primary use case for selecting a specific OS version is to ensure application compatibility, which allows backward compatibility testing to be performed before allowing the version to be updated. After validation, the *OS Version* for the pool can be updated and the new OS image can be installed. Any running tasks will be interrupted and requeued.
- ### Node Agent SKUs When you create a pool, you need to select the appropriate **nodeAgentSkuId**, depending on the OS of the base image of your VHD. You can get a mapping of available node agent SKU IDs to their OS Image references by calling the [List Supported Node Agent SKUs](/rest/api/batchservice/list-supported-node-agent-skus) operation.
batch Quick Create Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/quick-create-bicep.md
Get started with Azure Batch by using a Bicep file to create a Batch account, in
After completing this quickstart, you'll understand the key concepts of the Batch service and be ready to try Batch with more realistic workloads at larger scale. ## Prerequisites You must have an active Azure subscription. -- [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+- [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
## Review the Bicep file
batch Quick Create Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/quick-create-cli.md
After you complete this quickstart, you understand the [key concepts of the Batc
## Prerequisites -- [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+- [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
- Azure Cloud Shell or Azure CLI.
batch Quick Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/quick-create-portal.md
After you complete this quickstart, you understand the [key concepts of the Batc
## Prerequisites -- [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+- [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
>[!NOTE] >For some regions and subscription types, quota restrictions might cause Batch account or node creation to fail or not complete. In this situation, you can request a quota increase at no charge. For more information, see [Batch service quotas and limits](batch-quota-limit.md).
batch Quick Create Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/quick-create-template.md
Get started with Azure Batch by using an Azure Resource Manager template (ARM te
After completing this quickstart, you'll understand the key concepts of the Batch service and be ready to try Batch with more realistic workloads at larger scale. If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
If your environment meets the prerequisites and you're familiar with using ARM t
You must have an active Azure subscription. -- [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+- [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
## Review the template
batch Batch Cli Sample Add Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/scripts/batch-cli-sample-add-application.md
keywords: batch, azure cli samples, azure cli code samples, azure cli script sam
This script demonstrates how to add an application for use with an Azure Batch pool or task. To set up an application to add to your Batch account, package your executable, together with any dependencies, into a zip file. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)] ## Sample script ### Create batch account and new application
az batch application set \
## Clean up resources ```azurecli az group delete --name $resourceGroup
batch Batch Cli Sample Create Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/scripts/batch-cli-sample-create-account.md
keywords: batch, azure cli samples, azure cli code samples, azure cli script sam
This script creates an Azure Batch account in Batch service mode and shows how to query or update various properties of the account. When you create a Batch account in the default Batch service mode, its compute nodes are assigned internally by the Batch service. Allocated compute nodes are subject to a separate vCPU (core) quota and the account can be authenticated either via shared key credentials or a Microsoft Entra token. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)] ## Sample script ### Run the script
service. Allocated compute nodes are subject to a separate vCPU (core) quota and
## Clean up resources ```azurecli az group delete --name $resourceGroup
batch Batch Cli Sample Create User Subscription Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/scripts/batch-cli-sample-create-user-subscription-account.md
keywords: batch, azure cli samples, azure cli examples, azure cli code samples
This script creates an Azure Batch account in user subscription mode. An account that allocates compute nodes into your subscription must be authenticated via a Microsoft Entra token. The compute nodes allocated count toward your subscription's vCPU (core) quota. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)] ## Sample script ### Run the script
This script creates an Azure Batch account in user subscription mode. An account
## Clean up resources ```azurecli az group delete --name $resourceGroup
batch Batch Cli Sample Manage Linux Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/scripts/batch-cli-sample-manage-linux-pool.md
keywords: linux, azure cli samples, azure cli code samples, azure cli script sam
This script demonstrates some of the commands available in the Azure CLI to create and manage a pool of Linux compute nodes in Azure Batch. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)] ## Sample script ### To create a Linux pool in Azure Batch
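A representative pool-creation command of the kind this script runs (the VM size, image alias, and node agent SKU are illustrative; check the currently supported values):

```azurecli
# Assumes a prior `az batch account login`.
# Create a pool of two Ubuntu 22.04 nodes.
az batch pool create \
    --id mylinuxpool \
    --vm-size Standard_DS1_v2 \
    --target-dedicated-nodes 2 \
    --image canonical:0001-com-ubuntu-server-jammy:22_04-lts \
    --node-agent-sku-id "batch.node.ubuntu 22.04"
```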
az batch node delete \
## Clean up resources ```azurecli az group delete --name $resourceGroup
batch Batch Cli Sample Manage Windows Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/scripts/batch-cli-sample-manage-windows-pool.md
keywords: windows pool, azure cli samples, azure cli code samples, azure cli scr
This script demonstrates some of the commands available in the Azure CLI to create and manage a pool of Windows compute nodes in Azure Batch. A Windows pool can be configured in two ways, with either a Cloud Services configuration or a Virtual Machine configuration. This example shows how to create a Windows pool with the Cloud Services configuration. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)] ## Sample script ### Run the script
manage a pool of Windows compute nodes in Azure Batch. A Windows pool can be con
## Clean up resources ```azurecli az group delete --name $resourceGroup
batch Batch Cli Sample Run Job https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/scripts/batch-cli-sample-run-job.md
keywords: batch, batch job, monitor job, azure cli samples, azure cli code sampl
This script creates a Batch job and adds a series of tasks to the job. It also demonstrates how to monitor a job and its tasks. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)] ## Sample script ### Create a Batch account in Batch service mode
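The job-and-task flow the script demonstrates looks roughly like this (IDs and the command line are placeholders, and a pool is assumed to already exist):

```azurecli
# Create a job on an existing pool, add a task, then poll its state.
az batch job create --id myjob --pool-id mylinuxpool

az batch task create \
    --job-id myjob \
    --task-id task1 \
    --command-line "/bin/bash -c 'echo hello from Batch'"

az batch task show --job-id myjob --task-id task1 --query state
```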
az batch task show \
## Clean up resources ```azurecli az group delete --name $resourceGroup
batch Tutorial Parallel Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/tutorial-parallel-dotnet.md
Use Azure Batch to run large-scale parallel and high-performance computing (HPC)
In this tutorial, you convert MP4 media files to MP3 format, in parallel, by using the [ffmpeg](https://ffmpeg.org) open-source tool. ## Prerequisites
batch Tutorial Parallel Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/tutorial-parallel-python.md
Use Azure Batch to run large-scale parallel and high-performance computing (HPC)
In this tutorial, you convert MP4 media files to MP3 format, in parallel, by using the [ffmpeg](https://ffmpeg.org/) open-source tool. ## Prerequisites
cdn Cdn Add To Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-add-to-web-app.md
To complete this tutorial:
- [Install Git](https://git-scm.com/) - [Install the Azure CLI](/cli/azure/install-azure-cli) ## Create the web app
cdn Cdn Azure Diagnostic Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-azure-diagnostic-logs.md
To use an event hub for the logs, follow these steps:
The following example shows how to enable diagnostic logs via the Azure PowerShell Cmdlets. ### Enable diagnostic logs in a storage account
cdn Cdn Caching Rules Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-caching-rules-tutorial.md
In this tutorial, you learn how to:
> - Create a global caching rule. > - Create a custom caching rule. ## Prerequisites
cdn Cdn Create Endpoint How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-create-endpoint-how-to.md
Sign in to the [Azure portal](https://portal.azure.com) with your Azure account.
4. For **Origin type**, choose one of the following origin types: - **Storage** for Azure Storage
+ - **Storage static website** for Azure Storage static websites
- **Cloud service** for Azure Cloud Services - **Web App** for Azure Web Apps - **Custom origin** for any other publicly accessible origin web server (hosted in Azure or elsewhere)
cdn Cdn Custom Ssl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-custom-ssl.md
In this tutorial, you learn how to:
## Prerequisites Before you can complete the steps in this tutorial, create a CDN profile and at least one CDN endpoint. For more information, see [Quickstart: Create an Azure CDN profile and endpoint](cdn-create-new-endpoint.md).
cdn Cdn Manage Expiration Of Blob Content https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-manage-expiration-of-blob-content.md
The preferred method for setting a blob's `Cache-Control` header is to use cachi
## Setting Cache-Control headers by using Azure PowerShell [Azure PowerShell](/powershell/azure/) is one of the quickest and most powerful ways to administer your Azure services. Use the `Get-AzStorageBlob` cmdlet to get a reference to the blob, then set the `.ICloudBlob.Properties.CacheControl` property.
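If you work in Azure CLI instead of the PowerShell cmdlets this section describes, a comparable update might look like this (account, container, and blob names are placeholders):

```azurecli
# Set a one-hour Cache-Control header on a single blob.
az storage blob update \
    --account-name mystorageaccount \
    --container-name mycontainer \
    --name myblob.jpg \
    --content-cache-control "public, max-age=3600"
```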
cdn Cdn Manage Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-manage-powershell.md
PowerShell provides one of the most flexible methods to manage your Azure Conten
## Prerequisites To use PowerShell to manage your Azure Content Delivery Network profiles and endpoints, you must have the Azure PowerShell module installed. To learn how to install Azure PowerShell and connect to Azure using the `Connect-AzAccount` cmdlet, see [How to install and configure Azure PowerShell](/powershell/azure/).
cdn Cdn Map Content To Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-map-content-to-custom-domain.md
In this tutorial, you learn how to:
> - Add a custom domain with your content delivery network endpoint. > - Verify the custom domain. ## Prerequisites
cdn Cdn Storage Custom Domain Https https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-storage-custom-domain-https.md
Previously updated : 03/20/2024 Last updated : 06/26/2024
In the above rule, leaving Hostname, Path, Query string, and Fragment results in
![Edgio redirect rule](./media/cdn-storage-custom-domain-https/cdn-url-redirect-rule.png)
-In the above rule, *Cdn-endpoint-name* refers to the name that you configured for your content delivery network endpoint, which you can select from the dropdown list. The value for *origin-path* refers to the path within your origin storage account where your static content resides. If you're hosting all static content in a single container, replace *origin-path* with the name of that container.
+In the above rule, *Cdn-endpoint-name* refers to the name that you configured for your content delivery network endpoint. The value for *origin-path* refers to the path within your origin storage account where your static content resides. If you're hosting all static content in a single container, replace *origin-path* with the name of that container.
## Pricing and billing
cdn Create Profile Endpoint Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/create-profile-endpoint-bicep.md
Get started with Azure Content Delivery Network by using a Bicep file. The Bicep file deploys a profile and an endpoint. ## Prerequisites ## Review the Bicep file
cdn Create Profile Endpoint Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/create-profile-endpoint-template.md
Get started with Azure Content Delivery Network by using an Azure Resource Manager template (ARM template). The template deploys a profile and an endpoint. If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template opens in the Azure portal.
If your environment meets the prerequisites and you're familiar with using ARM t
## Prerequisites ## Review the template
cdn Monitoring And Access Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/monitoring-and-access-log.md
Use [Set-AzDiagnosticSetting](/powershell/module/az.monitor/set-azdiagnosticsett
Retention data is defined by the **-RetentionInDays** option in the command. ### Enable diagnostic logs in a storage account
chaos-studio Chaos Studio Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-bicep.md
# Use Bicep to create an experiment in Azure Chaos Studio This article includes a sample Bicep file to get started in Azure Chaos Studio, including:
chaos-studio Chaos Studio Permissions Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-permissions-security.md
Chaos Studio has the following operations:
| Microsoft.Chaos/experiments/start/action | Start a chaos experiment. | | Microsoft.Chaos/experiments/cancel/action | Stop a chaos experiment. | | Microsoft.Chaos/experiments/executions/Read | Get the execution status for a run of a chaos experiment. |
-| Microsoft.Chaos/experiments/getExecutionDetails/action | Get the execution details (status and errors for each action) for a run of a chaos experiment. |
+| Microsoft.Chaos/experiments/executions/getExecutionDetails/action | Get the execution details (status and errors for each action) for a run of a chaos experiment. |
To assign these permissions granularly, you can [create a custom role](../role-based-access-control/custom-roles.md).
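For instance, a custom role that grants only read and execution-detail access might be sketched like this (the role name and scope are placeholders, and the standard `Microsoft.Chaos/experiments/read` operation is assumed):

```azurecli
# Create a custom role limited to reading experiments and fetching execution details.
az role definition create --role-definition '{
  "Name": "Chaos Experiment Reader (custom)",
  "Description": "Read chaos experiments and their execution details.",
  "Actions": [
    "Microsoft.Chaos/experiments/read",
    "Microsoft.Chaos/experiments/executions/read",
    "Microsoft.Chaos/experiments/executions/getExecutionDetails/action"
  ],
  "AssignableScopes": ["/subscriptions/<subscription-id>"]
}'
```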
chaos-studio Chaos Studio Quickstart Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-quickstart-azure-portal.md
Get started with Azure Chaos Studio by using a virtual machine (VM) shutdown service-direct experiment to make your service more resilient to that failure in real-world scenarios. ## Prerequisites-- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
- A Linux VM running an operating system in the [Azure Chaos Studio version compatibility](chaos-studio-versions.md) list. If you don't have a VM, [follow these steps to create one](../virtual-machines/linux/quick-create-portal.md). ## Register the Chaos Studio resource provider
chaos-studio Chaos Studio Tutorial Aad Outage Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-tutorial-aad-outage-portal.md
You can use a chaos experiment to verify that your application is resilient to f
## Prerequisites -- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
- A network security group. ## Enable Chaos Studio on your network security group
chaos-studio Chaos Studio Tutorial Agent Based Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-tutorial-agent-based-cli.md
You can use these same steps to set up and run an experiment for any agent-based
## Prerequisites -- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
- A virtual machine running an operating system in the [version compatibility](chaos-studio-versions.md) list. If you don't have a VM, you can [create one](../virtual-machines/linux/quick-create-portal.md). - A network setup that permits you to [SSH into your VM](../virtual-machines/ssh-keys-portal.md). - A user-assigned managed identity. If you don't have a user-assigned managed identity, you can [create one](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md).
chaos-studio Chaos Studio Tutorial Agent Based Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-tutorial-agent-based-portal.md
You can use these same steps to set up and run an experiment for any agent-based
## Prerequisites -- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
- A Linux VM running an operating system in the [version compatibility](chaos-studio-versions.md) list. If you don't have a VM, you can [create one](../virtual-machines/linux/quick-create-portal.md). - A network setup that permits you to [SSH into your VM](../virtual-machines/ssh-keys-portal.md). - A user-assigned managed identity *that was assigned to the target VM or virtual machine scale set*. If you don't have a user-assigned managed identity, you can [create one](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md).
chaos-studio Chaos Studio Tutorial Aks Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-tutorial-aks-cli.md
Chaos Studio uses [Chaos Mesh](https://chaos-mesh.org/), a free, open-source cha
## Prerequisites -- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
- An AKS cluster with Linux node pools. If you don't have an AKS cluster, see the AKS quickstart that uses the [Azure CLI](../aks/learn/quick-kubernetes-deploy-cli.md), [Azure PowerShell](../aks/learn/quick-kubernetes-deploy-powershell.md), or the [Azure portal](../aks/learn/quick-kubernetes-deploy-portal.md). ## Limitations
chaos-studio Chaos Studio Tutorial Aks Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-tutorial-aks-portal.md
Chaos Studio uses [Chaos Mesh](https://chaos-mesh.org/), a free, open-source cha
## Prerequisites -- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
- An AKS cluster with a Linux node pool. If you don't have an AKS cluster, see the AKS quickstart that uses the [Azure CLI](../aks/learn/quick-kubernetes-deploy-cli.md), [Azure PowerShell](../aks/learn/quick-kubernetes-deploy-powershell.md), or the [Azure portal](../aks/learn/quick-kubernetes-deploy-portal.md). ## Limitations
chaos-studio Chaos Studio Tutorial Availability Zone Down Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-tutorial-availability-zone-down-portal.md
You can use a chaos experiment to verify that your application is resilient to f
## Prerequisites -- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
- A Virtual Machine Scale Sets instance. - An Autoscale Settings instance.
chaos-studio Chaos Studio Tutorial Dynamic Target Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-tutorial-dynamic-target-cli.md
You can use these same steps to set up and run an experiment for any fault that
## Prerequisites -- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
- An Azure Virtual Machine Scale Sets instance. ## Open Azure Cloud Shell
chaos-studio Chaos Studio Tutorial Dynamic Target Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-tutorial-dynamic-target-portal.md
You can use these same steps to set up and run an experiment for any fault that
## Prerequisites -- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
- An Azure Virtual Machine Scale Sets instance. ## Enable Chaos Studio on your virtual machine scale sets
chaos-studio Chaos Studio Tutorial Service Direct Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-tutorial-service-direct-cli.md
You can use these same steps to set up and run an experiment for any service-dir
## Prerequisites -- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
- An Azure Cosmos DB account. If you don't have an Azure Cosmos DB account, you can [create one](../cosmos-db/sql/create-cosmosdb-resources-portal.md). - At least one read and one write region set up for your Azure Cosmos DB account.
chaos-studio Chaos Studio Tutorial Service Direct Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-tutorial-service-direct-portal.md
You can use these same steps to set up and run an experiment for any service-dir
## Prerequisites -- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
- An Azure Cosmos DB account. If you don't have an Azure Cosmos DB account, follow these steps to [create one](../cosmos-db/sql/create-cosmosdb-resources-portal.md). - At least one read and one write region set up for your Azure Cosmos DB account.
cloud-services Cloud Services Allocation Failures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-allocation-failures.md
When you deploy instances to a Cloud Service or add new web or worker role instances, Microsoft Azure allocates compute resources. You may occasionally receive errors when performing these operations even before you reach the Azure subscription limits. This article explains the causes of some of the common allocation failures and suggests possible remediation. The information may also be useful when you plan the deployment of your services. ### Background – How allocation works
cloud-services Cloud Services Troubleshoot Common Issues Which Cause Roles Recycle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-troubleshoot-common-issues-which-cause-roles-recycle.md
This article discusses some of the common causes of deployment problems and provides troubleshooting tips to help you resolve these problems. An indication that a problem exists with an application is when the role instance fails to start, or it cycles between the initializing, busy, and stopping states. ## Missing runtime dependencies If a role in your application relies on any assembly that is not part of the .NET Framework or the Azure managed library, you must explicitly include that assembly in the application package. Keep in mind that other Microsoft frameworks are not available on Azure by default. If your role relies on such a framework, you must add those assemblies to the application package.
cloud-services Cloud Services Troubleshoot Default Temp Folder Size Too Small Web Worker Role https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-troubleshoot-default-temp-folder-size-too-small-web-worker-role.md
The default temporary directory of a cloud service worker or web role has a maximum size of 100 MB, which may become full at some point. This article describes how to avoid running out of space for the temporary directory. ## Why do I run out of space? The standard Windows environment variables TEMP and TMP are available to code that is running in your application. Both TEMP and TMP point to a single directory that has a maximum size of 100 MB. Any data that is stored in this directory is not persisted across the lifecycle of the cloud service; if the role instances in a cloud service are recycled, the directory is cleaned.
cloud-services Cloud Services Troubleshoot Deployment Problems https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-troubleshoot-deployment-problems.md
You can find the **Properties** pane as follows:
> > ## Problem: I cannot access my website, but my deployment is started and all role instances are ready The website URL link shown in the portal does not include the port. The default port for websites is 80. If your application is configured to run in a different port, you must add the correct port number to the URL when accessing the website.
cloud-services Cloud Services Troubleshoot Roles That Fail Start https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-troubleshoot-roles-that-fail-start.md
Here are some common problems and solutions related to Azure Cloud Services roles that fail to start. ## Missing DLLs or dependencies Unresponsive roles and roles that are cycling between **Initializing**, **Busy**, and **Stopping** states can be caused by missing DLLs or assemblies.
communication-services Teams User Calling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/teams-user-calling.md
The following list presents the set of features that are currently available in
| | Transfer a call to a user | ✔️ | ✔️ | ✔️ | ✔️ | | | Be transferred to a user or call | ✔️ | ✔️ | ✔️ | ✔️ | | | Transfer a call to a call | ✔️ | ✔️ | ✔️ | ✔️ |
-| | Transfer a call to Voicemail | ❌ | ❌ | ❌ | ❌ |
+| | Transfer a call to Voicemail | ✔️ | ✔️ | ✔️ | ✔️ |
| | Be transferred to voicemail | ✔️ | ✔️ | ✔️ | ✔️ | | | Merge ongoing calls | ❌ | ❌ | ❌ | ❌ | | | Does start a call and add user operations honor shared line configuration | ✔️ | ✔️ | ✔️ | ✔️ |
communication-services Transfer Calls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/calling-sdk/transfer-calls.md
# Transfer calls
-During an active call, you may want to transfer the call to another person or number. Let's learn how.
+During an active call, you may want to transfer the call to another person, a number, or voicemail. Let's learn how.
## Prerequisites
confidential-computing Quick Create Confidential Vm Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/quick-create-confidential-vm-azure-cli.md
For this step you need to be a Global Admin or you need to have the User Access
``` 2. Create an Azure Key Vault using the [az keyvault create](/cli/azure/keyvault) command. For the pricing tier, select Premium (includes support for HSM backed keys). Make sure that you have an owner role in this key vault. ```azurecli-interactive
- az keyvault create -n keyVaultName -g myResourceGroup --enabled-for-disk-encryption true --sku premium --enable-purge-protection true
+ az keyvault create -n keyVaultName -g myResourceGroup --enabled-for-disk-encryption true --sku premium --enable-purge-protection true --enable-rbac-authorization false
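   # Purge protection can't be disabled once enabled, and --enable-rbac-authorization false
   # keeps the vault on access policies, which the next step (granting `get` and `release`
   # to the Confidential VM Orchestrator) relies on.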
``` 3. Give `Confidential VM Orchestrator` permissions to `get` and `release` the key vault. ```Powershell
confidential-computing Quick Create Confidential Vm Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/quick-create-confidential-vm-portal.md
To create a confidential VM in the Azure portal using an Azure Marketplace image
q. Go to the disk encryption set resource in the Azure portal.
- r. Select the pink banner to grant permissions to Azure Key Vault.
+ r. If you see a blue info banner, follow the instructions provided to grant access. If you see a pink banner, select it to grant the necessary permissions to Azure Key Vault.
> [!IMPORTANT] > You must perform this step to successfully create the confidential VM.
confidential-ledger Quickstart Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/quickstart-cli.md
Azure confidential ledger is a cloud service that provides a high integrity stor
For more information on Azure confidential ledger and examples of what can be stored in a confidential ledger, see [About Microsoft Azure confidential ledger](overview.md). [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
confidential-ledger Quickstart Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/quickstart-portal.md
Azure confidential ledger is a cloud service that provides a high integrity store for sensitive data logs and records that require data to be kept intact. For more information on Azure confidential ledger and examples of what can be stored in a confidential ledger, see [About Microsoft Azure confidential ledger](overview.md). In this quickstart, you create a confidential ledger with the [Azure portal](https://portal.azure.com).
confidential-ledger Quickstart Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/quickstart-powershell.md
Azure confidential ledger is a cloud service that provides a high integrity stor
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. In this quickstart, you create a confidential ledger with [Azure PowerShell](/powershell/azure/). If you choose to install and use PowerShell locally, this tutorial requires Azure PowerShell module version 1.0.0 or later. Type `$PSVersionTable.PSVersion` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Login-AzAccount` to create a connection with Azure.
confidential-ledger Quickstart Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/quickstart-python.md
Get started with the Microsoft Azure confidential ledger client library for Pyth
Microsoft Azure confidential ledger is a new and highly secure service for managing sensitive data records. Based on a permissioned blockchain model, Azure confidential ledger offers unique data integrity advantages, such as immutability (making the ledger append-only) and tamper proofing (to ensure all records are kept intact). [API reference documentation](https://azuresdkdocs.blob.core.windows.net/$web/python/azure-confidentialledger/latest/azure.confidentialledger.html) | [Library source code](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/confidentialledger) | [Package (Python Package Index) Management Library](https://pypi.org/project/azure-mgmt-confidentialledger/)| [Package (Python Package Index) Client Library](https://pypi.org/project/azure-confidentialledger/)
confidential-ledger Quickstart Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/quickstart-template.md
Last updated 01/30/2024
[Microsoft Azure confidential ledger](overview.md) is a new and highly secure service for managing sensitive data records. This quickstart describes how to use an Azure Resource Manager template (ARM template) to create a new ledger. If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
connectors Connectors Create Api Azure Event Hubs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-azure-event-hubs.md
Last updated 01/04/2024
# Connect to an event hub from workflows in Azure Logic Apps The Azure Event Hubs connector helps you connect your logic app workflows to event hubs in Azure. You can then have your workflows monitor and manage events that are sent to an event hub. For example, your workflow can check, send, and receive events from your event hub. This article provides a get started guide to using the Azure Event Hubs connector by showing how to connect to an event hub and add an Event Hubs trigger or action to your workflow.
connectors Connectors Create Api Container Instances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-container-instances.md
Last updated 01/04/2024
# Deploy and manage Azure Container Instances by using Azure Logic Apps With Azure Logic Apps and the Azure Container Instance connector, you can set up automated tasks and workflows that deploy and manage [container groups](../container-instances/container-instances-container-groups.md). The Container Instance connector supports the following actions:
connectors Connectors Create Api Db2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-db2.md
Last updated 01/04/2024
# Access and manage IBM DB2 resources by using Azure Logic Apps With [Azure Logic Apps](../logic-apps/logic-apps-overview.md) and the [IBM DB2 connector](/connectors/db2/), you can create automated
connectors Connectors Create Api Informix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-informix.md
Last updated 01/04/2024
# Manage IBM Informix database resources by using Azure Logic Apps With [Azure Logic Apps](../logic-apps/logic-apps-overview.md) and the [Informix connector](/connectors/informix/), you can create automated tasks and workflows that manage resources in an IBM Informix database. This connector includes a Microsoft client that communicates with remote Informix server computers across a TCP/IP network, including cloud-based databases such as IBM Informix for Windows running in Azure virtualization and on-premises databases when you use the [on-premises data gateway](../logic-apps/logic-apps-gateway-connection.md). You can connect to these Informix platforms and versions if they are configured to support Distributed Relational Database Architecture (DRDA) client connections:
connectors Connectors Create Api Smtp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-smtp.md
Last updated 01/04/2024
# Connect to your SMTP account from Azure Logic Apps With Azure Logic Apps and the Simple Mail Transfer Protocol (SMTP) connector, you can create automated tasks and workflows that send email from your SMTP account.
connectors Connectors Integrate Security Operations Create Api Microsoft Graph Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-integrate-security-operations-create-api-microsoft-graph-security.md
Last updated 01/04/2024
# Improve threat protection by integrating security operations with Microsoft Graph Security & Azure Logic Apps With [Azure Logic Apps](../logic-apps/logic-apps-overview.md) and the [Microsoft Graph Security](/graph/security-concept-overview) connector, you can improve how your app detects, protects, and responds to threats by creating automated workflows for integrating Microsoft security products, services, and partners. For example, you can create [Microsoft Defender for Cloud playbooks](../security-center/workflow-automation.yml) that monitor and manage Microsoft Graph Security entities, such as alerts. Here are some scenarios that are supported by the Microsoft Graph Security connector:
connectors Connectors Native Delay https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-native-delay.md
Last updated 01/04/2024
# Delay running the next action in Azure Logic Apps To have your logic app wait an amount of time before running the next action, you can add the built-in **Delay** action before an action in your logic app's workflow. Or, you can add the built-in **Delay until** action to wait until a specific date and time before running the next action. For more information about the built-in Schedule actions and triggers, see [Schedule and run recurring automated, tasks, and workflows with Azure Logic Apps](../logic-apps/concepts-schedule-automated-recurring-tasks-workflows.md).
connectors Connectors Native Sliding Window https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-native-sliding-window.md
Last updated 01/04/2024
# Schedule and run tasks for contiguous data by using the Sliding Window trigger in Azure Logic Apps To regularly run tasks, processes, or jobs that must handle data in contiguous chunks, you can start your logic app workflow with the **Sliding Window** trigger. You can set a date and time as well as a time zone for starting the workflow and a recurrence for repeating that workflow. If recurrences are missed for any reason, for example, due to disruptions or disabled workflows, this trigger processes those missed recurrences. For example, when synchronizing data between your database and backup storage, use the Sliding Window trigger so that the data gets synchronized without incurring gaps. For more information about the built-in Schedule triggers and actions, see [Schedule and run recurring automated, tasks, and workflows with Azure Logic Apps](../logic-apps/concepts-schedule-automated-recurring-tasks-workflows.md).
container-apps Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/managed-identity.md
User-assigned identities are ideal for workloads that:
## Limitations
-- Managed identities in scale rules isn't supported. You need to include connection strings or keys in the `secretRef` of the scaling rule.
-- [Init containers](containers.md#init-containers) can't access managed identities.
+[Init containers](containers.md#init-containers) can't access managed identities in [consumption-only environments](environment.md#types) and [dedicated workload profile environments](environment.md#types).
## Configure managed identities
To get a token for a resource, make an HTTP `GET` request to the endpoint, inclu
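For illustration, here's a minimal sketch of such a token request in Python. It assumes the app runs in Azure Container Apps, which injects the `IDENTITY_ENDPOINT` and `IDENTITY_HEADER` environment variables; the target resource and `api-version` shown are placeholders to adjust for your scenario.

```python
# Minimal sketch; the resource URI and api-version are illustrative placeholders.
import json
import os
import urllib.parse
import urllib.request

resource = "https://vault.azure.net"  # the Azure service you want a token for
query = urllib.parse.urlencode({"resource": resource, "api-version": "2019-08-01"})

# Container Apps injects IDENTITY_ENDPOINT and IDENTITY_HEADER into the container.
request = urllib.request.Request(f"{os.environ['IDENTITY_ENDPOINT']}?{query}")
request.add_header("X-IDENTITY-HEADER", os.environ["IDENTITY_HEADER"])

with urllib.request.urlopen(request) as response:
    access_token = json.load(response)["access_token"]
```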
+## Use managed identity for scale rules
+
+Starting in API version `2024-02-02-preview`, you can use managed identities in your scale rules to authenticate with Azure services that support managed identities. To use a managed identity, use the `identity` property instead of the `auth` property in your scale rule. Acceptable values for the `identity` property are either the Azure resource ID of a user-assigned identity, or `system` to use a system-assigned identity.
+
+The following example shows how to use a managed identity with an Azure Queue Storage scale rule. The queue storage account uses the `accountName` property to identify the storage account, while the `identity` property specifies which managed identity to use. You don't need to use the `auth` property.
+
+```json
+"scale": {
+ "minReplicas": 1,
+ "maxReplicas": 10,
+ "rules": [{
+ "name": "myQueueRule",
+ "azureQueue": {
+ "accountName": "mystorageaccount",
+ "queueName": "myqueue",
+ "queueLength": 2,
+ "identity": "<IDENTITY1_RESOURCE_ID>"
+ }
+ }]
+}
+```
+
+## Control managed identity availability
+
+Container Apps allows you to specify [init containers](containers.md#init-containers) and main containers. By default, both main and init containers in a consumption workload profile environment can use managed identity to access other Azure services. In consumption-only environments and dedicated workload profile environments, only main containers can use managed identity. Managed identity access tokens are available for every managed identity configured on the container app. However, in some situations, only the init container or the main container requires access tokens for a managed identity. Other times, you may use a managed identity only to access your Azure Container Registry to pull the container image, and your application itself doesn't need to have access to your Azure Container Registry.
+
+Starting in API version `2024-02-02-preview`, you can control which managed identities are available to your container app during the init and main phases to follow the security principle of least privilege. The following options are available:
+
+- `Init`: available only to init containers. Use this when you want to perform some initialization work that requires a managed identity, but you no longer need the managed identity in the main container. This option is currently only supported in [workload profile consumption environments](environment.md#types).
+- `Main`: available only to main containers. Use this if your init container does not need managed identity.
+- `All`: available to all containers. This is the default setting.
+- `None`: not available to any containers. Use this when you have a managed identity that is only used for ACR image pull, scale rules, or Key Vault secrets and does not need to be available to the code running in your containers.
+
+The following example shows how to configure a container app on a workload profile consumption environment that:
+
+- Restricts the container app's system-assigned identity to main containers only.
+- Restricts a specific user-assigned identity to init containers only.
+- Uses a specific user-assigned identity for Azure Container Registry image pull without allowing the code in the containers to use that managed identity to access the registry. In this example, the containers themselves don't need to access the registry.
+
+This approach limits the resources that can be accessed if a malicious actor were to gain unauthorized access to the containers.
+
+```json
+{
+ "location": "eastus2",
+ "identity":{
+ "type": "SystemAssigned, UserAssigned",
+ "userAssignedIdentities": {
+ "<IDENTITY1_RESOURCE_ID>":{},
+ "<ACR_IMAGEPULL_IDENTITY_RESOURCE_ID>":{}
+ }
+ },
+ "properties": {
+ "workloadProfileName":"Consumption",
+ "environmentId": "<CONTAINER_APPS_ENVIRONMENT_ID>",
+ "configuration": {
+ "registries": [
+ {
+ "server": "myregistry.azurecr.io",
+ "identity": "<ACR_IMAGEPULL_IDENTITY_RESOURCE_ID>"
+ }],
+ "identitySettings":[
+ {
+ "identity": "<ACR_IMAGEPULL_IDENTITY_RESOURCE_ID>",
+ "lifecycle": "none"
+ },
+ {
+ "identity": "<IDENTITY1_RESOURCE_ID>",
+ "lifecycle": "init"
+ },
+ {
+ "identity": "system",
+ "lifecycle": "main"
+ }]
+ },
+ "template": {
+ "containers":[
+ {
+ "image":"myregistry.azurecr.io/main:1.0",
+ "name":"app-main"
+ }
+ ],
+ "initContainers":[
+ {
+ "image":"myregistry.azurecr.io/init:1.0",
+ "name":"app-init",
+ "name":"app-init"
+ ]
+ }
+ }
+}
+```
+ ## View managed identities You can show the system-assigned and user-assigned managed identities using the following Azure CLI command. The output shows the managed identity type, tenant IDs and principal IDs of all managed identities assigned to your container app.
container-apps Sessions Custom Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/sessions-custom-container.md
Previously updated : 05/06/2024 Last updated : 06/26/2024
In addition to the built-in code interpreter that Azure Container Apps dynamic s
## Uses for custom container sessions
-Custom containers allow you to build solutions tailored to your needs. They enable you to execute code or applications in environments that are fast and ephemeral and offer secure, sandboxed spaces with Hyper-V. Additionally, they can be configured with optional network isolation. Some examples include:
+Custom containers allow you to build solutions tailored to your needs. They enable you to execute code or run applications in environments that are fast and ephemeral and offer secure, sandboxed spaces with Hyper-V. Additionally, they can be configured with optional network isolation. Some examples include:
* **Code interpreters**: When you need to execute untrusted code in secure sandboxes by a language not supported in the built-in interpreter, or you need full control over the code interpreter environment.
-* **Isolated execution**: When you need to run applications in hostile, multitenant scenarios where each tenant or user has their own sandboxed environment. These environments are isolated from each other and from the host application. Some examples include applications that run user-provided code, code that grants end user access to a cloud-based shell, and development environments.
+* **Isolated execution**: When you need to run applications in hostile, multitenant scenarios where each tenant or user has their own sandboxed environment. These environments are isolated from each other and from the host application. Some examples include applications that run user-provided code, code that grants end user access to a cloud-based shell, AI agents, and development environments.
## Using custom container sessions
When your application requests a session, an instance is instantly allocated fro
To create a custom container session pool, you need to provide a container image and pool configuration settings.
-You communicate with each session using HTTP requests. The custom container must expose an HTTP server on a port that you specify to respond to these requests.
+You invoke or communicate with each session using HTTP requests. The custom container must expose an HTTP server on a port that you specify to respond to these requests.
# [Azure CLI](#tab/azure-cli)
Your application interacts with a session using the session pool's management AP
A pool management endpoint for custom container sessions follows this format: `https://<SESSION_POOL>.<ENVIRONMENT_ID>.<REGION>.azurecontainerapps.io`. To retrieve the session pool's management endpoint, use the `az containerapp sessionpool show` command: ```bash az containerapp sessionpool show \ --name <SESSION_POOL_NAME> \
az containerapp sessionpool show \
All requests to the pool management endpoint must include an `Authorization` header with a bearer token. To learn how to authenticate with the pool management API, see [Authentication](sessions.md#authentication).
-Every request to the API requires query string parameter of `identifier` with value of the session ID. The session ID is a unique identifier for the session that allows you to interact with specific sessions. To learn more about session identifiers, see [Session identifiers](sessions.md#session-identifiers).
+Each API request must also include the query string parameter `identifier` with the session ID. This unique session ID enables your application to interact with specific sessions. To learn more about session identifiers, see [Session identifiers](sessions.md#session-identifiers).
+
+> [!IMPORTANT]
+> The session identifier is sensitive information. Create and manage its value through a secure process, and make sure your application gives each user or tenant access only to their own sessions.
+> Failure to secure access to sessions may result in misuse or unauthorized access to data stored in your users' sessions. For more information, see [Session identifiers](sessions.md#session-identifiers).
+
+#### Forwarding requests to the session's container
+
+Anything in the path following the base pool management endpoint is forwarded to the session's container.
+
+For example, if you make a call to `<POOL_MANAGEMENT_ENDPOINT>/api/uploadfile`, the request is routed to the session's container at `0.0.0.0:<TARGET_PORT>/api/uploadfile`.
+
+#### Continuous session interaction
+
+You can continue making requests to the same session. If there are no requests to the session for longer than the cooldown period, the session is automatically deleted.
+
+#### Sample request
The following example shows a request to a custom container session, identified by a user ID. Before you send the request, replace the placeholders between the `<>` brackets with values specific to your request. ```http
-POST https://<SESSION_POOL_NAME>.<ENVIRONMENT_ID>.<REGION>.azurecontainerapps.io/api/execute-command?identifier=<USER_ID>
+POST https://<SESSION_POOL_NAME>.<ENVIRONMENT_ID>.<REGION>.azurecontainerapps.io/<API_PATH_EXPOSED_BY_CONTAINER>?identifier=<USER_ID>
Authorization: Bearer <TOKEN>- { "command": "echo 'Hello, world!'" }
Authorization: Bearer <TOKEN>
This request is forwarded to the custom container session whose identifier is the user's ID. If the session isn't already running, Azure Container Apps allocates a session from the pool before forwarding the request.
-In the example, the session's container receives the request at `http://0.0.0.0:<INGRESS_PORT>/api/execute-command`.
+In the example, the session's container receives the request at `http://0.0.0.0:<INGRESS_PORT>/<API_PATH_EXPOSED_BY_CONTAINER>`.
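+
+To make this concrete, here's a minimal sketch of such a request in Python, assuming you already hold a bearer token and a session ID, and that `/api/uploadfile` stands in for whatever path your container actually exposes:
+
+```python
+# Minimal sketch; POOL_ENDPOINT, the token, and the /api/uploadfile path are
+# placeholders for values specific to your session pool and container.
+import requests
+
+POOL_ENDPOINT = "https://<SESSION_POOL_NAME>.<ENVIRONMENT_ID>.<REGION>.azurecontainerapps.io"
+
+def call_session(session_id: str, token: str, payload: dict) -> requests.Response:
+    # Everything after the base endpoint is forwarded to the session's container.
+    return requests.post(
+        f"{POOL_ENDPOINT}/api/uploadfile",
+        params={"identifier": session_id},  # routes the call to a specific session
+        headers={"Authorization": f"Bearer {token}"},
+        json=payload,
+    )
+```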
## Next steps
container-apps Tutorial Java Quarkus Connect Managed Identity Postgresql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/tutorial-java-quarkus-connect-managed-identity-postgresql-database.md
What you will learn:
> * Create a PostgreSQL database in Azure. > * Connect to a PostgreSQL Database with managed identity using Service Connector. ## 1. Prerequisites
container-instances Container Instances Egress Ip Address https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-egress-ip-address.md
In this article, you use the Azure CLI to create the resources for this scenario
You then validate ingress and egress from example container groups through the firewall. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)] > [!NOTE] > To download the complete script, go to [full script](https://github.com/Azure-Samples/azure-cli-samples/blob/master/container-instances/egress-ip-address.sh).
container-instances Container Instances Environment Variables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-environment-variables.md
For example, if you run the Microsoft [aci-wordcount][aci-wordcount] container i
If you need to pass secrets as environment variables, Azure Container Instances supports [secure values](#secure-values) for both Windows and Linux containers. ## Azure CLI example
container-instances Container Instances Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-log-analytics.md
To send container group log and event data to Azure Monitor logs, specify an exi
The following sections describe how to create a logging-enabled container group and how to query logs. You can also [update a container group](container-instances-update.md) with a workspace ID and workspace key to enable logging. ## Prerequisites
container-instances Container Instances Multi Container Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-multi-container-group.md
A Resource Manager template can be readily adapted for scenarios when you need t
> [!NOTE] > Multi-container groups are currently restricted to Linux containers. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
container-instances Container Instances Multi Container Yaml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-multi-container-yaml.md
In this tutorial, you follow steps to run a simple two-container sidecar configu
> [!NOTE] > Multi-container groups are currently restricted to Linux containers. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
container-instances Container Instances Nat Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-nat-gateway.md
You then validate egress from example container groups through the NAT gateway.
> [!NOTE] > The ACI service recommends integrating with a NAT gateway for containerized workloads that have static egress but not static ingress requirements. For ACI architecture that supports both static ingress and egress, please see the following tutorial: [Use Azure Firewall for ingress and egress](container-instances-egress-ip-address.md). [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)] > [!NOTE] > To download the complete script, go to [full script](https://github.com/Azure-Samples/azure-cli-samples/blob/master/container-instances/nat-gateway.sh).
container-instances Container Instances Quickstart Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-quickstart-bicep.md
Use Azure Container Instances to run serverless Docker containers in Azure with simplicity and speed. Deploy an application to a container instance on-demand when you don't need a full container orchestration platform like Azure Kubernetes Service. In this quickstart, you use a Bicep file to deploy an isolated Docker container and make its web application available with a public IP address. ## Prerequisites
container-instances Container Instances Quickstart Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-quickstart-powershell.md
In this quickstart, you use Azure PowerShell to deploy an isolated Windows conta
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin. If you choose to install and use PowerShell locally, this tutorial requires the Azure PowerShell module. Run `Get-Module -ListAvailable Az` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
container-instances Container Instances Quickstart Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-quickstart-template.md
Use Azure Container Instances to run serverless Docker containers in Azure with simplicity and speed. Deploy an application to a container instance on-demand when you don't need a full container orchestration platform like Azure Kubernetes Service. In this quickstart, you use an Azure Resource Manager template (ARM template) to deploy an isolated Docker container and make its web application available with a public IP address. If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
container-instances Container Instances Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-quickstart.md
In this quickstart, you use the Azure CLI to deploy an isolated Docker container
![View an app deployed to Azure Container Instances in browser][aci-app-browser] [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
container-instances Container Instances Volume Azure Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-volume-azure-files.md
By default, Azure Container Instances are stateless. If the container is restart
## Limitations
+* Azure Storage doesn't support SMB mounting of file shares using managed identity.
* You can only mount Azure Files shares to Linux containers. Review more about the differences in feature support for Linux and Windows container groups in the [overview](container-instances-overview.md#linux-and-windows-containers). * Azure file share volume mounts require the Linux container to run as *root*. * Azure file share volume mounts are limited to CIFS support.
container-registry Container Registry Artifact Cache https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-artifact-cache.md
Artifact cache currently supports the following upstream registries:
| Microsoft Artifact Registry | Supports unauthenticated pulls only. | Azure CLI, Azure portal |
| AWS Elastic Container Registry (ECR) Public Gallery | Supports unauthenticated pulls only. | Azure CLI, Azure portal |
| GitHub Container Registry | Supports both authenticated and unauthenticated pulls. | Azure CLI, Azure portal |
-| Nvidia | Supports both authenticated and unauthenticated pulls. | Azure CLI |
| Quay | Supports both authenticated and unauthenticated pulls. | Azure CLI, Azure portal |
| registry.k8s.io | Supports both authenticated and unauthenticated pulls. | Azure CLI |
| Google Container Registry | Supports both authenticated and unauthenticated pulls. | Azure CLI |
container-registry Container Registry Event Grid Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-event-grid-quickstart.md
After you complete the steps in this article, events sent from your container re
![Web browser rendering the sample web application with three received events][sample-app-01] [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
container-registry Container Registry Get Started Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-get-started-bicep.md
This quickstart shows how to create an Azure Container Registry instance by using a Bicep file. ## Prerequisites
container-registry Container Registry Get Started Geo Replication Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-get-started-geo-replication-template.md
This quickstart shows how to create an Azure Container Registry instance by using an Azure Resource Manager template (ARM template). The template sets up a [geo-replicated](container-registry-geo-replication.md) registry, which automatically synchronizes registry content across more than one Azure region. Geo-replication enables network-close access to images from regional deployments, while providing a single management experience. It's a feature of the [Premium](container-registry-skus.md) registry service tier. A registry with replications doesn't support ARM/Bicep template Complete mode deployments.
container-registry Container Registry Get Started Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-get-started-powershell.md
Azure Container Registry is a private registry service for building, storing, an
## Prerequisites This quickstart requires the Azure PowerShell module. Run `Get-Module -ListAvailable Az` to determine your installed version. If you need to install or upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell).
container-registry Container Registry Image Lock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-image-lock.md
az acr repository update \
--delete-enabled true --write-enabled true ```
-To restore the default behavior of the *myrepo* repository and all images so that they can be deleted and updated, run the following command:
+To restore the default behavior of the *myrepo* repository, enabling individual images to be deleted and updated, run the following command:
```azurecli az acr repository update \
az acr repository update \
--delete-enabled true --write-enabled true ```
+However, if there's a lock on the manifest, you need to run an additional command to unlock it:
+
+```azurecli
+az acr repository update \
+ --name myregistry --image $repo@$digest \
+ --delete-enabled true --write-enabled true
+```
+ ## Next steps In this article, you learned about using the [az acr repository update][az-acr-repository-update] command to prevent deletion or updating of image versions in a repository. To set additional attributes, see the [az acr repository update][az-acr-repository-update] command reference.
container-registry Container Registry Quickstart Task Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-quickstart-task-cli.md
In this quickstart, you use [Azure Container Registry Tasks][container-registry-
After this quickstart, explore more advanced features of ACR Tasks using the [tutorials](container-registry-tutorial-quick-task.md). ACR Tasks can automate image builds based on code commits or base image updates, or test multiple containers, in parallel, among other scenarios. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
container-registry Troubleshoot Artifact Cache https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/troubleshoot-artifact-cache.md
Artifact cache currently supports the following upstream registries:
| Microsoft Artifact Registry | Supports unauthenticated pulls only. | Azure CLI, Azure portal |
| AWS Elastic Container Registry (ECR) Public Gallery | Supports unauthenticated pulls only. | Azure CLI, Azure portal |
| GitHub Container Registry | Supports both authenticated and unauthenticated pulls. | Azure CLI, Azure portal |
-| Nvidia | Supports both authenticated and unauthenticated pulls. | Azure CLI |
| Quay | Supports both authenticated and unauthenticated pulls. | Azure CLI, Azure portal |
| registry.k8s.io | Supports both authenticated and unauthenticated pulls. | Azure CLI |
| Google Container Registry | Supports both authenticated and unauthenticated pulls. | Azure CLI |
cosmos-db Ai Agents https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/ai-agents.md
+
+ Title: AI agents
+description: AI agent key concepts and implementation of AI agent memory system.
+ Last updated : 06/26/2024
+# AI agents
+
+AI agents are designed to perform specific tasks, answer questions, and automate processes for users. These agents vary widely in complexity, ranging from simple chatbots, to copilots, to advanced AI assistants in the form of digital or robotic systems that can execute complex workflows autonomously. This article provides conceptual overviews and detailed implementation samples of AI agents.
+
+## What are AI agents?
+
+Unlike standalone large language models (LLMs) or rule-based software/hardware systems, AI agents possess the following common features:
+
+- [Planning](#reasoning-and-planning). AI agents can plan and sequence actions to achieve specific goals. The integration of LLMs has revolutionized their planning capabilities.
+- [Tool usage](#frameworks). Advanced AI agents can utilize various tools, such as code execution, search, and computation capabilities, to perform tasks effectively. Tool usage is often done through function calling.
+- [Perception](#frameworks). AI agents can perceive and process information from their environment, including visual, auditory, and other sensory data, making them more interactive and context aware.
+- [Memory](#agent-memory-system). AI agents possess the ability to remember past interactions (tool usage and perception) and behaviors (tool usage and planning). They store these experiences and even perform self-reflection to inform future actions. This memory component allows for continuity and improvement in agent performance over time.
+
+> [!NOTE]
+> The usage of the term "memory" in the context of AI agents should not be confused with the concept of computer memory (like volatile, non-volatile, and persistent memory).
+
+### Copilots
+
+Copilots are a type of AI agent designed to work alongside users rather than operate independently. Unlike fully automated agents, copilots provide suggestions and recommendations to assist users in completing tasks. For instance, when a user is writing an email, a copilot might suggest phrases, sentences, or paragraphs. The user might also ask the copilot to find relevant information in other emails or files to support the suggestion (see [retrieval-augmented generation](vector-database.md#retrieval-augmented-generation)). The user can accept, reject, or edit the suggested passages.
+
+### Autonomous agents
+
+Autonomous agents can operate more independently. When you set up autonomous agents to assist with email composition, you could enable them to perform the following tasks:
+
+- Consult existing emails, chats, files, and other internal and public information that are related to the subject matter
+- Perform qualitative or quantitative analysis on the collected information, and draw conclusions that are relevant to the email
+- Write the complete email based on the conclusions and incorporate supporting evidence
+- Attach relevant files to the email
+- Review the email to ensure that all the incorporated information is factually accurate, and that the assertions are valid
+- Select the appropriate recipients for "To," "Cc," and/or "Bcc" and look up their email addresses
+- Schedule an appropriate time to send the email
+- Perform follow-ups if responses are expected but not received
+
+You may configure the agents to perform each of the above steps with or without human approval.
+
+### Multi-agent systems
+
+Currently, the prevailing strategy for achieving performant autonomous agents is through multi-agent systems. In multi-agent systems, multiple autonomous agents, whether in digital or robotic form, interact or work together to achieve individual or collective goals. Agents in the system can operate independently and possess their own knowledge or information. Each agent may also have the capability to perceive its environment, make decisions, and execute actions based on its objectives.
+
+Key characteristics of multi-agent systems:
+
+- Autonomous: Each agent functions independently, making its own decisions without direct human intervention or control by other agents.
+- Interactive: Agents communicate and collaborate with each other to share information, negotiate, and coordinate their actions. This interaction can occur through various protocols and communication channels.
+- Goal-oriented: Agents in a multi-agent system are designed to achieve specific goals, which can be aligned with individual objectives or a common objective shared among the agents.
+- Distributed: Multi-agent systems operate in a distributed manner, with no single point of control. This distribution enhances the system's robustness, scalability, and resource efficiency.
+
+A multi-agent system provides the following advantages over a copilot or a single instance of LLM inference:
+
+- Dynamic reasoning: Compared to chain-of-thought or tree-of-thought prompting, multi-agent systems allow for dynamic navigation through various reasoning paths.
+- Sophisticated abilities: Multi-agent systems can handle complex or large-scale problems by conducting thorough decision-making processes and distributing tasks among multiple agents.
+- Enhanced memory: Multi-agent systems with memory can overcome the context window limits of large language models, enabling better understanding and information retention.
+
+## Implement AI agents
+
+### Reasoning and planning
+
+Complex reasoning and planning are the hallmark of advanced autonomous agents. Popular autonomous agent frameworks incorporate one or more of the following methodologies for reasoning and planning:
+
+[Self-ask](https://arxiv.org/abs/2210.03350)
+Improves on chain-of-thought prompting by having the model explicitly ask itself (and answer) follow-up questions before answering the initial question.
+
+[Reason and Act (ReAct)](https://arxiv.org/abs/2210.03629)
+Uses LLMs to generate both reasoning traces and task-specific actions in an interleaved manner. Reasoning traces help the model induce, track, and update action plans as well as handle exceptions, while actions allow it to interface with external sources, such as knowledge bases or environments, to gather additional information.
+
+[Plan and Solve](https://arxiv.org/abs/2305.04091)
+Devises a plan to divide the entire task into smaller subtasks, and then carries out the subtasks according to the plan. This approach mitigates the calculation errors, missing-step errors, and semantic misunderstanding errors that are often present in zero-shot chain-of-thought (CoT) prompting.
+
+[Reflection/Self-critique](https://arxiv.org/abs/2303.11366)
+Reflexion agents verbally reflect on task feedback signals, then maintain their own reflective text in an episodic memory buffer to induce better decision-making in subsequent trials.
+
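+To ground these methodologies, here's a toy sketch of a ReAct-style loop in Python. The `llm` and `run_tool` callables are hypothetical stand-ins for a model call and a tool dispatcher; this is illustrative only, not any framework's actual API.
+
+```python
+# Toy ReAct-style loop; `llm` and `run_tool` are hypothetical stand-ins.
+from dataclasses import dataclass
+from typing import Callable
+
+@dataclass
+class Step:
+    thought: str
+    action: str        # a tool name, or "finish" to stop
+    action_input: str
+    answer: str = ""
+
+def react_loop(question: str,
+               llm: Callable[[str], Step],
+               run_tool: Callable[[str, str], str],
+               max_steps: int = 5) -> str:
+    # Interleave reasoning traces ("Thought") with tool calls ("Action"),
+    # feeding each tool result ("Observation") back into the next model call.
+    transcript = f"Question: {question}\n"
+    for _ in range(max_steps):
+        step = llm(transcript)
+        transcript += f"Thought: {step.thought}\n"
+        if step.action == "finish":
+            return step.answer
+        observation = run_tool(step.action, step.action_input)
+        transcript += f"Action: {step.action}[{step.action_input}]\nObservation: {observation}\n"
+    return "No answer found within the step budget."
+```
+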
+### Frameworks
+
+Various frameworks and tools can facilitate the development and deployment of AI agents.
+
+For tool usage and perception that do not require sophisticated planning and memory, some popular LLM orchestrator frameworks are LangChain, LlamaIndex, Prompt Flow, and Semantic Kernel.
+
+For advanced and autonomous planning and execution workflows, [AutoGen](https://microsoft.github.io/autogen/) propelled the multi-agent wave that began in late 2022. OpenAI's [Assistants API](https://platform.openai.com/docs/assistants/overview) allows its users to create agents natively within the GPT ecosystem. [LangChain Agents](https://python.langchain.com/v0.1/docs/modules/agents/) and [LlamaIndex Agents](https://docs.llamaindex.ai/en/stable/use_cases/agents/) also emerged around the same time.
+
+> [!TIP]
+> See the implementation sample section at the end of this article for a tutorial on building a simple multi-agent system using one of the popular frameworks and a unified agent memory system.
+
+### Agent memory system
+
+The prevalent practice for experimenting with AI-enhanced applications in 2022 through 2024 has been using standalone database management systems for various data workflows or types. For example, an in-memory database for caching, a relational database for operational data (including tracing/activity logs and LLM conversation history), and a [pure vector database](vector-database.md#integrated-vector-database-vs-pure-vector-database) for embedding management.
+
+However, this practice of using a complex web of standalone databases can hurt AI agents' performance. Integrating all these disparate databases into a cohesive, interoperable, and resilient memory system for AI agents is a significant challenge in and of itself. Moreover, many of the frequently used database services aren't optimal for the speed and scalability that AI agent systems need. These databases' individual weaknesses are exacerbated in multi-agent systems:
+
+**In-memory databases** are excellent for speed but may struggle with the large-scale data persistence that AI agents require.
+
+**Relational databases** are not ideal for the varied modalities and fluid schemas of data handled by agents. Moreover, relational databases require manual efforts and even downtime to manage provisioning, partitioning, and sharding.
+
+**Pure vector databases** tend to be less effective for transactional operations, real-time updates, and distributed workloads. The popular pure vector databases nowadays typically offer:
+- no guarantees on reads and writes
+- limited ingestion throughput
+- low availability (below 99.9%, or an annualized outage of almost 9 hours or more)
+- only one consistency level (eventual)
+- a resource-intensive in-memory vector index
+- limited options for multitenancy
+- limited security
+
+The next section dives deeper into what makes a robust AI agent memory system.
+
+## Memory can make or break AI agents
+
+Just as efficient database management systems are critical to the performance of software applications, it's critical to provide LLM-powered agents with relevant and useful information to guide their inference. Robust memory systems enable organizing and storing different kinds of information that the agents can retrieve at inference time.
+
+Currently, LLM-powered applications often use [retrieval-augmented generation](vector-database.md#retrieval-augmented-generation) that uses basic semantic search or vector search to retrieve passages or documents. [Vector search](vector-database.md#vector-search) can be useful for finding general information, but it may not capture the specific context, structure, or relationships that are relevant for a particular task or domain.
+
+For example, if the task is to write code, vector search may not be able to retrieve the syntax tree, file system layout, code summaries, or API signatures that are important for generating coherent and correct code. Similarly, if the task is to work with tabular data, vector search may not be able to retrieve the schema, the foreign keys, the stored procedures, or the reports that are useful for querying or analyzing the data.
+
+Weaving together [a web of standalone in-memory, relational, and vector databases](#agent-memory-system) is not an optimal solution for the varied data types, either. This approach may work for prototypical agent systems; however, it adds complexity and performance bottlenecks that can hamper the performance of advanced autonomous agents.
+
+Therefore, a robust memory system should have the following characteristics:
+
+#### Multi-modal (Part I)
+
+AI agent memory systems should provide different collections that store metadata, relationships, entities, summaries, or other types of information that can be useful for different tasks and domains. These collections can be based on the structure and format of the data, such as documents, tables, or code, or they can be based on the content and meaning of the data, such as concepts, associations, or procedural steps.
+
+#### Operational
+
+Memory systems should provide different memory banks that store information that is relevant for the interaction with the user and the environment. Such information may include chat history, user preferences, sensory data, decisions made, facts learned, or other operational data that are updated with high frequency and at high volumes. These memory banks can help the agents remember short-term and long-term information, avoid repeating or contradicting themselves, and maintain task coherence. These requirements must hold true even if the agents perform a multitude of unrelated tasks in succession. In advanced cases, agents may also wargame numerous branch plans that diverge or converge at different points.
+
+#### Sharable but also separable
+
+At the macro level, memory systems should enable multiple AI agents to collaborate on a problem or process different aspects of the problem by providing shared memory that is accessible to all the agents. Shared memory can facilitate the exchange of information and the coordination of actions among the agents. At the same time, the memory system must allow agents to preserve their own persona and characteristics, such as their unique collections of prompts and memories.
+
+#### Multi-modal (Part II)
+
+Not only are memory systems critical to AI agents; they are also important for the humans who develop, maintain, and use these agents. For example, humans may need to supervise agents' planning and execution workflows in near real-time. While supervising, humans may interject with guidance or make in-line edits of agents' dialogues or monologues. Humans may also need to audit the reasoning and actions of agents to verify the validity of the final output. Human-agent interactions are likely in natural or programming languages, while agents "think," "learn," and "remember" through embeddings. This difference in data modalities poses another requirement: memory systems must remain consistent across data modalities.
+
+## Infrastructure for a robust memory system
+
+The above characteristics require AI agent memory systems to be highly scalable and swift. Painstakingly weaving together [a plethora of disparate in-memory, relational, and vector databases](#agent-memory-system) may work for early-stage AI-enabled applications; however, this approach adds complexity and performance bottlenecks that can hamper the performance of advanced autonomous agents.
+
+In place of all the standalone databases, Azure Cosmos DB can serve as a unified solution for AI agent memory systems. Its robustness successfully [enabled OpenAI's ChatGPT service](https://www.youtube.com/watch?v=6IIUtEFKJec&t) to scale dynamically with high reliability and low maintenance. Powered by an atom-record-sequence engine, it is the world's first globally distributed [NoSQL](distributed-nosql.md), [relational](distributed-relational.md), and [vector database](vector-database.md) service that offers a serverless mode. AI agents built on top of Azure Cosmos DB enjoy speed, scale, and simplicity.
+
+#### Speed
+
+Azure Cosmos DB provides single-digit millisecond latency, making it highly suitable for processes requiring rapid data access and management, including caching (traditional and semantic), transactions, and operational workloads. This low latency is crucial for AI agents that need to perform complex reasoning, make real-time decisions, and provide immediate responses. Moreover, its [use of the state-of-the-art DiskANN algorithm](nosql/vector-search.md#enroll-in-the-vector-search-preview-feature) provides accurate and fast vector search with 95% less memory consumption.
+
+#### Scale
+
+Engineered for global distribution and horizontal scalability, and offering support for multi-region I/O and multitenancy, this service ensures that memory systems can expand seamlessly and keep up with rapidly growing agents and associated data. Its SLA-backed 99.999% availability guarantee (less than 5 minutes of downtime per year, contrasting 9 hours or more for pure vector database services) provides a solid foundation for mission-critical workloads. At the same time, its various service models like [Reserved Capacity](reserved-capacity.md) or Serverless drastically lower financial costs.
+
+#### Simplicity
+
+This service simplifies data management and architecture by integrating multiple database functionalities into a single, cohesive platform.
+
+Its integrated vector database capabilities can store, index, and query embeddings alongside the corresponding data in natural or programming languages, enabling greater data consistency, scale, and performance.
+
+Its flexibility easily supports the varied modalities and fluid schemas of the metadata, relationships, entities, summaries, chat history, user preferences, sensory data, decisions, facts learned, or other operational data involved in agent workflows. The database automatically indexes all data without requiring schema or index management, allowing AI agents to perform complex queries quickly and efficiently.
+
+Lastly, its fully managed service eliminates the overhead of database administration, including tasks such as scaling, patching, and backups. Thus, developers can focus on building and optimizing AI agents without worrying about the underlying data infrastructure.
+
+#### Advanced features
+
+Azure Cosmos DB incorporates advanced features such as change feed, which allows tracking and responding to changes in data in real time. This capability is useful for AI agents that need to react to new information promptly.
+
+Additionally, the built-in support for multi-master writes enables high availability and resilience, ensuring continuous operation of AI agents even in the face of regional failures.
+
+The five available [consistency levels](consistency-levels.md) (from strong to eventual) can also cater to various distributed workloads depending on the scenario requirements.
+
+> [!TIP]
+> You may choose from two Azure Cosmos DB APIs to build your AI agent memory system: Azure Cosmos DB for NoSQL, and vCore-based Azure Cosmos DB for MongoDB. The former provides 99.999% availability and [three vector search algorithms](nosql/vector-search.md): IVF, HNSW, and the state-of-the-art DiskANN. The latter provides 99.995% availability and [two vector search algorithms](mongodb/vcore/vector-search.md): IVF and HNSW.
+
+> [!div class="nextstepaction"]
+> [Use the Azure Cosmos DB lifetime free tier](free-tier.md)
+
+## Implementation sample
+
+This section explores the implementation of an autonomous agent to process traveler inquiries and bookings in a cruise line travel application.
+
+Chatbots have been a long-standing concept, but AI agents are advancing beyond basic human conversation to carry out tasks based on natural language that traditionally required coded logic. This AI travel agent uses the LangChain Agent framework for agent planning, tool usage, and perception. Its [unified memory system](#memory-can-make-or-break-ai-agents) uses the [vector database](vector-database.md) and document store capabilities of Azure Cosmos DB to address traveler inquiries and facilitate trip bookings, ensuring [speed, scale, and simplicity](#infrastructure-for-a-robust-memory-system). It operates within a Python FastAPI backend and supports user interactions through a React JS user interface.
+
+### Prerequisites
+
+- If you don't have an Azure subscription, you may [try Azure Cosmos DB free](try-free.md) for 30 days without creating an Azure account; no credit card is required, and no commitment follows when the trial period ends.
+- Set up an account for the OpenAI API or Azure OpenAI Service.
+- Create a vCore cluster in Azure Cosmos DB for MongoDB by following this [QuickStart](mongodb/vcore/quickstart-portal.md).
+- An IDE for development, such as VS Code.
+- Python 3.11.4 installed on your development environment.
+
+### Download the project
+
+All of the code and sample datasets are available on [GitHub](https://github.com/jonathanscholtes/Travel-AI-Agent-React-FastAPI-and-Cosmos-DB-Vector-Store). In this repository, you can find the following folders:
+
+- **loader**: This folder contains Python code for loading sample documents and vector embeddings into Azure Cosmos DB.
+- **api**: This folder contains the Python FastAPI code for hosting the travel AI agent.
+- **web**: This folder contains the React JS web interface.
+
+### Load travel documents into Azure Cosmos DB
+
+The GitHub repository contains a Python project located in the **loader** directory intended for loading the sample travel documents into Azure Cosmos DB. This section sets up the project to load the documents.
+
+### Set up the environment for loader
+
+Set up your Python virtual environment in the **loader** directory by running the following:
+```bash
+ python -m venv venv
+```
+
+Activate your environment and install dependencies in the **loader** directory:
+```bash
+ venv\Scripts\activate
+ python -m pip install -r requirements.txt
+```
+
+Create a file named **.env** in the **loader** directory to store the following environment variables.
+```bash
+ OPENAI_API_KEY="**Your Open AI Key**"
+ MONGO_CONNECTION_STRING="mongodb+srv:**your connection string from Azure Cosmos DB**"
+```
+
+### Load documents and vectors
+
+The Python file **main.py** serves as the central entry point for loading data into Azure Cosmos DB. This code processes the sample travel data from the GitHub repository, including information about ships and destinations. Additionally, it generates travel itinerary packages for each ship and destination, allowing travelers to book them using the AI agent. The CosmosDBLoader is responsible for creating collections, vector embeddings, and indexes in the Azure Cosmos DB instance.
+
+*main.py*
+```python
+from cosmosdbloader import CosmosDBLoader
+from itinerarybuilder import ItineraryBuilder
+import json
+
+cosmosdb_loader = CosmosDBLoader(DB_Name='travel')
+
+#read in ship data
+with open('documents/ships.json') as file:
+ ship_json = json.load(file)
+
+#read in destination data
+with open('documents/destinations.json') as file:
+ destinations_json = json.load(file)
+
+builder = ItineraryBuilder(ship_json['ships'],destinations_json['destinations'])
+
+# Create five itinerary packages
+itinerary = builder.build(5)
+
+# Save itinerary packages to Cosmos DB
+cosmosdb_loader.load_data(itinerary,'itinerary')
+
+# Save destinations to Cosmos DB
+cosmosdb_loader.load_data(destinations_json['destinations'],'destinations')
+
+# Save ships to Cosmos DB, create vector store
+collection = cosmosdb_loader.load_vectors(ship_json['ships'],'ships')
+
+# Add text search index to ship name
+collection.create_index([('name', 'text')])
+```
+
+Load the documents and vectors, and create the indexes, by executing the following command from the **loader** directory:
+```bash
+ python main.py
+```
+
+Output:
+
+```markdown
+--build itinerary--
+--load itinerary--
+--load destinations--
+--load vectors ships--
+```
+
+### Build travel AI agent with Python FastAPI
+
+The AI travel agent is hosted in a backend API using Python FastAPI, facilitating integration with the frontend user interface. The API project processes agent requests by [grounding](https://techcommunity.microsoft.com/t5/fasttrack-for-azure/grounding-llms/ba-p/3843857) the LLM prompts against the data layer, specifically the vectors and documents in Azure Cosmos DB. Furthermore, the agent makes use of various tools, particularly the Python functions provided at the API service layer. This article focuses on the code necessary for AI agents within the API project.
+
+The API project in the GitHub repository is structured as follows:
+
+- Model – data modeling components using Pydantic models.
+- Web – web layer components responsible for routing requests and managing communication.
+- Service – service layer components responsible for primary business logic and interaction with the data layer; LangChain agent and agent tools.
+- Data – data layer components responsible for interacting with Azure Cosmos DB for MongoDB document storage and vector search.
+
+### Set up the environment for the API
+
+Python 3.11.4 was used to develop and test the API.
+
+Set up your Python virtual environment in the **api** directory:
+```bash
+ python -m venv venv
+```
+
+Activate your environment and install dependencies using the requirements file in the **api** directory:
+```bash
+ venv\Scripts\activate
+ python -m pip install -r requirements.txt
+```
+
+Create a file named **.env** in the **api** directory to store your environment variables:
+```env
+ OPENAI_API_KEY="**Your Open AI Key**"
+ MONGO_CONNECTION_STRING="mongodb+srv:**your connection string from Azure Cosmos DB**"
+```
+
+With the environment configured and variables set, you're ready to start the FastAPI server. Run the following command from the **api** directory:
+```bash
+ python app.py
+```
+
+By default, the FastAPI server launches on the loopback address 127.0.0.1, port 8000. You can access the Swagger documentation at the following localhost address: `http://127.0.0.1:8000/docs`.
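+
+The **app.py** entry point isn't listed in this article. As a hedged sketch of the pattern such an entry point typically follows, the application can mount the session and agent routers and start Uvicorn when run directly; the router module names below are assumptions based on the project structure described earlier.
+
+```python
+# Hypothetical sketch of app.py; router module names are assumptions.
+import uvicorn
+from fastapi import FastAPI
+from web import session, agent
+
+app = FastAPI(title="Travel AI Agent API")
+app.include_router(session.router, prefix="/session")
+app.include_router(agent.router, prefix="/agent")
+
+if __name__ == "__main__":
+    # Serve on the loopback address and port described above.
+    uvicorn.run(app, host="127.0.0.1", port=8000)
+```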
+
+### Use a session for the AI agent memory
+
+The travel agent needs the capability to reference previously provided information within the ongoing conversation. This ability is commonly known as "memory" in the context of LLMs, and it shouldn't be confused with the concept of computer memory (like volatile, non-volatile, and persistent memory).
+
+To achieve this, we use the chat message history, which is stored in our Azure Cosmos DB instance. Each chat session's history is stored by using a session ID, ensuring that only messages from the current conversation session are accessible. This need is the reason behind the 'Get Session' method in our API. It's a placeholder method for managing web sessions that illustrates the use of chat message history.
+
+Select **Try it out** for `/session/`.
+
+```json
+{
+ "session_id": "0505a645526f4d68a3603ef01efaab19"
+}
+```
+
+For the AI Agent, we only need to simulate a session. Thus, the stubbed-out method merely returns a generated session ID for tracking message history. In a practical implementation, this session would be stored in Azure Cosmos DB and potentially in React JS localStorage.
+
+*web/session.py*
+```python
+ @router.get("/")
+ def get_session():
+ return {'session_id':str(uuid.uuid4().hex)}
+```
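+
+In such a practical implementation, persisting the session server-side might look like the following sketch, which stores the generated session ID in a Cosmos DB collection through pymongo; the 'sessions' collection name and document shape are assumptions.
+
+```python
+# Hypothetical sketch; the 'sessions' collection name is an assumption.
+import uuid
+from os import environ
+from pymongo import MongoClient
+
+sessions = MongoClient(environ["MONGO_CONNECTION_STRING"])["travel"]["sessions"]
+
+def create_session() -> dict:
+    session_id = uuid.uuid4().hex
+    sessions.insert_one({"session_id": session_id})  # persist for later lookup
+    return {"session_id": session_id}
+```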
+
+### Start a conversation with the AI travel agent
+
+Use the session ID obtained in the previous step to initiate a new dialogue with our AI agent and validate its functionality. For the test, submit the following phrase: "I want to take a relaxing vacation."
+
+Select **Try it out** for `/agent/agent_chat`.
+
+Example parameters:
+```json
+{
+ "input": "I want to take a relaxing vacation.",
+ "session_id": "0505a645526f4d68a3603ef01efaab19"
+}
+```
+
+The initial execution results in a recommendation for the Tranquil Breeze Cruise and the Fantasy Seas Adventure Cruise, because they're anticipated to be the most 'relaxing' cruises available through the vector search. These documents have the highest scores for the ```similarity_search_with_score``` call in the data layer of our API, ```data.mongodb.travel.similarity_search()```.
+
+The similarity search scores are displayed as output from the API for debugging purposes.
+
+Output when calling ```data.mongodb.travel.similarity_search()```
+
+```output
+0.8394561085977978
+0.8086545112328692
+2
+```
+
+> [!TIP]
+> If documents aren't returned from the vector search, modify the ```similarity_search_with_score``` limit or the score filter value (```[doc for doc, score in docs if score >= .78]```) in ```data.mongodb.travel.similarity_search()``` as needed.
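+
+The data-layer helper itself isn't listed in this article. A minimal sketch of the technique, assuming an already initialized LangChain vector store backed by the **ships** collection, combines ```similarity_search_with_score``` with the score filter from the preceding tip:
+
+```python
+# Hypothetical sketch of data.mongodb.travel.similarity_search(); details are assumptions.
+def similarity_search(vector_store, query: str, k: int = 5, min_score: float = 0.78):
+    # Retrieve the top-k documents with their similarity scores.
+    docs = vector_store.similarity_search_with_score(query, k=k)
+
+    for _, score in docs:
+        print(score)  # scores printed for debugging, as in the output above
+
+    # Keep only documents above the score threshold.
+    filtered = [doc for doc, score in docs if score >= min_score]
+    print(len(filtered))
+    return filtered
+```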
+
+Calling the 'agent_chat' for the first time creates a new collection named 'history' in Azure Cosmos DB to store the conversation by session. This call enables the agent to access the stored chat message history as needed. Subsequent executions of 'agent_chat' with the same parameters produce varying results as it draws from memory.
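+
+The web-layer route that exposes 'agent_chat' isn't reproduced here. A minimal sketch, assuming a FastAPI router and a Pydantic request model (the names and shapes are assumptions), might look like the following:
+
+```python
+# Hypothetical sketch of web/agent.py; names and shapes are assumptions.
+from fastapi import APIRouter
+from pydantic import BaseModel
+from service import TravelAgent
+
+router = APIRouter()
+
+class AgentRequest(BaseModel):
+    input: str
+    session_id: str
+
+@router.post("/agent_chat")
+def agent_chat(request: AgentRequest):
+    # Delegate to the service layer, which invokes the LangChain agent.
+    return TravelAgent.agent_chat(request.input, request.session_id)
+```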
+
+### Walkthrough of AI agent
+
+When integrating the AI agent into the API, the web layer components initiate all requests, followed by the search service, and finally the data components. In our case, we use a MongoDB data search that connects to Azure Cosmos DB. The layers facilitate the exchange of model components, with the AI agent and AI agent tool code residing in the service layer. This approach enables the seamless interchange of data sources and extends the AI agent's capabilities with additional, more intricate functionalities or 'tools'.
+
+#### Service layer
+
+The service layer forms the cornerstone of our core business logic. In this scenario, the service layer hosts the LangChain agent code, integrating user prompts with Azure Cosmos DB data, conversation memory, and agent functions for our AI agent.
+
+The service layer employs a singleton pattern module for handling agent-related initializations in the **init.py** file.
+
+*service/init.py*
+```python
+from dotenv import load_dotenv
+from os import environ
+from langchain.globals import set_llm_cache
+from langchain_openai import ChatOpenAI
+from langchain_mongodb.chat_message_histories import MongoDBChatMessageHistory
+from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
+from langchain_core.runnables.history import RunnableWithMessageHistory
+from langchain.agents import AgentExecutor, create_openai_tools_agent
+from service import TravelAgentTools as agent_tools
+
+load_dotenv(override=True)
+
+chat: ChatOpenAI | None = None
+agent_with_chat_history: RunnableWithMessageHistory | None = None
+
+def LLM_init():
+ global chat,agent_with_chat_history
+ chat = ChatOpenAI(model_name="gpt-3.5-turbo-16k",temperature=0)
+ tools = [agent_tools.vacation_lookup, agent_tools.itinerary_lookup, agent_tools.book_cruise ]
+
+ prompt = ChatPromptTemplate.from_messages(
+ [
+ (
+ "system",
+ "You are a helpful and friendly travel assistant for a cruise company. Answer travel questions to the best of your ability providing only relevant information. In order to book a cruise you will need to capture the person's name.",
+ ),
+ MessagesPlaceholder(variable_name="chat_history"),
+ ("user", "Answer should be embedded in html tags. {input}"),
+ MessagesPlaceholder(variable_name="agent_scratchpad"),
+ ]
+ )
+
+ #Answer should be embedded in html tags. Only answer questions related to cruise travel, If you can not answer respond with \"I am here to assist with your travel questions.\".
+
+ agent = create_openai_tools_agent(chat, tools, prompt)
+ agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
+
+ agent_with_chat_history = RunnableWithMessageHistory(
+ agent_executor,
+ lambda session_id: MongoDBChatMessageHistory( database_name="travel",
+ collection_name="history",
+ connection_string=environ.get("MONGO_CONNECTION_STRING"),
+ session_id=session_id),
+ input_messages_key="input",
+ history_messages_key="chat_history",
+)
+
+LLM_init()
+```
+
+The **init.py** file begins by loading environment variables from a **.env** file by using the ```load_dotenv(override=True)``` method. Then, a global variable named ```agent_with_chat_history``` is instantiated for the agent, intended for use by our **TravelAgent.py**. The ```LLM_init()``` method is invoked during module initialization to configure the AI agent for conversation via the API web layer. The OpenAI chat object is instantiated with the GPT-3.5 model, using specific parameters such as model name and temperature. The chat object, tools list, and prompt template are combined to generate an ```AgentExecutor```, which operates as our AI travel agent. Lastly, the agent with history, ```agent_with_chat_history```, is established by using ```RunnableWithMessageHistory``` with chat message history (```MongoDBChatMessageHistory```), enabling it to maintain a complete conversation history via Azure Cosmos DB.
+
+#### Prompt
+
+The LLM prompt began with the simple statement "You are a helpful and friendly travel assistant for a cruise company." However, testing showed that more consistent results could be obtained by including the instruction "Answer travel questions to the best of your ability, providing only relevant information. To book a cruise, capturing the person's name is essential." The results are presented in HTML format to enhance the visual appeal within the web interface.
+
+#### Agent tools
+[Tools](#what-are-ai-agents) are interfaces that an agent can use to interact with the world, often done through function calling.
+
+When creating an agent, you must furnish it with a set of tools that it can use. The ```@tool``` decorator offers the most straightforward approach to defining a custom tool. By default, the decorator uses the function name as the tool name, although you can replace it by providing a string as the first argument. The decorator also uses the function's docstring as the tool's description, so a docstring must be provided.
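+
+For instance, a minimal hypothetical tool that overrides the default name and relies on its docstring for the description might look like this (the tool itself isn't part of the sample project):
+
+```python
+from langchain_core.tools import tool
+
+# Hypothetical example; "destination_weather" replaces the function name as the tool name.
+@tool("destination_weather")
+def weather_lookup(city: str) -> str:
+    """Look up typical weather for a destination city."""
+    return f"Weather information for {city} is not available in this sample."
+```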
+
+*service/TravelAgentTools.py*
+```python
+from langchain_core.tools import tool
+from langchain.docstore.document import Document
+from data.mongodb import travel
+from model.travel import Ship
+
+@tool
+def vacation_lookup(input: str) -> str:
+    """find information on vacations and trips"""
+    ships: list[Ship] = travel.similarity_search(input)
+    content = ""
+
+    for ship in ships:
+        amenities = '\n-'.join(ship.amenities)
+        content += f" Cruise ship {ship.name} description: {ship.description} with amenities {amenities} "
+
+    return content
+
+@tool
+def itinerary_lookup(ship_name: str) -> str:
+    """find ship itinerary, cruise packages and destinations by ship name"""
+    it = travel.itnerary_search(ship_name)
+    results = ""
+
+    for i in it:
+        rooms = '\n-'.join(i.Rooms)
+        schedule = '\n-'.join(i.Schedule)
+        results += f" Cruise Package {i.Name} room prices: {rooms} schedule: {schedule}"
+
+    return results
+
+@tool
+def book_cruise(package_name: str, passenger_name: str, room: str) -> str:
+    """book cruise using package name and passenger name and room"""
+    print(f"Package: {package_name} passenger: {passenger_name} room: {room}")
+
+    # The LLM defaults an empty name to John Doe
+    if passenger_name == "John Doe":
+        return "In order to book a cruise I need to know your name."
+    else:
+        if room == '':
+            return "which room would you like to book"
+        return "Cruise has been booked, ref number is 343242"
+```
+
+In the **TravelAgentTools.py** file, three specific tools are defined. The first tool, ```vacation_lookup```, conducts a vector search against Azure Cosmos DB, using ```similarity_search``` to retrieve relevant travel-related material. The second tool, ```itinerary_lookup```, retrieves cruise package details and schedules for a specified cruise ship. Lastly, ```book_cruise``` books a cruise package for a passenger. Specific instructions ("In order to book a cruise I need to know your name.") might be necessary to ensure that the passenger's name and room number are captured for booking the cruise package, even though such instructions are already included in the LLM prompt.
+
+#### AI agent
+
+The fundamental concept underlying agents is to utilize a language model for selecting a sequence of actions to execute.
+
+*service/TravelAgent.py*
+```python
+from .init import agent_with_chat_history
+from model.prompt import PromptResponse
+import time
+from dotenv import load_dotenv
+
+load_dotenv(override=True)
+
+def agent_chat(input: str, session_id: str) -> PromptResponse:
+
+ start_time = time.time()
+
+ results=agent_with_chat_history.invoke(
+ {"input": input},
+ config={"configurable": {"session_id": session_id}},
+ )
+
+ return PromptResponse(text=results["output"],ResponseSeconds=(time.time() - start_time))
+```
+
+The **TravelAgent.py** file is straightforward because ```agent_with_chat_history``` and its dependencies (tools, prompt, and LLM) are initialized and configured in the **init.py** file. Here, the agent is called with the input received from the user, along with the session ID for conversation memory. Afterwards, ```PromptResponse``` (model/prompt) is returned with the agent's output and response time.
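+
+The ```PromptResponse``` model isn't listed in this article. Based on how it's constructed above, a minimal Pydantic sketch (the field types are assumptions) might look like this:
+
+```python
+# Hypothetical sketch of model/prompt.py; field types are assumptions.
+from pydantic import BaseModel
+
+class PromptResponse(BaseModel):
+    text: str  # the agent's output
+    ResponseSeconds: float  # elapsed response time in seconds
+```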
+
+### Integrate AI agent with React JS user interface
+
+With the data loaded and our AI agent accessible through the API, we can now complete the solution by building a web user interface with React JS for our travel website. By harnessing the capabilities of React JS, we can illustrate the seamless integration of our AI agent into a travel site, enhancing the user experience with a conversational travel assistant for inquiries and bookings.
+
+#### Set up the environment for React JS
+
+Install Node.js and the dependencies before testing out the React interface.
+
+Run the following command from the **web** directory to perform a clean install of project dependencies. This step might take some time.
+```bash
+ npm ci
+```
+
+Next, create a file named **.env** within the **web** directory to store environment variables. Then, add the following line to the newly created **.env** file:
+
+```env
+REACT_APP_API_HOST=http://127.0.0.1:8000
+```
+
+Now, run the following command from the **web** directory to start the React web user interface:
+```bash
+ npm start
+```
+
+Running the previous command launches the React JS web application.
+
+#### Walkthrough of React JS Web interface
+
+The web project of the GitHub repository is a straightforward application to facilitate user interaction with our AI agent. The primary components required to converse with the agent are ```TravelAgent.js``` and ```ChatLayout.js```. The **Main.js** file serves as the central module or user landing page.
+
+#### Main
+
+The Main component serves as the central manager of the application, acting as the designated entry point for routing. Within the render function, it produces JSX code to delineate the main page layout. This layout encompasses placeholder elements for the application such as logos and links, a section housing the travel agent component (further details to come), and a footer containing a sample disclaimer regarding the application's nature.
+
+*main.js*
+```javascript
+import React, { Component } from 'react'
+import { Stack, Link, Paper } from '@mui/material'
+import TravelAgent from './TripPlanning/TravelAgent'
+
+import './Main.css'
+
+class Main extends Component {
+ constructor() {
+ super()
+
+ }
+
+ render() {
+ return (
+ <div className="Main">
+ <div className="Main-Header">
+ <Stack direction="row" spacing={5}>
+ <img src="/mainlogo.png" alt="Logo" height={'120px'} />
+ <Link
+ href="#"
+ sx={{ color: 'white', fontWeight: 'bold', fontSize: 18 }}
+ underline="hover"
+ >
+ Ships
+ </Link>
+ <Link
+ href="#"
+ sx={{ color: 'white', fontWeight: 'bold', fontSize: 18 }}
+ underline="hover"
+ >
+ Destinations
+ </Link>
+ </Stack>
+ </div>
+ <div className="Main-Body">
+ <div className="Main-Content">
+ <Paper elevation={3} sx={{p:1}} >
+ <Stack
+ direction="row"
+ justifyContent="space-evenly"
+ alignItems="center"
+ spacing={2}
+ >
+
+ <Link href="#">
+ <img
+ src={require('./images/destinations.png')} width={'400px'} />
+ </Link>
+ <TravelAgent ></TravelAgent>
+ <Link href="#">
+ <img
+ src={require('./images/ships.png')} width={'400px'} />
+ </Link>
+
+ </Stack>
+ </Paper>
+ </div>
+ </div>
+ <div className="Main-Footer">
+ <b>Disclaimer: Sample Application</b>
+ <br />
+ Please note that this sample application is provided for demonstration
+ purposes only and should not be used in production environments
+ without proper validation and testing.
+ </div>
+ </div>
+ )
+ }
+}
+
+export default Main
+```
+
+#### Travel agent
+
+The Travel Agent component has a straightforward purpose – capturing user inputs and displaying responses. It plays a key role in managing the integration with the backend AI agent, primarily by capturing sessions and forwarding user prompts to our FastAPI service. The resulting responses are stored in an array for display, facilitated by the Chat Layout component.
+
+*TripPlanning/TravelAgent.js*
+```javascript
+import React, { useState, useEffect } from 'react'
+import { Button, Box, Link, Stack, TextField } from '@mui/material'
+import SendIcon from '@mui/icons-material/Send'
+import { Dialog, DialogContent } from '@mui/material'
+import ChatLayout from './ChatLayout'
+import './TravelAgent.css'
+
+export default function TravelAgent() {
+ const [open, setOpen] = React.useState(false)
+ const [session, setSession] = useState('')
+ const [chatPrompt, setChatPrompt] = useState(
+ 'I want to take a relaxing vacation.',
+ )
+ const [message, setMessage] = useState([
+ {
+ message: 'Hello, how can I assist you today?',
+ direction: 'left',
+ bg: '#E7FAEC',
+ },
+ ])
+
+ const handlePrompt = (prompt) => {
+ setChatPrompt('')
+ setMessage((message) => [
+ ...message,
+ { message: prompt, direction: 'right', bg: '#E7F4FA' },
+ ])
+ console.log(session)
+ fetch(process.env.REACT_APP_API_HOST + '/agent/agent_chat', {
+ method: 'POST',
+ headers: {
+ 'Content-Type': 'application/json',
+ },
+ body: JSON.stringify({ input: prompt, session_id: session }),
+ })
+ .then((response) => response.json())
+ .then((res) => {
+ setMessage((message) => [
+ ...message,
+ { message: res.text, direction: 'left', bg: '#E7FAEC' },
+ ])
+ })
+ }
+
+ const handleSession = () => {
+ fetch(process.env.REACT_APP_API_HOST + '/session/')
+ .then((response) => response.json())
+ .then((res) => {
+ setSession(res.session_id)
+ })
+ }
+
+ const handleClickOpen = () => {
+ setOpen(true)
+ }
+
+ const handleClose = (value) => {
+ setOpen(false)
+ }
+
+ useEffect(() => {
+ if (session === '') handleSession()
+ }, [])
+
+ return (
+ <Box>
+ <Dialog onClose={handleClose} open={open} maxWidth="md" fullWidth>
+ <DialogContent>
+ <Stack>
+ <Box sx={{ height: '500px' }}>
+ <div className="AgentArea">
+ <ChatLayout messages={message} />
+ </div>
+ </Box>
+ <Stack direction="row" spacing={0}>
+ <TextField
+ sx={{ width: '80%' }}
+ variant="outlined"
+ label="Message"
+ helperText="Chat with AI Travel Agent"
+ defaultValue="I want to take a relaxing vacation."
+ value={chatPrompt}
+ onChange={(event) => setChatPrompt(event.target.value)}
+ ></TextField>
+ <Button
+ variant="contained"
+ endIcon={<SendIcon />}
+ sx={{ mb: 3, ml: 3, mt: 1 }}
+ onClick={(event) => handlePrompt(chatPrompt)}
+ >
+ Submit
+ </Button>
+ </Stack>
+ </Stack>
+ </DialogContent>
+ </Dialog>
+ <Link href="#" onClick={() => handleClickOpen()}>
+ <img src={require('.././images/planvoyage.png')} width={'400px'} />
+ </Link>
+ </Box>
+ )
+}
+```
+
+Click on "Effortlessly plan your voyage" to launch the travel assistant.
+
+#### Chat layout
+
+The Chat Layout component, as indicated by its name, oversees the arrangement of the chat. It systematically processes the chat messages and implements the designated formatting specified in the message JSON object.
+
+*TripPlanning/ChatLayout.js*
+```javascript
+import React from 'react'
+import { Box, Stack } from '@mui/material'
+import parse from 'html-react-parser'
+import './ChatLayout.css'
+
+export default function ChatLayout(messages) {
+ return (
+ <Stack direction="column" spacing="1">
+ {messages.messages.map((obj, i) => (
+ <div className="bubbleContainer" key={i}>
+ <Box
+ className="bubble"
+ sx={{ float: obj.direction, fontSize: '10pt', background: obj.bg }}
+ >
+ <div>{parse(obj.message)}</div>
+ </Box>
+ </div>
+ ))}
+ </Stack>
+ )
+}
+```
+
+User prompts are on the right side and colored blue, while the travel AI agent responses are on the left side and colored green. The HTML-formatted responses are accounted for in the conversation.
+
+When your AI agent is ready to go into production, you can use semantic caching to improve query performance by 80% and reduce LLM inference and API call costs. For implementation details, see this blog post on [semantic caching](https://stochasticcoder.com/2024/03/22/improve-llm-performance-using-semantic-cache-with-cosmos-db/).
+
+> [!NOTE]
+> If you would like to contribute to this article, feel free to click on the pencil button on the top right corner of the article. If you have any specific questions or comments on this article, you may reach out to cosmosdbgenai@microsoft.com
+
+### Next steps
+
+[30-day Free Trial without Azure subscription](https://azure.microsoft.com/try/cosmosdb/)
+
+[90-day Free Trial and up to $6,000 in throughput credits with Azure AI Advantage](ai-advantage.md)
+
+> [!div class="nextstepaction"]
+> [Use the Azure Cosmos DB lifetime free tier](free-tier.md)
cosmos-db Manage Data Dotnet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/manage-data-dotnet-core.md
Azure Cosmos DB is Microsoft's globally distributed multi-model database service
## Prerequisites In addition, you need: * Latest [!INCLUDE [cosmos-db-visual-studio](../includes/cosmos-db-visual-studio.md)]
cosmos-db Manage Data Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/manage-data-dotnet.md
Azure Cosmos DB is Microsoft's globally distributed multi-model database service
## Prerequisites In addition, you need: * Latest [!INCLUDE [cosmos-db-visual-studio](../includes/cosmos-db-visual-studio.md)]
cosmos-db Manage Data Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/manage-data-nodejs.md
In this quickstart, you create an Azure Cosmos DB for Apache Cassandra account,
## Prerequisites In addition, you need:
cosmos-db Postgres Migrate Cosmos Db Kafka https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/postgres-migrate-cosmos-db-kafka.md
Data in PostgreSQL table will be pushed to Apache Kafka using the [Debezium Post
### Set up PostgreSQL database if you haven't already. This could be an existing on-premises database or you could [download and install one](https://www.postgresql.org/download/) on your local machine. It's also possible to use a [Docker container](https://hub.docker.com/_/postgres). To start a container:
cosmos-db Consistency Levels https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/consistency-levels.md
The following graphic illustrates the strong consistency with musical notes. Aft
:::image type="content" source="media/consistency-levels/strong-consistency.gif" alt-text="Animation of strong consistency level using musical notes that are always synced.":::
+#### Dynamic quorum
+
+Under normal circumstances, for an account with strong consistency, a write is considered committed when all regions acknowledge that the record has been replicated to it. However, for accounts with 3 regions or more (including the write region), the system can "downshift" the quorum of regions to a global majority in cases where some regions are either unresponsive or responding slowly. At that point, unresponsive regions are taken out of the quorum set of regions in order to preserve strong consistency. They will only be added back once they are consistent with other regions and are performing as expected. The number of regions that can potentially be taken out of the quorum set will depend on the total number of regions. For example, in a 3 or 4 region account, the majority is 2 or 3 regions respectively, so only 1 region can be removed in either case. For a 5 region account, the majority is 3, so up to 2 unresponsive regions can be removed. This capability is known as "dynamic quorum" and can improve both write availability and replication latency for accounts with 3 or more regions.
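+
+As a quick illustration of the arithmetic, the global majority for an account with N regions is N // 2 + 1, and the number of regions that can be removed from the quorum set is N minus that majority:
+
+```python
+def dynamic_quorum(total_regions: int) -> tuple[int, int]:
+    """Illustrative arithmetic only; not an Azure API."""
+    majority = total_regions // 2 + 1  # global majority of regions
+    removable = total_regions - majority  # regions that can leave the quorum set
+    return majority, removable
+
+for n in (3, 4, 5):
+    print(n, dynamic_quorum(n))  # 3 -> (2, 1), 4 -> (3, 1), 5 -> (3, 2)
+```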
+
+> [!NOTE]
+> When regions are removed from the quorum set as part of dynamic quorum, those regions are no longer able to serve reads until re-added into the quorum.
+ ### Bounded staleness consistency For single-region write accounts with two or more regions, data is replicated from the primary region to all secondary (read-only) regions. For multi-region write accounts with two or more regions, data is replicated from the region it was originally written in to all other writable regions. In both scenarios, while not common, there may occasionally be a replication lag from one region to another.
cosmos-db Quickstart Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gremlin/quickstart-console.md
In this quickstart, you use the Gremlin console to connect to a newly created Az
- Don't have Docker installed? Try this quickstart in [GitHub Codespaces](https://codespaces.new/github/codespaces-blank?quickstart=1). - [Azure Command-Line Interface (CLI)](/cli/azure/) ## Create an API for Gremlin account and relevant resources
cosmos-db Quickstart Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gremlin/quickstart-dotnet.md
In this quickstart, you use the `Gremlin.Net` library to connect to a newly crea
- Don't have .NET installed? Try this quickstart in [GitHub Codespaces](https://codespaces.new/github/codespaces-blank?quickstart=1). - [Azure Command-Line Interface (CLI)](/cli/azure/) ## Setting up
cosmos-db Quickstart Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gremlin/quickstart-nodejs.md
In this quickstart, you use the `gremlin` library to connect to a newly created
- Don't have Node.js installed? Try this quickstart in [GitHub Codespaces](https://codespaces.new/github/codespaces-blank?quickstart=1). - [Azure Command-Line Interface (CLI)](/cli/azure/) ## Setting up
cosmos-db Quickstart Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gremlin/quickstart-python.md
In this quickstart, you use the `gremlinpython` library to connect to a newly cr
- Don't have Python installed? Try this quickstart in [GitHub Codespaces](https://codespaces.new/github/codespaces-blank?quickstart=1). - [Azure Command-Line Interface (CLI)](/cli/azure/) ## Setting up
cosmos-db How To Configure Vnet Service Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-configure-vnet-service-endpoint.md
To ensure that you have access to Azure Cosmos DB metrics from the portal, you n
## <a id="configure-using-powershell"></a>Configure a service endpoint by using Azure PowerShell Use the following steps to configure a service endpoint to an Azure Cosmos DB account by using Azure PowerShell:
cosmos-db How To Setup Cross Tenant Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-setup-cross-tenant-customer-managed-keys.md
Data stored in your Azure Cosmos DB account is automatically and seamlessly encr
This article walks through how to configure encryption with customer-managed keys at the time that you create an Azure Cosmos DB account. In this example cross-tenant scenario, the Azure Cosmos DB account resides in a tenant managed by an Independent Software Vendor (ISV) referred to as the service provider. The key used for encryption of the Azure Cosmos DB account resides in a key vault in a different tenant that is managed by the customer. ## Create a new Azure Cosmos DB account encrypted with a key from a different tenant
cosmos-db Connect Using Mongoose https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/connect-using-mongoose.md
Azure Cosmos DB is Microsoft's globally distributed multi-model database service
## Prerequisites [!INCLUDE [cosmos-db-emulator-docdb-api](../includes/cosmos-db-emulator-docdb-api.md)]
cosmos-db Tutorial Develop Nodejs Part 4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/tutorial-develop-nodejs-part-4.md
Before starting this part of the tutorial, ensure you've completed the steps in
In this tutorial section, you can either use the Azure Cloud Shell (in your internet browser) or [the Azure CLI](/cli/azure/install-azure-cli) installed locally. [!INCLUDE [Log in to Azure](../includes/login-to-azure.md)] > [!TIP] > This tutorial walks you through the steps to build the application step-by-step. If you want to download the finished project, you can get the completed application from the [angular-cosmosdb repo](https://github.com/Azure-Samples/angular-cosmosdb) on GitHub.
cosmos-db How To Create Wildcard Indexes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/how-to-create-wildcard-indexes.md
+
+ Title: Wildcard indexes in Azure Cosmos DB for MongoDB vCore
+
+description: Sample to create wildcard indexes in Azure Cosmos DB for MongoDB vCore.
+ Last updated : 6/25/2024
+# Create wildcard indexes in Azure Cosmos DB for MongoDB vCore
+
+While most workloads have a predictable set of fields used in query filters and predicates, ad hoc query patterns might use filters on any field in the JSON document structure.
+
+Wildcard indexing can be helpful in the following scenarios:
+- Queries that filter on any field in the document, where indexing all fields through a single command is easier than indexing each field individually.
+- Queries that filter on most fields in the document, where indexing all but a few fields through a single command is easier than indexing most fields individually.
+
+This sample describes a simple workaround to minimize the effort needed to create individual indexes until wildcard indexing is generally available in Azure Cosmos DB for MongoDB vCore.
+
+## Solution
+Consider the JSON document below:
+```json
+{
+ "firstName": "Steve",
+ "lastName": "Smith",
+ "companyName": "Microsoft",
+ "division": "Azure",
+ "subDivision": "Data & AI",
+ "timeInOrgInYears": 7,
+ "roles": [
+ {
+ "teamName" : "Windows",
+ "teamSubName" "Operating Systems",
+ "timeInTeamInYears": 3
+ },
+ {
+ "teamName" : "Devices",
+ "teamSubName" "Surface",
+ "timeInTeamInYears": 2
+ },
+ {
+ "teamName" : "Devices",
+ "teamSubName" "Surface",
+ "timeInTeamInYears": 2
+ }
+ ]
+}
+```
+
+The following indexes are created under the covers when wildcard indexing is used:
+- db.collection.createIndex({"firstName": 1})
+- db.collection.createIndex({"lastName": 1})
+- db.collection.createIndex({"companyName": 1})
+- db.collection.createIndex({"division": 1})
+- db.collection.createIndex({"subDivision": 1})
+- db.collection.createIndex({"timeInOrgInYears": 1})
+- db.collection.createIndex({"roles.teamName": 1})
+- db.collection.createIndex({"roles.teamSubName": 1})
+- db.collection.createIndex({"roles.timeInTeamInYears": 1})
+
+While this sample document only requires nine fields to be explicitly indexed, indexing fields individually in larger documents with hundreds or thousands of fields can get tedious and error prone.
+
+The jar file detailed in the rest of this document makes indexing fields in larger documents simpler. The jar takes a sample JSON document as input, parses the document and executes createIndex commands for each field without the need for user intervention.
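+
+The core idea, recursively walking a sample document and issuing one createIndex call per field path, can also be sketched in a few lines of Python. This sketch illustrates the technique rather than the jar's actual code, and it assumes a pymongo collection handle:
+
+```python
+# Illustrative Python sketch only; the sample jar implements this in Java.
+import json
+from pymongo import MongoClient
+
+def field_paths(value, prefix=""):
+    """Yield every indexable field path in a JSON document."""
+    if isinstance(value, dict):
+        for key, child in value.items():
+            yield from field_paths(child, f"{prefix}{key}.")
+    elif isinstance(value, list):
+        for item in value:
+            yield from field_paths(item, prefix)
+    else:
+        yield prefix.rstrip(".")
+
+with open("sampleEmployee.json") as f:
+    doc = json.load(f)
+
+collection = MongoClient("<connection string>")["cosmicworks"]["employee"]
+for path in sorted(set(field_paths(doc))):
+    collection.create_index([(path, 1)])
+```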
+
+## Prerequisites
+
+### Java 21
+
+Install Java 21 (OpenJDK) on the machine that runs the indexer by using the following commands:
+
+```bash
+# Install OpenJDK 21
+sudo apt update
+sudo apt install openjdk-21-jdk
+```
+
+## Sample jar to create individual indexes for all fields
+
+Clone the repository containing the Java sample to iterate through each field in the JSON document's structure and issue createIndex operations for each field in the document.
+
+```bash
+git clone https://github.com/Azure-Samples/cosmosdb-mongodb-vcore-wildcard-indexing.git
+```
+
+The cloned repository doesn't need to be built if there are no changes to make to the solution. The built runnable jar named **azure-cosmosdb-mongo-data-indexer-1.0-SNAPSHOT.jar** is already included in the **runnableJar/** folder. Execute the jar by specifying the following required parameters:
+- Azure Cosmos DB for MongoDB vCore cluster connection string with the username and password used when the cluster was provisioned
+- The Azure Cosmos DB for MongoDB vCore database
+- The collection to be indexed
+- The location of the json file with the document structure for the collection. This document is parsed by the jar file to extract every field and issue individual createIndex operations.
+
+```bash
+java -jar azure-cosmosdb-mongo-data-indexer-1.0-SNAPSHOT.jar "mongodb+srv://<user>:<password>@abinav-test-benchmarking.global.mongocluster.cosmos.azure.com/?tls=true&authMechanism=SCRAM-SHA-256&retrywrites=false&maxIdleTimeMS=120000" cosmicworks employee sampleEmployee.json
+```
+
+## Track the status of a createIndex operation
+The jar file is designed to not wait on a response from each createIndex operation. The indexes are created asynchronously on the server and the progress of the index build operation on the cluster can be tracked.
+
+Consider this sample to track indexing progress on the 'cosmicworks' database.
+```javascript
+use cosmicworks;
+db.currentOp()
+```
+
+When a createIndex operation is in progress, the response looks like:
+```json
+{
+ "inprog": [
+ {
+ "shard": "defaultShard",
+ "active": true,
+ "type": "op",
+ "opid": "30000451493:1719209762286363",
+ "op_prefix": 30000451493,
+ "currentOpTime": "2024-06-24T06:16:02.000Z",
+ "secs_running": 0,
+ "command": { "aggregate": "" },
+ "op": "command",
+ "waitingForLock": false
+ },
+ {
+ "shard": "defaultShard",
+ "active": true,
+ "type": "op",
+ "opid": "30000451876:1719209638351743",
+ "op_prefix": 30000451876,
+ "currentOpTime": "2024-06-24T06:13:58.000Z",
+ "secs_running": 124,
+ "command": { "createIndexes": "" },
+ "op": "workerCommand",
+ "waitingForLock": false,
+ "progress": {},
+ "msg": ""
+ }
+ ],
+ "ok": 1
+}
+```
+
+## Related content
+
+For the full sample, see the [cosmosdb-mongodb-vcore-wildcard-indexing repository](https://github.com/Azure-Samples/cosmosdb-mongodb-vcore-wildcard-indexing).
+
+Check out [indexing best practices](how-to-create-indexes.md), which details best practices for indexing on Azure Cosmos DB for MongoDB vCore.
cosmos-db Monitor Resource Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/monitor-resource-logs.md
Title: Monitor data by using Azure Diagnostic settings
+ Title: Monitor data using diagnostic settings
description: Learn how to use Azure diagnostic settings to monitor the performance and availability of data stored in Azure Cosmos DB---++ Previously updated : 04/26/2023 Last updated : 06/27/2024
+#Customer Intent: As an operations user, I want to monitor metrics using Azure Monitor, so that I can use a Log Analytics workspace to perform complex analysis.
-# Monitor Azure Cosmos DB data by using diagnostic settings in Azure
+# Monitor Azure Cosmos DB data using Azure Monitor Log Analytics diagnostic settings
[!INCLUDE[NoSQL, MongoDB, Cassandra, Gremlin, Table](includes/appliesto-nosql-mongodb-cassandra-gremlin-table.md)]
-Diagnostic settings in Azure are used to collect resource logs. Resources emit Azure resource Logs and provide rich, frequent data about the operation of that resource. These logs are captured per request and they're also referred to as "data plane logs." Some examples of the data plane operations include delete, insert, and readFeed. The content of these logs varies by resource type.
+Diagnostic settings in Azure are used to collect resource logs. Resources emit Azure resource Logs and provide rich, frequent data about the operation of that resource. These logs are captured per request and are referred to as "data plane logs." Some examples of the data plane operations include delete, insert, and readFeed. The content of these logs varies by resource type.
Platform metrics and the Activity logs are collected automatically, whereas you must create a diagnostic setting to collect resource logs or forward them outside of Azure Monitor. You can turn on diagnostic setting for Azure Cosmos DB accounts and send resource logs to the following sources: -- Log Analytics workspaces
+- Azure Monitor Log Analytics workspaces
- Data sent to Log Analytics can be written into **Azure Diagnostics (legacy)** or **Resource-specific (preview)** tables - Event hub - Storage Account
Platform metrics and the Activity logs are collected automatically, whereas you
- If you have an Azure subscription, [create a new account](nosql/how-to-create-account.md?tabs=azure-portal). - If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. - Alternatively, you can [try Azure Cosmos DB free](try-free.md) before you commit.
+- An existing Azure Monitor Log Analytics workspace.
## Create diagnostic settings
Here, we walk through the process of creating diagnostic settings for your accou
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Navigate to your Azure Cosmos DB account. Open the **Diagnostic settings** pane under the **Monitoring section** and then select the **Add diagnostic setting** option.
+1. Navigate to your existing Azure Cosmos DB account.
+1. In the **Monitoring** section of the resource menu, select **Diagnostic settings**. Then, select the **Add diagnostic setting** option.
- :::image type="content" source="media/monitor/diagnostics-settings-selection.png" lightbox="media/monitor/diagnostics-settings-selection.png" alt-text="Sreenshot of the diagnostics selection page.":::
+ :::image type="content" source="media/monitor-resource-logs/add-diagnostic-setting.png" lightbox="media/monitor-resource-logs/add-diagnostic-setting.png" alt-text="Screenshot of the list of diagnostic settings with options to create new ones or edit existing ones.":::
> [!IMPORTANT] > You might see a prompt to "enable full-text query \[...\] for more detailed logging" if the **full-text query** feature is not enabled in your account. You can safely ignore this warning if you do not wish to enable this feature. For more information, see [enable full-text query](monitor-resource-logs.md#enable-full-text-query-for-logging-query-text).
-1. In the **Diagnostic settings** pane, fill the form with your preferred categories. Included here's a list of log categories.
+1. In the **Diagnostic settings** pane, name the setting **example-setting** and then select the **QueryRuntimeStatistics** category. Send the logs to a **Log Analytics Workspace** by selecting your existing workspace. Finally, select **Resource specific** as the destination option.
- | Category | API | Definition | Key Properties |
- | | | | |
- | **DataPlaneRequests** | Recommended for API for NoSQL | Logs back-end requests as data plane operations, which are requests executed to create, update, delete or retrieve data within the account. | `Requestcharge`, `statusCode`, `clientIPaddress`, `partitionID`, `resourceTokenPermissionId` `resourceTokenPermissionMode` |
- | **MongoRequests** | API for MongoDB | Logs user-initiated requests from the front end to serve requests to Azure Cosmos DB for MongoDB. When you enable this category, make sure to disable DataPlaneRequests. | `Requestcharge`, `opCode`, `retryCount`, `piiCommandText` |
- | **CassandraRequests** | API for Apache Cassandra | Logs user-initiated requests from the front end to serve requests to Azure Cosmos DB for Cassandra. | `operationName`, `requestCharge`, `piiCommandText` |
- | **GremlinRequests** | API for Apache Gremlin | Logs user-initiated requests from the front end to serve requests to Azure Cosmos DB for Gremlin. | `operationName`, `requestCharge`, `piiCommandText`, `retriedDueToRateLimiting` |
- | **QueryRuntimeStatistics** | API for NoSQL | This table details query operations executed against an API for NoSQL account. By default, the query text and its parameters are obfuscated to avoid logging persona l data with full text query logging available by request. | `databasename`, `partitionkeyrangeid`, `querytext` |
- | **PartitionKeyStatistics** | All APIs | Logs the statistics of logical partition keys by representing the estimated storage size (KB) of the partition keys. This table is useful when troubleshooting storage skews. This PartitionKeyStatistics log is only emitted if the following conditions are true: 1. At least 1% of the documents in the physical partition have same logical partition key. 2. Out of all the keys in the physical partition, the PartitionKeyStatistics log captures the top three keys with largest storage size. </li></ul> If the previous conditions aren't met, the partition key statistics data isn't available. It's okay if the above conditions aren't met for your account, which typically indicates you have no logical partition storage skew. **Note**: The estimated size of the partition keys is calculated using a sampling approach that assumes the documents in the physical partition are roughly the same size. If the document sizes aren't uniform in the physical partition, the estimated partition key size might not be accurate. | `subscriptionId`, `regionName`, `partitionKey`, `sizeKB` |
- | **PartitionKeyRUConsumption** | API for NoSQL or API for Apache Gremlin | Logs the aggregated per-second RU/s consumption of partition keys. This table is useful for troubleshooting hot partitions. Currently, Azure Cosmos DB reports partition keys for API for NoSQL accounts only and for point read/write, query, and stored procedure operations. | `subscriptionId`, `regionName`, `partitionKey`, `requestCharge`, `partitionKeyRangeId` |
- | **ControlPlaneRequests** | All APIs | Logs details on control plane operations, which include, creating an account, adding or removing a region, updating account replication settings etc. | `operationName`, `httpstatusCode`, `httpMethod`, `region` |
- | **TableApiRequests** | API for Table | Logs user-initiated requests from the front end to serve requests to Azure Cosmos DB for Table.| `operationName`, `requestCharge`, `piiCommandText` |
-
-1. Once you select your **Categories details**, then send your Logs to your preferred destination. If you're sending Logs to a **Log Analytics Workspace**, make sure to select **Resource specific** as the Destination table.
-
- :::image type="content" source="media/monitor/diagnostics-resource-specific.png" alt-text="Screenshot of the option to enable resource-specific diagnostics.":::
+ :::image type="content" source="media/monitor-resource-logs/configure-diagnostic-setting.png" alt-text="Screenshot of the various options to configure a diagnostic setting.":::
### [Azure CLI](#tab/azure-cli)
-Use the [az monitor diagnostic-settings create](/cli/azure/monitor/diagnostic-settings#az-monitor-diagnostic-settings-create) command to create a diagnostic setting with the Azure CLI. See the documentation for this command for descriptions of its parameters.
-
-> [!NOTE]
-> If you are using API for NoSQL, we recommend setting the **export-to-resource-specific** property to **true**.
-
-1. Create shell variables for `subscriptionId`, `diagnosticSettingName`, `workspaceName` and