Updates from: 06/28/2024 01:20:31
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Manage User Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/manage-user-data.md
This article discusses how you can manage the user data in Azure Active Directory B2C (Azure AD B2C) by using the operations that are provided by the [Microsoft Graph API](/graph/use-the-api). Managing user data includes deleting or exporting data from audit logs. ## Delete user data
active-directory-b2c Quickstart Native App Desktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/quickstart-native-app-desktop.md
Azure Active Directory B2C (Azure AD B2C) provides cloud identity management to keep your application, business, and customers protected. Azure AD B2C enables your applications to authenticate to social accounts and enterprise accounts using open standard protocols. In this quickstart, you use a Windows Presentation Foundation (WPF) desktop application to sign in using a social identity provider and call an Azure AD B2C protected web API. ## Prerequisites
advisor Advisor Alerts Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-alerts-arm.md
Last updated 06/29/2020
This article shows you how to set up an alert for new recommendations from Azure Advisor using an Azure Resource Manager template (ARM template). Whenever Azure Advisor detects a new recommendation for one of your resources, an event is stored in [Azure Activity log](../azure-monitor/essentials/platform-logs-overview.md). You can set up alerts for these events from Azure Advisor using a recommendation-specific alerts creation experience. You can select a subscription and optionally a resource group to specify the resources that you want to receive alerts on.
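The article builds the alert from a full ARM template; as a rough Azure CLI equivalent (a sketch only — the resource group, alert name, and action group here are placeholder assumptions, not values from the article), the same Activity log alert looks like this:

```bash
# Sketch: alert on new Azure Advisor recommendation events in the Activity log.
# "advisor-alerts-rg", "new-advisor-recommendation", and "advisor-ag" are placeholder names.
az monitor activity-log alert create \
  --name new-advisor-recommendation \
  --resource-group advisor-alerts-rg \
  --scope "/subscriptions/<subscription-id>" \
  --condition "category=Recommendation and operationName=Microsoft.Advisor/recommendations/available/action" \
  --action-group advisor-ag
```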
advisor Advisor Alerts Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-alerts-bicep.md
Last updated 04/26/2022
This article shows you how to set up an alert for new recommendations from Azure Advisor using Bicep. Whenever Azure Advisor detects a new recommendation for one of your resources, an event is stored in [Azure Activity log](../azure-monitor/essentials/platform-logs-overview.md). You can set up alerts for these events from Azure Advisor using a recommendation-specific alerts creation experience. You can select a subscription and optionally select a resource group to specify the resources that you want to receive alerts on.
advisor Advisor Reference Performance Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-reference-performance-recommendations.md
description: Full list of available performance recommendations in Advisor.
Previously updated : 3/22/2024 Last updated : 6/24/2024 # Performance recommendations
With the new Ev5 compute hardware, you can boost workload performance by 30% wit
Learn more about [Azure Database for MySQL flexible server - OrcasMeruMySqlComputeSeriesUpgradeEv5 (Boost your workload performance by 30% with the new Ev5 compute hardware)](https://techcommunity.microsoft.com/t5/azure-database-for-mysql-blog/boost-azure-mysql-business-critical-flexible-server-performance/ba-p/3603698).
+### Increase the storage limit for Hyperscale (Citus) server group
-### Scale the storage limit for PostgreSQL server
-
-Our system shows that the server might be constrained because it is approaching limits for the currently provisioned storage values. Approaching the storage limits might result in degraded performance or in the server being moved to read-only mode. To ensure continued performance, we recommend increasing the provisioned storage amount or turning ON the "Auto-Growth" feature for automatic storage increases
-
-Learn more about [PostgreSQL server - OrcasPostgreSqlStorageLimit (Scale the storage limit for PostgreSQL server)](https://aka.ms/postgresqlstoragelimits).
-
-### Scale the PostgreSQL server to higher SKU
-
-Our system shows that the server might be unable to support the connection requests because of the maximum supported connections for the given SKU, which might result in a large number of failed connections requests adversely affecting performance. To improve performance, we recommend moving to higher memory SKU by increasing vCore or switching to Memory-Optimized SKUs.
-
-Learn more about [PostgreSQL server - OrcasPostgreSqlConcurrentConnection (Scale the PostgreSQL server to higher SKU)](https://aka.ms/postgresqlconnectionlimits).
-
-### Move your PostgreSQL server to Memory Optimized SKU
-
-Our system shows that there is high churn in the buffer pool for this server which can result in slower query performance and increased IOPS. To improve performance, review your workload queries to identify opportunities to minimize memory consumed. If no such opportunity found, then we recommend moving to higher SKU with more memory or increase storage size to get more IOPS.
-
-Learn more about [PostgreSQL server - OrcasPostgreSqlMemoryCache (Move your PostgreSQL server to Memory Optimized SKU)](https://aka.ms/postgresqlpricing).
-
-### Add a PostgreSQL Read Replica server
-
-Our system shows that you might have a read intensive workload running, which results in resource contention for this server. Resource contention can lead to slow query performance for the server. To improve performance, we recommend you add a read replica, and offload some of your read workloads to the replica.
+Our system shows that one or more nodes in the server group might be constrained because they are approaching limits for the currently provisioned storage values. This might result in degraded performance or in the server being moved to read-only mode. To ensure continued performance, we recommend increasing the provisioned disk space.
-Learn more about [PostgreSQL server - OrcasPostgreSqlReadReplica (Add a PostgreSQL Read Replica server)](https://aka.ms/postgresqlreadreplica).
+Learn more about [PostgreSQL server - OrcasPostgreSqlCitusStorageLimitHyperscaleCitus (Increase the storage limit for Hyperscale (Citus) server group)](/azure/postgresql/howto-hyperscale-scale-grow#increase-storage-on-nodes).
### Increase the PostgreSQL server vCores
-Our system shows that the CPU has been running under high utilization for an extended time period over the last seven days. High CPU utilization might lead to slow query performance. To improve performance, we recommend moving to a larger compute size.
-
-Learn more about [PostgreSQL server - OrcasPostgreSqlCpuOverload (Increase the PostgreSQL server vCores)](https://aka.ms/postgresqlpricing).
-
-### Improve PostgreSQL connection management
-
-Our system shows that your PostgreSQL server might not be managing connections efficiently, which can result in unnecessary resource consumption and overall higher application latency. To improve connection management, we recommend that you reduce the number of short-lived connections and eliminate unnecessary idle connections by configuring a server side connection-pooler, such as PgBouncer.
-
-Learn more about [PostgreSQL server - OrcasPostgreSqlConnectionPooling (Improve PostgreSQL connection management)](https://aka.ms/azure_postgresql_connection_pooling).
-
-### Improve PostgreSQL log performance
-
-Our system shows that your PostgreSQL server has been configured to output VERBOSE error logs. This setting can be useful for troubleshooting your database, but it can also result in reduced database performance. To improve performance, we recommend that you change the log_error_verbosity parameter to the DEFAULT setting.
-
-Learn more about [PostgreSQL server - OrcasPostgreSqlLogErrorVerbosity (Improve PostgreSQL log performance)](https://aka.ms/azure_postgresql_log_settings).
-
-### Optimize query statistics collection on an Azure Database for PostgreSQL
-
-Our system shows that your PostgreSQL server has been configured to track query statistics using the pg_stat_statements module. While useful for troubleshooting, it can also result in reduced server performance. To improve performance, we recommend that you change the pg_stat_statements.track parameter to NONE.
-
-Learn more about [PostgreSQL server - OrcasPostgreSqlStatStatementsTrack (Optimize query statistics collection on an Azure Database for PostgreSQL)](https://aka.ms/azure_postgresql_optimize_query_stats).
-
-### Optimize query store on an Azure Database for PostgreSQL when not troubleshooting
-
-Our system shows that your PostgreSQL database has been configured to track query performance using the pg_qs.query_capture_mode parameter. While troubleshooting, we suggest setting the pg_qs.query_capture_mode parameter to TOP or ALL. When not troubleshooting, we recommend that you set the pg_qs.query_capture_mode parameter to NONE.
-
-Learn more about [PostgreSQL server - OrcasPostgreSqlQueryCaptureMode (Optimize query store on an Azure Database for PostgreSQL when not troubleshooting)](https://aka.ms/azure_postgresql_query_store).
-
-### Increase the storage limit for PostgreSQL Flexible Server
-
-Our system shows that the server might be constrained because it is approaching limits for the currently provisioned storage values. Approaching the storage limits might result in degraded performance or in the server being moved to read-only mode.
+Over the last 7 days, CPU usage met at least one of the following conditions: above 90% for 2 or more hours, above 50% for 50% of the time, or at maximum usage for 20% of the time. High CPU utilization can lead to slow query performance. To improve performance, we recommend moving your server to a larger SKU with higher compute.
+Learn more about [Azure Database for PostgreSQL flexible server - Upscale Server SKU for PostgreSQL on Azure Database](/azure/postgresql/flexible-server/concepts-compute).
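If the recommendation applies, scaling up is a single CLI operation; a minimal sketch, assuming placeholder server and resource group names and an illustrative target SKU chosen from your own sizing:

```bash
# Sketch: move a flexible server to a larger compute size.
# "my-rg", "my-pg-server", and the target SKU are placeholders.
az postgres flexible-server update \
  --resource-group my-rg \
  --name my-pg-server \
  --tier GeneralPurpose \
  --sku-name Standard_D4ds_v4
```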
-Learn more about [PostgreSQL server - OrcasPostgreSqlFlexibleServerStorageLimit (Increase the storage limit for PostgreSQL Flexible Server)](https://aka.ms/azure_postgresql_flexible_server_limits).
-
-#### Optimize logging settings by setting LoggingCollector to -1
-
-Optimize logging settings by setting LoggingCollector to -1
-
-Learn more about [Logs in Azure Database for PostgreSQL - Single Server](/azure/postgresql/single-server/concepts-server-logs#configure-logging).
-
-#### Optimize logging settings by setting LogDuration to OFF
-
-Optimize logging settings by setting LogDuration to OFF
-
-Learn more about [Logs in Azure Database for PostgreSQL - Single Server](/azure/postgresql/single-server/concepts-server-logs#configure-logging).
-
-#### Optimize logging settings by setting LogStatement to NONE
-
-Optimize logging settings by setting LogStatement to NONE
-
-Learn more about [Logs in Azure Database for PostgreSQL - Single Server](/azure/postgresql/single-server/concepts-server-logs#configure-logging).
-
-#### Optimize logging settings by setting ReplaceParameter to OFF
-
-Optimize logging settings by setting ReplaceParameter to OFF
-
-Learn more about [Logs in Azure Database for PostgreSQL - Single Server](/azure/postgresql/single-server/concepts-server-logs#configure-logging).
-
-#### Optimize logging settings by setting LoggingCollector to OFF
+### Optimize log_statement settings for PostgreSQL on Azure Database
-Optimize logging settings by setting LoggingCollector to OFF
+Our system shows that you have log_statement enabled. For better performance, set it to NONE.
-Learn more about [Logs in Azure Database for PostgreSQL - Single Server](/azure/postgresql/single-server/concepts-server-logs#configure-logging).
+Learn more about [Azure Database for PostgreSQL flexible server - Optimize log_statement settings for PostgreSQL on Azure Database](/azure/postgresql/flexible-server/concepts-logging).
-### Increase the storage limit for Hyperscale (Citus) server group
+### Optimize log_duration settings for PostgreSQL on Azure Database
-Our system shows that one or more nodes in the server group might be constrained because they are approaching limits for the currently provisioned storage values. This might result in degraded performance or in the server being moved to read-only mode. To ensure continued performance, we recommend increasing the provisioned disk space.
+You may experience performance degradation due to your logging settings. To optimize these settings, set the log_duration server parameter to OFF.
-Learn more about [PostgreSQL server - OrcasPostgreSqlCitusStorageLimitHyperscaleCitus (Increase the storage limit for Hyperscale (Citus) server group)](/azure/postgresql/howto-hyperscale-scale-grow#increase-storage-on-nodes).
+Learn more about [Azure Database for PostgreSQL flexible server - Optimize log_duration settings for PostgreSQL on Azure Database](/azure/postgresql/flexible-server/concepts-logging).
-### Optimize log_statement settings for PostgreSQL on Azure Database
+### Optimize log_min_duration settings for PostgreSQL on Azure Database
-Our system shows that you have log_statement enabled, for better performance set it to NONE
+Your log_min_duration server parameter is set to less than 60,000 ms (1 minute), which can lead to potential performance degradation. You can optimize logging settings by setting the log_min_duration_statement parameter to -1.
-Learn more about [Azure Database for PostgreSQL flexible server - OrcasMeruMeruLogStatement (Optimize log_statement settings for PostgreSQL on Azure Database)](../postgresql/flexible-server/concepts-logging.md).
+Learn more about [Azure Database for PostgreSQL flexible server - Optimize log_min_duration settings for PostgreSQL on Azure Database](/azure/postgresql/flexible-server/concepts-logging).
-### Increase the work_mem to avoid excessive disk spilling from sort and hash
+### Optimize log_error_verbosity settings for PostgreSQL on Azure Database
-Our system shows that the configuration work_mem is too small for your PostgreSQL server, resulting in disk spilling and degraded query performance. We recommend increasing the work_mem limit for the server, which helps to reduce the scenarios when the sort or hash happens on disk and improves the overall query performance.
+Your server has been configured to output VERBOSE error logs. This can be useful for troubleshooting your database, but it can also result in reduced database performance. To improve performance, we recommend changing the log_error_verbosity server parameter to the DEFAULT setting.
-Learn more about [Azure Database for PostgreSQL flexible server - OrcasMeruMeruWorkMem (Increase the work_mem to avoid excessive disk spilling from sort and hash)](https://aka.ms/runtimeconfiguration).
+Learn more about [Azure Database for PostgreSQL flexible server - Optimize log_error_verbosity settings for PostgreSQL on Azure Database](/azure/postgresql/flexible-server/concepts-logging).
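The logging recommendations above each map to one server parameter; a hedged sketch of applying them with the Azure CLI (server and resource group names are placeholders — confirm each value fits your troubleshooting needs before changing it):

```bash
# Sketch: apply the recommended logging parameter values on a flexible server.
az postgres flexible-server parameter set --resource-group my-rg --server-name my-pg-server \
  --name log_statement --value none
az postgres flexible-server parameter set --resource-group my-rg --server-name my-pg-server \
  --name log_duration --value off
az postgres flexible-server parameter set --resource-group my-rg --server-name my-pg-server \
  --name log_min_duration_statement --value=-1
az postgres flexible-server parameter set --resource-group my-rg --server-name my-pg-server \
  --name log_error_verbosity --value default
```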
-### Improve PostgreSQL - Flexible Server performance by enabling Intelligent tuning
+### Identify if checkpoints are happening too often to improve PostgreSQL - Flexible Server performance
-Our system suggests that you can improve storage performance by enabling Intelligent tuning
+Your server is encountering frequent checkpoints. To resolve the issue, we recommend increasing your max_wal_size server parameter.
-Learn more about [Azure Database for PostgreSQL flexible server - OrcasMeruIntelligentTuning (Improve PostgreSQL - Flexible Server performance by enabling Intelligent tuning)](../postgresql/flexible-server/concepts-intelligent-tuning.md).
+Learn more about [Azure Database for PostgreSQL flexible server – Increase max_wal_size](/azure/postgresql/flexible-server/server-parameters-table-write-ahead-logcheckpoints?pivots=postgresql-16#max_wal_size).
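A sketch of that change, assuming placeholder names and an illustrative value (max_wal_size is sized in megabytes; the right value depends on your write volume):

```bash
# Sketch: raise max_wal_size so checkpoints trigger less often. 4096 MB is illustrative only.
az postgres flexible-server parameter set --resource-group my-rg --server-name my-pg-server \
  --name max_wal_size --value 4096
```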
-### Optimize log_duration settings for PostgreSQL on Azure Database
+### Identify inactive Logical Replication Slots to improve PostgreSQL - Flexible Server performance
-Our system shows that you have log_duration enabled, for better performance, set it to OFF
+Your server may have inactive logical replication slots, which can result in degraded server performance and availability. We recommend deleting inactive replication slots or consuming the changes from the slots so that the Log Sequence Number (LSN) advances closer to the current LSN of the server.
-Learn more about [Azure Database for PostgreSQL flexible server - OrcasMeruMeruLogDuration (Optimize log_duration settings for PostgreSQL on Azure Database)](../postgresql/flexible-server/concepts-logging.md).
+Learn more about [Azure Database for PostgreSQL flexible server – Unused/inactive Logical Replication Slots](/azure/postgresql/flexible-server/how-to-autovacuum-tuning#unused-replication-slots).
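To see whether this applies, you can query the catalog directly; a sketch assuming `psql` is installed and `$PG_CONN` holds a connection string such as `host=<server>.postgres.database.azure.com dbname=postgres user=<admin> sslmode=require`:

```bash
# Sketch: list logical replication slots that are not currently active.
psql "$PG_CONN" -c "SELECT slot_name, slot_type, active, restart_lsn
                    FROM pg_replication_slots WHERE NOT active;"
# For a slot you no longer need:
#   SELECT pg_drop_replication_slot('my_inactive_slot');
```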
-### Optimize log_min_duration settings for PostgreSQL on Azure Database
+### Identify long-running transactions to improve PostgreSQL - Flexible Server performance
-Our system shows that you have log_min_duration enabled, for better performance, set it to -1
+There are transactions running for more than 24 hours. Review the High CPU Usage -> Long Running Transactions section in the troubleshooting guides to identify and mitigate the issue.
-Learn more about [Azure Database for PostgreSQL flexible server - OrcasMeruMeruLogMinDuration (Optimize log_min_duration settings for PostgreSQL on Azure Database)](../postgresql/flexible-server/concepts-logging.md).
+Learn more about [Azure Database for PostgreSQL flexible server – Long Running transactions using Troubleshooting guides](/azure/postgresql/flexible-server/how-to-troubleshooting-guides).
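A quick way to spot them, as a sketch with the same `$PG_CONN` assumption as above:

```bash
# Sketch: list transactions that have been open for more than 24 hours.
psql "$PG_CONN" -c "SELECT pid, now() - xact_start AS xact_age, state, left(query, 60) AS query
                    FROM pg_stat_activity
                    WHERE xact_start IS NOT NULL AND now() - xact_start > interval '24 hours'
                    ORDER BY xact_age DESC;"
```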
-### Optimize pg_qs.query_capture_mode settings for PostgreSQL on Azure Database
+### Identify Orphaned Prepared transactions to improve PostgreSQL - Flexible Server performance
-Our system shows that you have pg_qs.query_capture_mode enabled, for better performance, set it to NONE
+There are orphaned prepared transactions. Roll back or commit the prepared transactions. Recommendations are shared in the Autovacuum Blockers -> Autovacuum Blockers section of the troubleshooting guides.
-Learn more about [Azure Database for PostgreSQL flexible server - OrcasMeruMeruQueryCaptureMode (Optimize pg_qs.query_capture_mode settings for PostgreSQL on Azure Database)](../postgresql/flexible-server/concepts-query-store-best-practices.md).
+Learn more about [Azure Database for PostgreSQL flexible server – Orphaned Prepared transactions using Troubleshooting guides](/azure/postgresql/flexible-server/how-to-troubleshooting-guides).
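A sketch for finding and resolving them (same `$PG_CONN` assumption; the gid values come from the query output):

```bash
# Sketch: list prepared transactions, oldest first, then commit or roll back orphans by gid.
psql "$PG_CONN" -c "SELECT gid, prepared, owner, database FROM pg_prepared_xacts ORDER BY prepared;"
# Then, per orphaned gid:
#   ROLLBACK PREPARED '<gid>';   -- or COMMIT PREPARED '<gid>';
```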
-### Optimize PostgreSQL performance by enabling PGBouncer
+### Identify Transaction Wraparound to improve PostgreSQL - Flexible Server performance
-Our system shows that you can improve PostgreSQL performance by enabling PGBouncer
+The server has crossed 50% of the transaction wraparound limit (1 billion transactions). Refer to the recommendations shared in the Autovacuum Blockers -> Emergency AutoVacuum and Wraparound section of the troubleshooting guides.
-Learn more about [Azure Database for PostgreSQL flexible server - OrcasMeruOrcasPostgreSQLConnectionPooling (Optimize PostgreSQL performance by enabling PGBouncer)](../postgresql/flexible-server/concepts-pgbouncer.md).
+Learn more about [Azure Database for PostgreSQL flexible server – Transaction Wraparound using Troubleshooting guides](/azure/postgresql/flexible-server/how-to-troubleshooting-guides).
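A sketch for checking how close each database is to the limit (same `$PG_CONN` assumption; roughly 2 billion is the approximate hard wraparound ceiling):

```bash
# Sketch: transaction ID age per database, as a percentage of the ~2 billion ceiling.
psql "$PG_CONN" -c "SELECT datname, age(datfrozenxid) AS xid_age,
                           round(100.0 * age(datfrozenxid) / 2000000000, 1) AS pct_of_limit
                    FROM pg_database ORDER BY xid_age DESC;"
```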
-### Optimize log_error_verbosity settings for PostgreSQL on Azure Database
+### Identify High Bloat Ratio to improve PostgreSQL - Flexible Server performance
-Our system shows that you have log_error_verbosity enabled, for better performance, set it to DEFAULT
+The server has a bloat ratio (dead tuples / (live tuples + dead tuples)) greater than 80%. Refer to the recommendations shared in the Autovacuum Monitoring section of the troubleshooting guides.
-Learn more about [Azure Database for PostgreSQL flexible server - OrcasMeruMeruLogErrorVerbosity (Optimize log_error_verbosity settings for PostgreSQL on Azure Database)](../postgresql/flexible-server/concepts-logging.md).
+Learn more about [Azure Database for PostgreSQL flexible server – High Bloat Ratio using Troubleshooting guides](/azure/postgresql/flexible-server/how-to-troubleshooting-guides).
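The same ratio can be estimated per table from the tuple counters; a sketch with the `$PG_CONN` assumption as above:

```bash
# Sketch: estimate bloat ratio = dead tuples / (live + dead tuples) for the worst 20 tables.
psql "$PG_CONN" -c "SELECT relname, n_live_tup, n_dead_tup,
                           round(n_dead_tup::numeric / NULLIF(n_live_tup + n_dead_tup, 0), 2) AS bloat_ratio
                    FROM pg_stat_user_tables
                    ORDER BY bloat_ratio DESC NULLS LAST LIMIT 20;"
```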
### Increase the storage limit for Hyperscale (Citus) server group
Learn more about [Hyperscale (Citus) server group - MarlinStorageLimitRecommenda
### Migrate your database from SSPG to FSPG
-Consider our new offering, Azure Database for PostgreSQL Flexible Server, which provides richer capabilities such as zone resilient HA, predictable performance, maximum control, custom maintenance window, cost optimization controls, and simplified developer experience.
-
-Learn more about [Azure Database for PostgreSQL flexible server - OrcasPostgreSqlMeruMigration (Migrate your database from SSPG to FSPG)](../postgresql/how-to-upgrade-using-dump-and-restore.md).
-
-### Move your PostgreSQL Flexible Server to Memory Optimized SKU
-
-Our system shows that there is high churn in the buffer pool for this server, resulting in slower query performance and increased IOPS. To improve performance, review your workload queries to identify opportunities to minimize memory consumed. If no such opportunity found, then we recommend moving to higher SKU with more memory or increase storage size to get more IOPS.
+Consider our new offering, Azure Database for PostgreSQL Flexible Server, which provides richer capabilities such as zone resilient HA, predictable performance, maximum control, custom maintenance window, cost optimization controls, and simplified developer experience.
-Learn more about [PostgreSQL server - OrcasMeruMemoryUpsell (Move your PostgreSQL Flexible Server to Memory Optimized SKU)](https://aka.ms/azure_postgresql_flexible_server_pricing).
+Learn more about [Azure Database for PostgreSQL flexible server - OrcasPostgreSqlMeruMigration (Migrate your database from SSPG to FSPG)](/azure/postgresql/how-to-upgrade-using-dump-and-restore).
### Improve your Cache and application performance when running with high network bandwidth
ai-services Luis Traffic Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-traffic-manager.md
The client-application has to manage the traffic across the keys. LUIS doesn't d
This article explains how to manage the traffic across keys with Azure [Traffic Manager][traffic-manager-marketing]. You must already have a trained and published LUIS app. If you do not have one, follow the Prebuilt domain [quickstart](luis-get-started-create-app.md). ## Connect to PowerShell in the Azure portal In the [Azure portal](https://portal.azure.com), open the PowerShell window. The icon for the PowerShell window is the **>_** in the top navigation bar. By using PowerShell from the portal, you get the latest PowerShell version and you are authenticated. PowerShell in the portal requires an [Azure Storage](https://azure.microsoft.com/services/storage/) account.
ai-services Luis User Privacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-user-privacy.md
Delete customer data to ensure privacy and compliance.
## Summary of customer data request features Language Understanding Intelligent Service (LUIS) preserves customer content to operate the service, but the LUIS user has full control over viewing, exporting, and deleting their data. This can be done through the LUIS web [portal](luis-reference-regions.md) or the [LUIS Authoring (also known as Programmatic) APIs](/rest/api/cognitiveservices-luis/authoring/operation-groups?view=rest-cognitiveservices-luis-authoring-v3.0-preview&preserve-view=true). Customer content is stored encrypted in Microsoft regional Azure storage and includes:
ai-services Cognitive Services Virtual Networks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/cognitive-services-virtual-networks.md
An application that accesses an Azure AI services resource when network rules ar
> > Requests that are blocked include those from other Azure services, from the Azure portal, and from logging and metrics services. ## Scenarios
ai-services Computer Vision How To Install Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/computer-vision-how-to-install-containers.md
Previously updated : 08/29/2023 Last updated : 06/26/2024 keywords: on-premises, OCR, Docker, container # Install Azure AI Vision 3.2 GA Read OCR container
-Containers enable you to run the Azure AI Vision APIs in your own environment. Containers are great for specific security and data governance requirements. In this article you'll learn how to download, install, and run the Read (OCR) container.
+Containers let you run the Azure AI Vision APIs in your own environment and can help you meet specific security and data governance requirements. In this article you'll learn how to download, install, and run the Azure AI Vision Read (OCR) container.
-The Read container allows you to extract printed and handwritten text from images and documents with support for JPEG, PNG, BMP, PDF, and TIFF file formats. For more information, see the [Read API how-to guide](how-to/call-read-api.md).
+The Read container allows you to extract printed and handwritten text from images and documents in JPEG, PNG, BMP, PDF, and TIFF file formats. For more information on the Read service, see the [Read API how-to guide](how-to/call-read-api.md).
## What's new The `3.2-model-2022-04-30` GA version of the Read container is available with support for [164 languages and other enhancements](./whats-new.md#may-2022). If you're an existing customer, follow the [download instructions](#get-the-container-image) to get started.
The Read 3.2 OCR container is the latest GA model and provides:
* Choose text line output order from default to a more natural reading order for Latin languages only. * Text line classification as handwritten style or not for Latin languages only.
-If you're using Read 2.0 containers today, see the [migration guide](read-container-migration-guide.md) to learn about changes in the new versions.
+If you're using the Read 2.0 container today, see the [migration guide](read-container-migration-guide.md) to learn about changes in the new versions.
## Prerequisites
ai-services Overview Ocr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/overview-ocr.md
Previously updated : 04/30/2024 Last updated : 06/26/2024
OCR or Optical Character Recognition is also referred to as text recognition or text extraction. Machine-learning-based OCR techniques allow you to extract printed or handwritten text from images such as posters, street signs, and product labels, as well as from documents like articles, reports, forms, and invoices. The text is typically extracted as words, text lines, and paragraphs or text blocks, enabling access to a digital version of the scanned text. This eliminates or significantly reduces the need for manual data entry.
-## How is OCR related to Intelligent Document Processing (IDP)?
-Intelligent Document Processing (IDP) uses OCR as its foundational technology to additionally extract structure, relationships, key-values, entities, and other document-centric insights with an advanced machine-learning based AI service like [Document Intelligence](../../ai-services/document-intelligence/overview.md). Document Intelligence includes a document-optimized version of **Read** as its OCR engine while delegating to other models for higher-end insights. If you are extracting text from scanned and digital documents, use [Document Intelligence Read OCR](../../ai-services/document-intelligence/concept-read.md).
## OCR engine
-Microsoft's **Read** OCR engine is composed of multiple advanced machine-learning based models supporting [global languages](./language-support.md). It can extract printed and handwritten text including mixed languages and writing styles. **Read** is available as cloud service and on-premises container for deployment flexibility. With the latest preview, it's also available as a synchronous API for single, non-document, image-only scenarios with performance enhancements that make it easier to implement OCR-assisted user experiences.
+Microsoft's **Read** OCR engine is composed of multiple advanced machine-learning based models supporting [global languages](./language-support.md). It can extract printed and handwritten text including mixed languages and writing styles. **Read** is available as cloud service and on-premises container for deployment flexibility. It's also available as a synchronous API for single, non-document, image-only scenarios with performance enhancements that make it easier to implement OCR-assisted user experiences.
> [!WARNING] > The Azure AI Vision legacy [OCR API in v3.2](/rest/api/computervision/recognize-printed-text?view=rest-computervision-v3.2) and [RecognizeText API in v2.1](/rest/api/computervision/recognize-printed-text/recognize-printed-text?view=rest-computervision-v2.1) operations are not recommended for use. [!INCLUDE [read-editions](includes/read-editions.md)]
+## How is OCR related to Intelligent Document Processing (IDP)?
+
+Intelligent Document Processing (IDP) uses OCR as its foundational technology to additionally extract structure, relationships, key-values, entities, and other document-centric insights with an advanced machine-learning based AI service like [Document Intelligence](../../ai-services/document-intelligence/overview.md). Document Intelligence includes a document-optimized version of **Read** as its OCR engine while delegating to other models for higher-end insights. If you are extracting text from scanned and digital documents, use [Document Intelligence Read OCR](../../ai-services/document-intelligence/concept-read.md).
+ ## How to use OCR Try out OCR by using Vision Studio. Then follow one of the links to the Read edition that best meets your requirements.
ai-services Client Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/quickstarts-sdk/client-library.md
Previously updated : 08/07/2023 Last updated : 06/26/2024 ms.devlang: csharp # ms.devlang: csharp, golang, java, javascript, python
ai-services Export Delete Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-moderator/export-delete-data.md
Content Moderator collects user data to operate the service, but customers have full control to view, export, and delete their data using the [Moderation APIs](./api-reference.md). For more information on how to export and delete user data in Content Moderator, see the following table.
ai-services Create Account Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/create-account-bicep.md
Follow this quickstart to create Azure AI services resource using Bicep.
Azure AI services are cloud-based artificial intelligence (AI) services that help developers build cognitive intelligence into applications without having direct AI or data science skills or knowledge. They are available through REST APIs and client library SDKs in popular development languages. Azure AI services enables developers to easily add cognitive features into their applications with cognitive solutions that can see, hear, speak, and analyze. ## Things to consider
ai-services Create Account Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/create-account-resource-manager-template.md
By creating an Azure AI services resource, you can:
* Access multiple AI services in Azure with a single key and endpoint. * Consolidate billing from the services that you use. ## Prerequisites
ai-services Export Delete Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/custom-vision-service/export-delete-data.md
Custom Vision collects user data to operate the service, but customers have full control to view and delete their data using the Custom Vision [Training APIs](https://go.microsoft.com/fwlink/?linkid=865446). To learn how to view or delete different kinds of user data in Custom Vision, see the following table:
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/summarization/overview.md
[!INCLUDE [availability](includes/regional-availability.md)]
-Summarization is one of the features offered by [Azure AI Language](../overview.md), a collection of machine learning and AI algorithms in the cloud for developing intelligent applications that involve written language. Use this article to learn more about this feature, and how to use it in your applications.
+Summarization is one feature offered by [Azure AI Language](../overview.md), which is a combination of generative large language models and task-optimized encoder models that offer summarization solutions with higher quality, better cost efficiency, and lower latency.
+Use this article to learn more about this feature, and how to use it in your applications.
-Though the services are labeled document and conversation summarization, text summarization only accepts plain text blocks, and conversation summarization accept various speech artifacts in order for the model to learn more. If you want to process a conversation but only care about text, you can use text summarization for that scenario.
+Out of the box, the service provides summarization solutions for three genres of content: plain text, conversations, and native documents. Text summarization accepts only plain text blocks. Conversation summarization accepts conversational input, including various speech audio signals, so that the model can effectively segment and summarize. Native document summarization works directly on documents in their native formats, such as Word and PDF.
# [Text summarization](#tab/text-summarization)
This documentation contains the following article types:
* **[Quickstarts](quickstart.md?pivots=rest-api&tabs=text-summarization)** are getting-started instructions to guide you through making requests to the service. * **[How-to guides](how-to/document-summarization.md)** contain instructions for using the service in more specific or customized ways.
-Text summarization uses natural language processing techniques to generate a summary for documents. There are two supported API approaches to automatic summarization: extractive and abstractive.
-
-Extractive summarization extracts sentences that collectively represent the most important or relevant information within the original content. Abstractive summarization generates a summary with concise, coherent sentences or words that aren't verbatim extract sentences from the original document. These features are designed to shorten content that could be considered too long to read.
+These features are designed to shorten content that could be considered too long to read.
## Key features for text summarization
-There are two aspects of text summarization this API provides:
+Text summarization uses natural language processing techniques to generate a summary for plain text, which can come from a document, a conversation, or any other text. The API provides two approaches to summarization:
-* [**Extractive summarization**](how-to/document-summarization.md#try-text-extractive-summarization): Produces a summary by extracting salient sentences within the document.
+* [**Extractive summarization**](how-to/document-summarization.md#try-text-extractive-summarization): Produces a summary by extracting salient sentences within the document, together with the positioning information of these sentences.
* Multiple extracted sentences: These sentences collectively convey the main idea of the document. They're original sentences extracted from the input document's content.
- * Rank score: The rank score indicates how relevant a sentence is to a document's main topic. Text summarization ranks extracted sentences, and you can determine whether they're returned in the order they appear, or according to their rank.
- * Multiple returned sentences: Determine the maximum number of sentences to be returned. For example, if you request a three-sentence summary extractive summarization returns the three highest scored sentences.
+ * Rank score: The rank score indicates how relevant a sentence is to the main topic. Text summarization ranks extracted sentences, and you can determine whether they're returned in the order they appear, or according to their rank.
+ For example, if you request a three-sentence summary, extractive summarization returns the three highest-scored sentences.
* Positional information: The start position and length of extracted sentences.
-* [**Abstractive summarization**](how-to/document-summarization.md#try-text-abstractive-summarization): Generates a summary that doesn't use the same words as in the document, but captures the main idea.
- * Summary texts: Abstractive summarization returns a summary for each contextual input range within the document. A long document can be segmented so multiple groups of summary texts can be returned with their contextual input range.
- * Contextual input range: The range within the input document that was used to generate the summary text.
+* [**Abstractive summarization**](how-to/document-summarization.md#try-text-abstractive-summarization): Generates a summary with concise, coherent sentences or words that aren't verbatim extracts of sentences from the original document.
+ * Summary texts: Abstractive summarization returns a summary for each contextual input range. A long input can be segmented so multiple groups of summary texts can be returned with their contextual input range.
+ * Contextual input range: The range within the input that was used to generate the summary text.
As an example, consider the following paragraph of text:
As an example, consider the following paragraph of text:
A text summarization API request is processed upon receipt by creating a job for the API backend. If the job succeeds, the output of the API is returned. The output is available for retrieval for 24 hours. After this time, the output is purged. Due to multilingual and emoji support, the response can contain text offsets. For more information, see [how to process offsets](../concepts/multilingual-emoji-support.md).
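That job-based flow can be sketched with a single REST call; the endpoint, key, and api-version below are placeholder assumptions rather than values from this article — check the quickstart for the current ones:

```bash
# Sketch: submit an extractive summarization job, then poll the operation-location it returns.
ENDPOINT="https://<your-language-resource>.cognitiveservices.azure.com"
curl -i -X POST "$ENDPOINT/language/analyze-text/jobs?api-version=2023-04-01" \
  -H "Ocp-Apim-Subscription-Key: <your-key>" \
  -H "Content-Type: application/json" \
  -d '{
        "analysisInput": { "documents": [ { "id": "1", "language": "en", "text": "<text to summarize>" } ] },
        "tasks": [ { "kind": "ExtractiveSummarization", "parameters": { "sentenceCount": 3 } } ]
      }'
# GET the URL from the response's operation-location header until status is "succeeded",
# then read the extracted sentences and rank scores from the job result.
```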
-If we use the above example, the API might return these summarized sentences:
+If we use the above example, the API might return these summaries:
**Extractive summarization**: - "At Microsoft, we are on a quest to advance AI beyond existing techniques, by taking a more holistic, human-centric approach to learning and understanding."
This documentation contains the following article types:
Conversation summarization supports the following features:
-* [**Issue/resolution summarization**](how-to/conversation-summarization.md#get-summaries-from-text-chats): A call center specific feature that gives a summary of issues and resolutions in conversations between customer-service agents and your customers.
+* [**Recap**](how-to/conversation-summarization.md#get-recap-and-follow-up-task-summarization): Summarizes a conversation into a brief paragraph.
+* [**Issue/resolution summarization**](quickstart.md?tabs=conversation-summarization%2Cwindows&pivots=rest-api#conversation-issue-and-resolution-summarization): Call center specific features that give a summary of issues and resolutions in conversations between customer-service agents and your customers.
* [**Chapter title summarization**](how-to/conversation-summarization.md#get-chapter-titles): Segments a conversation into chapters based on the topics discussed in the conversation, and gives suggested chapter titles of the input conversation.
-* [**Recap**](how-to/conversation-summarization.md#get-narrative-summarization): Summarizes a conversation into a brief paragraph.
* [**Narrative summarization**](how-to/conversation-summarization.md#get-narrative-summarization): Generates detail call notes, meeting notes or chat summaries of the input conversation.
-* [**Follow-up tasks**](how-to/conversation-summarization.md#get-narrative-summarization): Gives a list of follow-up tasks discussed in the input conversation.
As an example, consider the following example conversation:
As an example, consider the following example conversation:
Conversation summarization feature would simplify the text as follows:
-|Example summary | Format | Conversation aspect |
+|Example summary | Remark | Conversation aspect |
||-|-|
-| Customer wants to use the wifi connection on their Smart Brew 300. But it didn't work. | One or two sentences | issue |
-| Checked if the power light is blinking slowly. Checked the Contoso coffee app. It had no prompt. Tried to do a factory reset. | One or more sentences, generated from multiple lines of the transcript. | resolution |
+| Customer is unable to set up wifi connection for Smart Brew 300 espresso machine | a customer issue in a customer-and-agent conversation | issue |
+| The agent suggested several troubleshooting steps, including checking the wifi connection, checking the Contoso Coffee app, and performing a factory reset. However, none of these steps resolved the issue. The agent then put the customer on hold to look for another solution. | solutions tried in a customer-and-agent conversation | resolution |
+| The customer contacted the agent for assistance with setting up a wifi connection for their Smart Brew 300 espresso machine. The agent guided the customer through several troubleshooting steps, including a wifi connection check, checking the power light, and a factory reset. Despite following these steps, the issue persisted. The agent then decided to explore other potential solutions. | Summarizes a conversation into one paragraph | recap |
+| Troubleshooting SmartBrew 300 Espresso Machine | Segments a conversation and generates a title for each segment; usually used together with the `narrative` aspect | chapterTitle |
+| The customer is having trouble setting up a wifi connection for their Smart Brew 300 espresso machine. The agent suggests several solutions, including a factory reset, but the issue persists. | Segments a conversation and generates a summary for each segment; usually used together with the `chapterTitle` aspect | narrative |
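The aspects in the table map to the `summaryAspects` task parameter of the conversation analysis API; a hedged sketch (the endpoint, key, api-version, and minimal transcript shape are assumptions — see the quickstart for the exact schema):

```bash
# Sketch: request several summary aspects for one text-modality conversation.
curl -X POST "$ENDPOINT/language/analyze-conversations/jobs?api-version=2023-04-01" \
  -H "Ocp-Apim-Subscription-Key: <your-key>" \
  -H "Content-Type: application/json" \
  -d '{
        "analysisInput": { "conversations": [ {
          "id": "1", "language": "en", "modality": "text",
          "conversationItems": [
            { "id": "1", "participantId": "Customer", "text": "<first turn>" },
            { "id": "2", "participantId": "Agent", "text": "<second turn>" }
          ] } ] },
        "tasks": [ {
          "kind": "ConversationalSummarizationTask",
          "parameters": { "summaryAspects": [ "issue", "resolution", "recap", "chapterTitle", "narrative" ] }
        } ]
      }'
```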
-# [Document summarization](#tab/document-summarization)
+# [Document summarization (Preview)](#tab/document-summarization)
This documentation contains the following article types:
-* **[Quickstarts](quickstart.md?pivots=rest-api&tabs=text-summarization)** are getting-started instructions to guide you through making requests to the service.
+* **[Quickstarts](quickstart.md?pivots=rest-api&tabs=document-summarization)** are getting-started instructions to guide you through making requests to the service.
* **[How-to guides](how-to/document-summarization.md)** contain instructions for using the service in more specific or customized ways.
-Document summarization uses natural language processing techniques to generate a summary for documents. There are two supported API approaches to automatic summarization: extractive and abstractive.
+Document summarization uses natural language processing techniques to generate a summary for documents.
+
+A native document refers to the file format used to create the original document such as Microsoft Word (docx) or a portable document file (pdf). Native document support eliminates the need for text preprocessing prior to using Azure AI Language resource capabilities. Currently, native document support is available for two types of summarization:
+* **Extractive summarization**: Produces a summary by extracting salient sentences within the document, together with the positioning information of those sentences.
+
+ * Multiple extracted sentences: These sentences collectively convey the main idea of the document. They're original sentences extracted from the input document's content.
+ * Rank score: The rank score indicates how relevant a sentence is to the main topic. Text summarization ranks extracted sentences, and you can determine whether they're returned in the order they appear, or according to their rank.
+ For example, if you request a three-sentence summary, extractive summarization returns the three highest-scored sentences.
+ * Positional information: The start position and length of extracted sentences.
+
+* **Abstractive summarization**: Generates a summary with concise, coherent sentences or words that aren't verbatim extracts of sentences from the original document.
+ * Summary texts: Abstractive summarization returns a summary for each contextual input range. A long input can be segmented so multiple groups of summary texts can be returned with their contextual input range.
+ * Contextual input range: The range within the input that was used to generate the summary text.
-A native document refers to the file format used to create the original document such as Microsoft Word (docx) or a portable document file (pdf). Native document support eliminates the need for text preprocessing prior to using Azure AI Language resource capabilities. Currently, native document support is available for both [**AbstractiveSummarization**](../summarization/how-to/document-summarization.md#try-text-abstractive-summarization) and [**ExtractiveSummarization**](../summarization/how-to/document-summarization.md#try-text-extractive-summarization) capabilities.
- Currently **Text Summarization** supports the following native document formats:
+ Currently **Document Summarization** supports the following native document formats:
|File type|File extension|Description|
|--|--|--|
ai-services Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/models.md
description: Learn about the different model capabilities that are available with Azure OpenAI. Previously updated : 06/19/2024 Last updated : 06/25/2024
The following Embeddings models are available with [Azure Government](/azure/azu
For Assistants you need a combination of a supported model, and a supported region. Certain tools and capabilities require the latest models. The following models are available in the Assistants API, SDK, Azure AI Studio and Azure OpenAI Studio. The following table is for pay-as-you-go. For information on Provisioned Throughput Unit (PTU) availability, see [provisioned throughput](./provisioned-throughput.md).
-| Region | `gpt-35-turbo (0613)` | `gpt-35-turbo (1106)`| `fine tuned gpt-3.5-turbo-0125` | `gpt-4 (0613)` | `gpt-4 (1106)` | `gpt-4 (0125)` |
-|--|||||||
-| Australia East | ✅ | ✅ | | ✅ |✅ | |
-| East US | ✅ | | | | | ✅ |
-| East US 2 | ✅ | | ✅ | ✅ |✅ | |
-| France Central | ✅ | ✅ | | ✅ |✅ | |
-| Japan East | ✅ | | | | | |
-| Norway East | | | | | ✅ | |
-| Sweden Central | ✅ |✅ | ✅ |✅ |✅| |
-| UK South | ✅ | ✅ | | | ✅ | ✅ |
-| West US | | ✅ | | | ✅ | |
-| West US 3 | | | | |✅ | |
+| Region | `gpt-35-turbo (0613)` | `gpt-35-turbo (1106)`| `fine tuned gpt-3.5-turbo-0125` | `gpt-4 (0613)` | `gpt-4 (1106)` | `gpt-4 (0125)` | `gpt-4o (2024-05-13)` |
+|--|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
+| Australia East | ✅ | ✅ | | ✅ |✅ | | |
+| East US | ✅ | | | | | ✅ | ✅ |
+| East US 2 | ✅ | | ✅ | ✅ |✅ | |✅|
+| France Central | ✅ | ✅ | | ✅ |✅ | | |
+| Japan East | ✅ | | | | | | |
+| Norway East | | | | | ✅ | | |
+| Sweden Central | ✅ |✅ | ✅ |✅ |✅| |✅|
+| UK South | ✅ | ✅ | | | ✅ | ✅ | |
+| West US | | ✅ | | | ✅ | |✅|
+| West US 3 | | | | |✅ | |✅|
## Model retirement
ai-services Dynamic Quota https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/dynamic-quota.md
Previously updated : 01/30/2024 Last updated : 06/27/2024
For dynamic quota, consider scenarios such as:
### When does dynamic quota come into effect?
-The Azure OpenAI backend decides if, when, and how much extra dynamic quota is added or removed from different deployments. It isn't forecasted or announced in advance, and isn't predictable. Azure OpenAI lets your application know there's more quota available by responding with an HTTP 429 and not letting more API calls through. To take advantage of dynamic quota, your application code must be able to issue more requests as HTTP 429 responses become infrequent.
+The Azure OpenAI backend decides if, when, and how much extra dynamic quota is added or removed from different deployments. It isn't forecasted or announced in advance, and isn't predictable. To take advantage of dynamic quota, your application code must be able to issue more requests as HTTP 429 responses become infrequent. Azure OpenAI lets your application know when you've hit your quota limit by responding with an HTTP 429 and not letting more API calls through.
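In practice that means retrying on 429 instead of treating it as fatal; a minimal bash sketch (the endpoint, key, and request body are placeholders, and real code should also honor any Retry-After header):

```bash
# Sketch: keep requesting, backing off while 429s indicate the quota ceiling.
while true; do
  code=$(curl -s -o /dev/null -w "%{http_code}" -X POST "$AOAI_CHAT_URL" \
    -H "api-key: $AOAI_KEY" -H "Content-Type: application/json" -d @request.json)
  if [ "$code" = "429" ]; then
    sleep 5   # throttled: back off and retry; fewer 429s means more quota is available
  else
    break     # accepted, or failed for a non-throttling reason
  fi
done
```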
### How does dynamic quota change costs?
ai-services Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/migration.md
client = AzureOpenAI(
) response = client.chat.completions.create(
- model="gpt-35-turbo", # model = "deployment_name".
+ model="gpt-35-turbo", # model = "deployment_name"
messages=[ {"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": "Does Azure OpenAI support customer managed keys?"},
deployment_name='REPLACE_WITH_YOUR_DEPLOYMENT_NAME' #This will correspond to the
# Send a completion call to generate an answer print('Sending a test completion job') start_phrase = 'Write a tagline for an ice cream shop. '
-response = client.completions.create(model=deployment_name, prompt=start_phrase, max_tokens=10)
+response = client.completions.create(model=deployment_name, prompt=start_phrase, max_tokens=10) # model = "deployment_name"
print(response.choices[0].text) ```
async def main():
api_version = "2024-02-01", azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT") )
- response = await client.chat.completions.create(model="gpt-35-turbo", messages=[{"role": "user", "content": "Hello world"}])
+ response = await client.chat.completions.create(model="gpt-35-turbo", messages=[{"role": "user", "content": "Hello world"}]) # model = model deployment name
print(response.model_dump_json(indent=2))
client = AzureOpenAI(
) completion = client.chat.completions.create(
- model="deployment-name", # gpt-35-instant
+ model="deployment-name", # model = "deployment_name"
messages=[ { "role": "user",
client = openai.AzureOpenAI(
) completion = client.chat.completions.create(
- model=deployment,
+ model=deployment, # model = "deployment_name"
messages=[ { "role": "user",
ai-services Provisioned Throughput Onboarding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/provisioned-throughput-onboarding.md
Title: Azure OpenAI Service Provisioned Throughput Units (PTU) onboarding
description: Learn about provisioned throughput units onboarding and Azure OpenAI. Previously updated : 05/02/2024 Last updated : 06/25/2024
The **Provisioned** option and the capacity planner are only available in certai
||| |Model | OpenAI model you plan to use. For example: GPT-4 | | Version | Version of the model you plan to use, for example 0614 |
-| Prompt tokens | Number of tokens in the prompt for each call |
-| Generation tokens | Number of tokens generated by the model on each call |
-| Peak calls per minute | Peak concurrent load to the endpoint measured in calls per minute|
+| Peak calls per min | The number of calls per minute that are expected to be sent to the model |
+| Tokens in prompt call | The number of tokens in the prompt for each call to the model. Calls with larger prompts will utilize more of the PTU deployment. Currently this calculator assumes a single prompt value, so for workloads with wide variance, we recommend benchmarking your deployment on your traffic to determine the most accurate estimate of PTU needed for your deployment. |
+| Tokens in model response | The number of tokens generated from each call to the model. Calls with larger generation sizes will utilize more of the PTU deployment. Currently this calculator assumes a single generation value, so for workloads with wide variance, we recommend benchmarking your deployment on your traffic to determine the most accurate estimate of PTU needed for your deployment. |
-After you fill in the required details, select **Calculate** to view the suggested PTU for your scenario.
+After you fill in the required details, select the **Calculate** button in the output column.
+
+The values in the output column are the estimated number of PTUs required for the provided workload inputs. The first output value represents the estimated PTUs required for the workload, rounded to the nearest PTU scale increment. The second output value represents the raw estimated PTUs required for the workload. The token totals are calculated using the following equation: `Total = Peak calls per minute * (Tokens in prompt call + Tokens in model response)`. For example, 300 peak calls per minute with 1,000 prompt tokens and 200 response tokens per call gives 300 * (1,000 + 200) = 360,000 total tokens per minute.
:::image type="content" source="../media/how-to/provisioned-onboarding/capacity-calculator.png" alt-text="Screenshot of the Azure OpenAI Studio landing page." lightbox="../media/how-to/provisioned-onboarding/capacity-calculator.png":::
ai-services Speech Container Batch Processing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-container-batch-processing.md
The batch kit container is available for free on [GitHub](https://github.com/mic
Use the [docker pull](https://docs.docker.com/engine/reference/commandline/pull/) command to download the latest batch kit container. ```bash docker pull docker.io/batchkit/speech-batch-kit:latest
aks App Routing Dns Ssl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/app-routing-dns-ssl.md
The application routing add-on with nginx delivers the following:
- Azure Key Vault if you want to configure SSL termination and store certificates in the vault hosted in Azure. - Azure DNS if you want to configure global and private zone management and host them in Azure. - To attach an Azure Key Vault or Azure DNS Zone, you need the [Owner][rbac-owner], [Azure account administrator][rbac-classic], or [Azure co-administrator][rbac-classic] role on your Azure subscription.
+- All public DNS zones must be in the same subscription and resource group.
## Connect to your AKS cluster
aks Coredns Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/coredns-custom.md
You can customize CoreDNS with AKS to perform on-the-fly DNS name rewrites.
log errors rewrite stop {
- name regex (.*)\.<domain to be rewritten>.com {1}.default.svc.cluster.local
+ name regex (.*)\.<domain to be rewritten>\.com {1}.default.svc.cluster.local
answer name (.*)\.default\.svc\.cluster\.local {1}.<domain to be rewritten>.com } forward . /etc/resolv.conf # you can redirect this to a specific DNS server such as 10.0.0.10, but that server must be able to resolve the rewritten domain name
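One way to confirm the corrected rewrite behaves as intended, sketched with a temporary debug pod (the pod name and image are placeholder assumptions taken from common Kubernetes DNS debugging practice):

```bash
# Sketch: from inside the cluster, a lookup of the rewritten domain should resolve
# to the matching service in the default namespace.
kubectl run dnsutils --image=registry.k8s.io/e2e-test-images/jessie-dnsutils:1.3 \
  --restart=Never --command -- sleep infinity
kubectl exec dnsutils -- nslookup app1.<domain to be rewritten>.com
```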
aks Edge Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/edge-zones.md
az aks create \
In this section you'll learn how to deploy a Kubernetes cluster in the Edge Zone. 1. Sign in to the [Azure portal](https://portal.azure.com).
aks Eks Edw Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/eks-edw-deploy.md
+
+ Title: Deploy AWS event-driven workflow (EDW) workload to Azure
+description: Learn how to deploy an AWS EDW workflow to Azure and how to validate your deployment.
+ Last updated : 06/20/2024++++
+# Deploy an AWS event-driven workflow (EDW) workload to Azure
+
+In this article, you will deploy an [AWS EDW workload][eks-edw-overview] to Azure.
+
+## Sign in to Azure
+
+1. Sign in to Azure using the [`az login`][az-login] command.
+
+ ```azurecli-interactive
+ az login
+ ```
+
+1. If your Azure account has multiple subscriptions, make sure to select the correct subscription. List the names and IDs of your subscriptions using the [`az account list`][az-account-list] command.
+
+ ```azurecli-interactive
+ az account list --query "[].{id: id, name:name }" --output table
+ ```
+
+1. Select a specific subscription using the [`az account set`][az-account-set] command, replacing `$subscriptionId` with the ID of the subscription you want to use.
+
+ ```azurecli-interactive
+ az account set --subscription $subscriptionId
+ ```
+
+## EDW workload deployment script
+
+You use the `deploy.sh` script in the `deployment` directory of the [GitHub repository][github-repo] to deploy the application to Azure.
+
+The script first checks that all of the [prerequisite tools][prerequisites] are installed. If not, the script terminates and displays an error message letting you know which prerequisites are missing. If this happens, review the prerequisites, install any missing tools, and then run the script again. The [Node autoprovisioning (NAP) for AKS][nap-aks] feature flag must be registered on your Azure subscription. If it isn't already registered, the script executes an Azure CLI command to register the feature flag.
+
+The script records the state of the deployment in a file called `deploy.state`, which is located in the `deployment` directory. You can use this file to set environment variables when deploying the app.
+
+As the script executes the commands to configure the infrastructure for the workflow, it checks that each command executes successfully. If any issues occur, an error message is displayed, and the execution stops.
+
+The script displays a log as it runs. You can persist the log by redirecting the log information output and saving it to the `install.log` file in the `logs` directory using the following command:
+
+```bash
+./deployment/infra/deploy.sh | tee ./logs/install.log
+```
+
+For more information, see the `./deployment/infra/deploy.sh` script in our [GitHub repository][github-repo].
+
+### Workload resources
+
+The deployment script creates the following Azure resources:
+
+- **Azure resource group**: The [Azure resource group][azure-resource-group] that stores the resources created by the deployment script.
+- **Azure Storage account**: The Azure Storage account that contains the queue where messages are sent by the producer app and read by the consumer app, and the table where the consumer app stores the processed messages.
+- **Azure container registry**: The container registry provides a repository for the container that deploys the refactored consumer app code.
+- **Azure Kubernetes Service (AKS) cluster**: The AKS cluster provides Kubernetes orchestration for the consumer app container and has the following features enabled:
+
+ - **Node autoprovisioning (NAP)**: The implementation of the [Karpenter](https://karpenter.sh) node autoscaler on AKS.
+ - **Kubernetes Event-driven Autoscaling (KEDA)**: [KEDA](https://keda.sh) enables pod scaling based on events, such as exceeding a specified queue depth threshold.
+ - **Workload identity**: Allows you to attach role-based access policies to pod identities for enhanced security.
+ - **Attached Azure container registry**: This feature allows the AKS cluster to pull images from repositories on the specified ACR instance.
+
+- **Application and system node pools**: The script also creates application and system node pools in the AKS cluster. The system node pool has a taint to prevent application pods from being scheduled on it.
+- **AKS cluster managed identity**: The script assigns the `acrPull` role to this managed identity, which facilitates access to the attached Azure container registry for pulling images.
+- **Workload identity**: The script assigns the **Storage Queue Data Contributor** and **Storage Table Data Contributor** roles to this managed identity to grant it role-based access control (RBAC) access. The managed identity is associated with the Kubernetes service account that serves as the identity for the pods where the consumer app containers are deployed.
+- **Two federated credentials**: One credential enables the managed identity to implement pod identity, and the other is used by the KEDA operator service account to give the KEDA scaler access to the metrics needed to control pod autoscaling.
+
+## Deploy the EDW workload to Azure
+
+- Navigate to the `deployment` directory of the project and deploy the workload using the following commands:
+
+ ```bash
+ cd deployment
+ ./deploy.sh
+ ```
+
+## Validate deployment and run the workload
+
+Once the deployment script completes, you can deploy the workload on the AKS cluster.
+
+1. Source the `./deployment/environmentVariables.sh` script to set the environment variables using the following command:
+
+ ```bash
+ source ./deployment/environmentVariables.sh
+ ```
+
+1. You need the information in the `./deployment/deploy.state` file to set environment variables for the names of the resources created in the deployment. Display the contents of the file using the following `cat` command:
+
+ ```bash
+ cat ./deployment/deploy.state
+ ```
+
+ Your output should show the following variables:
+
+ ```output
+ SUFFIX=
+ RESOURCE_GROUP=
+ AZURE_STORAGE_ACCOUNT_NAME=
+ AZURE_QUEUE_NAME=
+ AZURE_COSMOSDB_TABLE=
+ AZURE_CONTAINER_REGISTRY_NAME=
+ AKS_MANAGED_IDENTITY_NAME=
+ AKS_CLUSTER_NAME=
+ WORKLOAD_MANAGED_IDENTITY_NAME=
+ SERVICE_ACCOUNT=
+ FEDERATED_IDENTITY_CREDENTIAL_NAME=
+ KEDA_SERVICE_ACCT_CRED_NAME=
+ ```
+
+1. Read the file and create environment variables for the names of the Azure resources created by the deployment script using the following commands:
+
+ ```bash
+    while IFS= read -r line; do
+        echo "export $line"
+        export $line
+    done < ./deployment/deploy.state
+ ```
+
+1. Get the AKS cluster credentials using the [`az aks get-credentials`][az-aks-get-credentials] command.
+
+ ```azurecli-interactive
+ az aks get-credentials --resource-group $RESOURCE_GROUP --name $AKS_CLUSTER_NAME
+ ```
+
+1. Verify that the KEDA operator pods are running in the `kube-system` namespace on the AKS cluster using the [`kubectl get`][kubectl-get] command.
+
+ ```bash
+ kubectl get pods --namespace kube-system | grep keda
+ ```
+
+ Your output should look similar to the following example output:
+
+ :::image type="content" source="media/eks-edw-deploy/sample-keda-response.png" alt-text="Screenshot showing an example response from the command to verify that KEDA operator pods are running.":::
+
+## Generate simulated load
+
+Now, you generate simulated load using the producer app to populate the queue with messages.
+
+1. In a separate terminal window, navigate to the project directory.
+1. Set the environment variables using the steps in the [previous section](#validate-deployment-and-run-the-workload).
+1. Run the producer app using the following command:
+
+    ```bash
+ python3 ./app/keda/aqs-producer.py
+ ```
+
+1. Once the app starts sending messages, switch back to the other terminal window.
+1. Deploy the consumer app container onto the AKS cluster using the following commands:
+
+ ```bash
+ chmod +x ./deployment/keda/deploy-keda-app-workload-id.sh
+ ./deployment/keda/deploy-keda-app-workload-id.sh
+ ```
+
+ The deployment script (`deploy-keda-app-workload-id.sh`) performs templating on the application manifest YAML specification to pass environment variables to the pod. Review the following excerpt from this script:
+
+ ```bash
+ cat <<EOF | kubectl apply -f -
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+ name: $AQS_TARGET_DEPLOYMENT
+ namespace: $AQS_TARGET_NAMESPACE
+ spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: aqs-reader
+ template:
+ metadata:
+ labels:
+ app: aqs-reader
+ azure.workload.identity/use: "true"
+ spec:
+ serviceAccountName: $SERVICE_ACCOUNT
+ containers:
+ - name: keda-queue-reader
+ image: ${AZURE_CONTAINER_REGISTRY_NAME}.azurecr.io/aws2azure/aqs-consumer
+ imagePullPolicy: Always
+ env:
+ - name: AZURE_QUEUE_NAME
+ value: $AZURE_QUEUE_NAME
+ - name: AZURE_STORAGE_ACCOUNT_NAME
+ value: $AZURE_STORAGE_ACCOUNT_NAME
+ - name: AZURE_TABLE_NAME
+ value: $AZURE_TABLE_NAME
+ resources:
+ requests:
+ memory: "64Mi"
+ cpu: "250m"
+ limits:
+ memory: "128Mi"
+ cpu: "500m"
+ EOF
+ ```
+
+    The `azure.workload.identity/use` label in the `spec/template` section of the deployment is part of the pod template. Setting the label to `true` specifies that the pod uses workload identity. The `serviceAccountName` in the pod specification specifies the Kubernetes service account to associate with the workload identity. Although the pod specification references an image in a private repository, there's no `imagePullSecret` specified because the AKS cluster pulls images through its attached Azure container registry.
+
+1. Verify that the script ran successfully using the [`kubectl get`][kubectl-get] command.
+
+ ```bash
+ kubectl get pods --namespace $AQS_TARGET_NAMESPACE
+ ```
+
+ You should see a single pod in the output.
+
+## Monitor scale out for pods and nodes with k9s
+
+You can use various tools to verify the operation of apps deployed to AKS, including the Azure portal and k9s. For more information on k9s, see the [k9s overview][k9s].
+
+1. Install k9s on your AKS cluster using the appropriate guidance for your environment in the [k9s installation overview][k9s-install].
+1. Create two terminal windows, one with a view of the pods and the other with a view of the nodes in the namespace you specified in the `AQS_TARGET_NAMESPACE` environment variable (default value is `aqs-demo`), and start k9s in each window, as shown in the following example.
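+
+    For example, assuming `k9s` is on your `PATH`, you might start one window watching pods and the other watching nodes (the `--command` flag sets the initial view):
+
+    ```bash
+    # Window 1: watch pods in the workload namespace (default aqs-demo)
+    k9s --namespace "${AQS_TARGET_NAMESPACE:-aqs-demo}" --command pods
+
+    # Window 2: watch cluster nodes
+    k9s --command nodes
+    ```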
+
+ You should see something similar to the following:
+
+ :::image type="content" source="media/eks-edw-deploy/sample-k9s-view.png" lightbox="media/eks-edw-deploy/sample-k9s-view.png" alt-text="Screenshot showing an example of the K9s view across two windows.":::
+
+1. After you confirm that the consumer app container is installed and running on the AKS cluster, install the `ScaledObject` and trigger authentication used by KEDA for pod autoscaling by running the scaled object installation script (`keda-scaleobject-workload-id.sh`) using the following commands:
+
+ ```bash
+ chmod +x ./deployment/keda/keda-scaleobject-workload-id.sh
+ ./deployment/keda/keda-scaleobject-workload-id.sh
+ ```
+
+ The script also performs templating to inject environment variables where needed. Review the following excerpt from this script:
+
+ ```bash
+ cat <<EOF | kubectl apply -f -
+ apiVersion: keda.sh/v1alpha1
+ kind: ScaledObject
+ metadata:
+ name: aws2az-queue-scaleobj
+ namespace: ${AQS_TARGET_NAMESPACE}
+ spec:
+ scaleTargetRef:
+        name: ${AQS_TARGET_DEPLOYMENT} # K8s deployment to target
+      minReplicaCount: 0 # We don't want pods if the queue is empty
+ maxReplicaCount: 15 # We don't want to have more than 15 replicas
+ pollingInterval: 30 # How frequently we should go for metrics (in seconds)
+ cooldownPeriod: 10 # How many seconds should we wait for downscale
+ triggers:
+ - type: azure-queue
+ authenticationRef:
+ name: keda-az-credentials
+ metadata:
+ queueName: ${AZURE_QUEUE_NAME}
+ accountName: ${AZURE_STORAGE_ACCOUNT_NAME}
+ queueLength: '5'
+ activationQueueLength: '20' # threshold for when the scaler is active
+ cloud: AzurePublicCloud
+    ---
+ apiVersion: keda.sh/v1alpha1
+ kind: TriggerAuthentication
+ metadata:
+ name: keda-az-credentials
+ namespace: $AQS_TARGET_NAMESPACE
+ spec:
+ podIdentity:
+ provider: azure-workload
+ identityId: '${workloadManagedIdentityClientId}'
+ EOF
+ ```
+
+    The manifest describes two resources: the **`ScaledObject`**, which defines how KEDA scales the target deployment based on queue depth, and the **`TriggerAuthentication` object**, which specifies to KEDA that the scaled object uses pod identity for authentication. The `identityId` property refers to the managed identity used as the workload identity.
+
+ When the scaled object is correctly installed and KEDA detects the scaling threshold is exceeded, it begins scheduling pods. If you're using k9s, you should see something like this:
+
+ :::image type="content" source="media/eks-edw-deploy/sample-k9s-scheduling-pods.png" lightbox="media/eks-edw-deploy/sample-k9s-scheduling-pods.png" alt-text="Screenshot showing an example of the K9s view with scheduling pods.":::
+
+    If you allow the producer to fill the queue with enough messages, KEDA might need to schedule more pods than there are nodes to serve. To accommodate this, Karpenter kicks in and starts provisioning nodes. If you're using k9s, you should see something like this:
+
+ :::image type="content" source="media/eks-edw-deploy/sample-k9s-scheduling-nodes.png" lightbox="media/eks-edw-deploy/sample-k9s-scheduling-nodes.png" alt-text="Screenshot showing an example of the K9s view with scheduling nodes.":::
+
+    In these two images, notice how the number of nodes whose names contain `aks-default` increased from one to three. If you stop the producer app from putting messages on the queue, the consumers eventually reduce the queue depth below the threshold, and both KEDA and Karpenter scale in. If you're using k9s, you should see something like this:
+
+ :::image type="content" source="media/eks-edw-deploy/sample-k9s-reduce.png" alt-text="Screenshot showing an example of the K9s view with reduced queue depth.":::
+
+## Clean up resources
+
+You can use the cleanup script (`/deployment/infra/cleanup.sh`) in our [GitHub repository][github-repo] to remove all the resources you created.
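+
+For example, you can run the script from the root of the cloned repository, following the same pattern as the other deployment scripts:
+
+```bash
+# Make the cleanup script executable, then run it
+chmod +x ./deployment/infra/cleanup.sh
+./deployment/infra/cleanup.sh
+```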
+
+## Next steps
+
+For more information on developing and running applications in AKS, see the following resources:
+
+- [Install existing applications with Helm in AKS][helm-aks]
+- [Deploy and manage a Kubernetes application from Azure Marketplace in AKS][k8s-aks]
+- [Deploy an application that uses OpenAI on AKS][openai-aks]
+
+<!-- LINKS -->
+[eks-edw-overview]: ./eks-edw-overview.md
+[az-login]: /cli/azure/authenticate-azure-cli-interactively#interactive-login
+[az-account-list]: /cli/azure/account#az_account_list
+[az-account-set]: /cli/azure/account#az_account_set
+[github-repo]: https://github.com/Azure-Samples/aks-event-driven-replicate-from-aws
+[prerequisites]: ./eks-edw-overview.md#prerequisites
+[azure-resource-group]: ../azure-resource-manager/management/overview.md
+[az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials
+[kubectl-get]: https://kubernetes.io/docs/reference/kubectl/generated/kubectl_get/
+[k9s]: https://k9scli.io/
+[k9s-install]: https://k9scli.io/topics/install/
+[helm-aks]: ./kubernetes-helm.md
+[k8s-aks]: ./deploy-marketplace.md
+[openai-aks]: ./open-ai-quickstart.md
+[nap-aks]: ./node-autoprovision.md
aks Eks Edw Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/eks-edw-overview.md
+
+ Title: Replicate an AWS EDW workload with KEDA and Karpenter in Azure Kubernetes Service (AKS)
+description: Learn how to replicate an AWS EKS event-driven workflow (EDW) workload with KEDA and Karpenter in AKS.
+ Last updated : 06/20/2024
+# Replicate an AWS event-driven workflow (EDW) workload with KEDA and Karpenter in Azure Kubernetes Service (AKS)
+
+In this article, you learn how to replicate an Amazon Web Services (AWS) Elastic Kubernetes Service (EKS) event-driven workflow (EDW) workload with [KEDA](https://keda.sh) and [Karpenter](https://karpenter.sh) in AKS.
+
+This workload is an implementation of the [competing consumers][competing-consumers] pattern using a producer/consumer app that facilitates efficient data processing by separating data production from data consumption. You use KEDA to scale pods running consumer processing and Karpenter to autoscale Kubernetes nodes.
+
+For a more detailed understanding of the AWS workload, see [Scalable and Cost-Effective Event-Driven Workloads with KEDA and Karpenter on Amazon EKS][edw-aws-eks].
+
+## Deployment process
+
+1. [**Understand the conceptual differences**](eks-edw-understand.md): Start by reviewing the differences between AWS and AKS in terms of services, architecture, and deployment.
+1. [**Rearchitect the workload**](eks-edw-rearchitect.md): Analyze the existing AWS workload architecture and identify the components or services that you need to redesign to fit AKS. You need to make changes to the workload infrastructure, application architecture, and deployment process.
+1. [**Update the application code**](eks-edw-refactor.md): Ensure your code is compatible with Azure APIs, services, and authentication models.
+1. [**Prepare for deployment**](eks-edw-prepare.md): Modify the AWS deployment process to use the Azure CLI.
+1. [**Deploy the workload**](eks-edw-deploy.md): Deploy the replicated workload in AKS and test the workload to ensure that it functions as expected.
+
+## Prerequisites
+
+- An Azure account. If you don't have one, create a [free account][azure-free] before you begin.
+- The **Owner** [Azure built-in role][azure-built-in-roles], or the **User Access Administrator** and **Contributor** built-in roles, on a subscription in your Azure account.
+- [Azure CLI][install-cli] version 2.56 or later.
+- [Azure Kubernetes Service (AKS) preview extension][aks-preview].
+- [jq][install-jq] version 1.5 or later.
+- [Python 3][install-python] or later.
+- [kubectl][install-kubectl] version 1.21.0 or later.
+- [Helm][install-helm] version 3.0.0 or later.
+- [Visual Studio Code][download-vscode] or equivalent.
+
+### Download the Azure application code
+
+The **completed** application code for this workflow is available in our [GitHub repository][github-repo]. Clone the repository to a directory called `aws-to-azure-edw-workshop` on your local machine by running the following command:
+
+```bash
+git clone https://github.com/Azure-Samples/aks-event-driven-replicate-from-aws ./aws-to-azure-edw-workshop
+```
+
+After you clone the repository, navigate to the `aws-to-azure-edw-workshop` directory and start Visual Studio Code by running the following commands:
+
+```bash
+cd aws-to-azure-edw-workshop
+code .
+```
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Understand platform differences][eks-edw-understand]
+
+<!-- LINKS -->
+[competing-consumers]: /azure/architecture/patterns/competing-consumers
+[edw-aws-eks]: https://aws.amazon.com/blogs/containers/scalable-and-cost-effective-event-driven-workloads-with-keda-and-karpenter-on-amazon-eks/
+[azure-free]: https://azure.microsoft.com/free/?WT.mc_id=A261C142F
+[azure-built-in-roles]: /azure/role-based-access-control/built-in-roles
+[install-cli]: /cli/azure/install-azure-cli
+[aks-preview]: ./draft.md#install-the-aks-preview-azure-cli-extension
+[install-jq]: https://jqlang.github.io/jq/
+[install-python]: https://www.python.org/downloads/
+[install-kubectl]: https://kubernetes.io/docs/tasks/tools/install-kubectl/
+[install-helm]: https://helm.sh/docs/intro/install/
+[download-vscode]: https://code.visualstudio.com/Download
+[github-repo]: https://github.com/Azure-Samples/aks-event-driven-replicate-from-aws
+[eks-edw-understand]: ./eks-edw-understand.md
aks Eks Edw Prepare https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/eks-edw-prepare.md
+
+ Title: Prepare to deploy the event-driven workflow (EDW) workload to Azure
+description: Take the necessary steps so you can deploy the EDW workload in Azure.
+ Last updated : 06/20/2024
+# Prepare to deploy the event-driven workflow (EDW) workload to Azure
+
+The AWS workload sample is deployed using Bash, CloudFormation, and AWS CLI. The consumer Python app is deployed as a container. The following sections describe how the Azure workflow is different. There are changes in the Bash scripts used to deploy the Azure Kubernetes Service (AKS) cluster and supporting infrastructure. Additionally, the Kubernetes deployment manifests are modified to configure KEDA to use an Azure Storage Queue scaler in place of the Amazon Simple Queue Service (SQS) scaler.
+
+The Azure workflow uses the [AKS Node Autoprovisioning (NAP)](/azure/aks/node-autoprovision) feature, which is based on Karpenter. This feature greatly simplifies the deployment and usage of Karpenter on AKS by eliminating the need to deploy Karpenter explicitly using Helm. However, if you need to deploy Karpenter directly, you can do so using the AKS [Karpenter provider on GitHub](https://github.com/Azure/karpenter-provider-azure).
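+
+For reference, a cluster with node autoprovisioning enabled is created along these lines. This is a sketch that assumes the aks-preview extension, and the resource names are placeholders:
+
+```azurecli-interactive
+# Create an AKS cluster with node autoprovisioning (NAP) enabled
+az aks create \
+    --resource-group <resource-group> \
+    --name <cluster-name> \
+    --node-provisioning-mode Auto
+```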
+
+## Configure Kubernetes deployment manifest
+
+AWS uses a Kubernetes deployment YAML manifest to deploy the workload to EKS. The AWS deployment YAML references SQS and DynamoDB for KEDA scalers, so you need to change these references to the KEDA-equivalent values that the Azure scalers use to connect to the Azure infrastructure. To do so, configure the [Azure Storage Queue KEDA scaler][azure-storage-queue-scaler].
+
+The following code snippets show example YAML manifests for the AWS and Azure implementations.
+
+### AWS implementation
+
+```yaml
+ spec:
+ serviceAccountName: $SERVICE_ACCOUNT
+ containers:
+ - name: <sqs app name>
+ image: <name of Python app container>
+ imagePullPolicy: Always
+ env:
+ - name: SQS_QUEUE_URL
+ value: https://<Url To SQS>/<region>/<QueueName>.fifo
+ - name: DYNAMODB_TABLE
+ value: <table name>
+ - name: AWS_REGION
+ value: <region>
+```
+
+### Azure implementation
+
+```yaml
+ spec:
+ serviceAccountName: $SERVICE_ACCOUNT
+ containers:
+ - name: keda-queue-reader
+ image: ${AZURE_CONTAINER_REGISTRY_NAME}.azurecr.io/aws2azure/aqs-consumer
+ imagePullPolicy: Always
+ env:
+ - name: AZURE_QUEUE_NAME
+ value: $AZURE_QUEUE_NAME
+ - name: AZURE_STORAGE_ACCOUNT_NAME
+ value: $AZURE_STORAGE_ACCOUNT_NAME
+ - name: AZURE_TABLE_NAME
+ value: $AZURE_TABLE_NAME
+```
+
+## Set environment variables
+
+Before executing any of the deployment steps, you need to set some configuration information using the following environment variables:
+
+- `K8sversion`: The version of Kubernetes deployed on the AKS cluster.
+- `KARPENTER_VERSION`: The version of Karpenter deployed on the AKS cluster.
+- `SERVICE_ACCOUNT`: The name of the service account associated with the managed identity.
+- `AQS_TARGET_DEPLOYMENT`: The name of the consumer app container deployment.
+- `AQS_TARGET_NAMESPACE`: The namespace into which the consumer app is deployed.
+- `AZURE_QUEUE_NAME`: The name of the Azure Storage Queue.
+- `AZURE_TABLE_NAME`: The name of the Azure Storage Table that stores the processed messages.
+- `LOCAL_NAME`: A simple root for resource names constructed in the deployment scripts.
+- `LOCATION`: The Azure region where the deployment is located.
+- `TAGS`: Any user-defined tags along with their associated values.
+- `STORAGE_ACCOUNT_SKU`: The Azure Storage Account SKU.
+- `ACR_SKU`: The Azure Container Registry SKU.
+- `AKS_NODE_COUNT`: The number of nodes.
+
+You can review the `environmentVariables.sh` Bash script in the `deployment` directory of our [GitHub repository][github-repo]. These environment variables have defaults set, so you don't need to update the file unless you want to change the defaults. The names of the Azure resources are created dynamically in the `deploy.sh` script and are saved in the `deploy.state` file. You can use the `deploy.state` file to create environment variables for Azure resource names.
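+
+For example, to override a default before running the deployment script, you can export a value in your shell. The following values are illustrative only:
+
+```bash
+export LOCATION="westus3"    # Azure region for the deployment
+export AKS_NODE_COUNT=3      # number of nodes in the AKS cluster
+```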
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Deploy the EDW workload to Azure][eks-edw-deploy]
+
+<!-- LINKS -->
+[azure-storage-queue-scaler]: https://keda.sh/docs/1.4/scalers/azure-storage-queue/
+[github-repo]: https://github.com/Azure-Samples/aks-event-driven-replicate-from-aws
+[eks-edw-deploy]: ./eks-edw-deploy.md
aks Eks Edw Rearchitect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/eks-edw-rearchitect.md
+
+ Title: Rearchitect the event-driven workflow (EDW) workload for Azure Kubernetes Service (AKS)
+description: Learn about architectural differences for replicating the AWS EKS scaling with KEDA and Karpenter event-driven workflow (EDW) workload in AKS.
+ Last updated : 06/20/2024
+# Rearchitect the event-driven workflow (EDW) workload for Azure Kubernetes Service (AKS)
+
+Now that you understand some key platform differences between AWS and Azure relevant to this workload, let's take a look at the workflow architecture and how to change it to work on AKS.
+
+## AWS workload architecture
+
+The AWS workload is a basic example of the [competing consumers design pattern][competing-consumers]. The AWS implementation is a reference architecture for managing scale and cost for event-driven workflows using [Kubernetes][kubernetes], [Kubernetes Event-driven Autoscaling (KEDA)][keda], and [Karpenter][karpenter].
+
+A producer app generates load by sending messages to a queue, and a consumer app running in a Kubernetes pod processes the messages and writes the results to a database. KEDA manages pod autoscaling through a declarative binding to the producer queue, and Karpenter manages node autoscaling with just enough compute to optimize for cost. Authentication to the queue and the database uses OAuth-based [service account token volume projection][service-account-volume-projection].
+
+The workload consists of an AWS EKS cluster to orchestrate consumers reading messages from an Amazon Simple Queue Service (SQS) queue and saving processed messages to an AWS DynamoDB table. A producer app generates messages and queues them in the AWS SQS queue. KEDA and Karpenter dynamically scale the number of EKS nodes and pods used for the consumers.
+
+The following diagram represents the architecture of the EDW workload in AWS:
+
+## Map AWS services to Azure services
+
+To recreate the AWS workload in Azure with minimal changes, use an Azure equivalent for each AWS service and keep authentication methods similar to the original. This example doesn't require the [advanced features][advanced-features-service-bus-event-hub] of Azure Service Bus or Azure Event Hubs. Instead, you can use [Azure Queue Storage][azure-queue-storage] to queue up work, and [Azure Table storage][azure-table-storage] to store results.
+
+The following table summarizes the service mapping:
+
+| **Service mapping** | **AWS service** | **Azure service** |
+|:--|:--|:--|
+| Queuing | Simple Queue Service | [Azure Queue Storage][azure-queue-storage] |
+| Persistence | DynamoDB (No SQL) | [Azure Table storage][azure-table-storage] |
+| Orchestration | Elastic Kubernetes Service (EKS) | [Azure Kubernetes Service (AKS)][aks] |
+| Identity | AWS IAM | [Microsoft Entra][microsoft-entra] |
+
+### Azure workload architecture
+
+The following diagram represents the architecture of the Azure EDW workload using the [AWS to Azure service mapping](#map-aws-services-to-azure-services):
+
+## Compute options
+
+Depending on cost considerations and resilience to possible node eviction, you can choose from different types of compute.
+
+In AWS, you can choose between on-demand compute (more expensive but no eviction risk) or Spot instances (cheaper but with eviction risk). In AKS, you can choose an [on-demand node pool][on-demand-node-pool] or a [Spot node pool][spot-node-pool] depending on your workload's needs.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Refactor application code for AKS][eks-edw-refactor]
+
+<!-- LINKS -->
+[competing-consumers]: /azure/architecture/patterns/competing-consumers
+[kubernetes]: https://kubernetes.io/
+[keda]: https://keda.sh/
+[karpenter]: https://karpenter.sh/
+[service-account-volume-projection]: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#serviceaccount-token-volume-projection
+[advanced-features-service-bus-event-hub]: ../service-bus-messaging/service-bus-azure-and-service-bus-queues-compared-contrasted.md
+[azure-queue-storage]: ../storage/queues/storage-queues-introduction.md
+[azure-table-storage]: ../storage/tables/table-storage-overview.md
+[aks]: ./what-is-aks.md
+[microsoft-entra]: /entra/fundamentals/whatis
+[on-demand-node-pool]: ./create-node-pools.md
+[spot-node-pool]: ./spot-node-pool.md
+[eks-edw-refactor]: ./eks-edw-refactor.md
aks Eks Edw Refactor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/eks-edw-refactor.md
+
+ Title: Update application code for the event-driven workflow (EDW) workload
+description: Learn how to update the application code of the AWS EKS event-driven workflow (EDW) workload to replicate it in AKS.
+ Last updated : 06/20/2024
+# Update application code for the event-driven workflow (EDW) workload
+
+This article outlines key application code updates to replicate the EDW workload in Azure using Azure SDKs to work with Azure services.
+
+## Data access code
+
+### AWS implementation
+
+The AWS workload relies on AWS services and their associated data access AWS SDKs. We already [mapped AWS services to equivalent Azure services][map-aws-to-azure], so we can now create the code to access data for the producer queue and consumer results database table in Python using Azure SDKs.
+
+### Azure implementation
+
+For the data plane, the producer message body (payload) is JSON, and it doesn't need any schema changes for Azure. The original consumer app saves the processed messages in a DynamoDB table. With minor modifications to the consumer app code, we can store the processed messages in an Azure Storage Table.
+
+## Authentication code
+
+### AWS implementation
+
+The AWS workload uses a resource-based policy that defines full access to an Amazon Simple Queue Service (SQS) resource:
+
+```json
+{
+ "Version": "2012-10-17",
+ "Statement": [
+ {
+ "Effect": "Allow",
+ "Action": "sqs:*",
+ "Resource": "*"
+ }
+ ]
+}
+```
+
+The AWS workload uses a resource-based policy that defines full access to a DynamoDB resource:
+
+```json
+{
+ "Version": "2012-10-17",
+ "Statement": [
+ {
+ "Effect": "Allow",
+ "Action": "dynamodb:*",
+ "Resource": "*"
+ }
+ ]
+}
+```
+
+In the AWS workload, you assign these policies using the AWS CLI:
+
+```bash
+aws iam create-policy --policy-name sqs-sample-policy --policy-document <filepath/filename>.json
+aws iam create-policy --policy-name dynamodb-sample-policy --policy-document <filepath/filename>.json
+aws iam create-role --role-name keda-sample-iam-role --assume-role-policy-document <filepath/filename>.json
+
+aws iam attach-role-policy --role-name keda-sample-iam-role --policy-arn=arn:aws:iam::${<AWSAccountID>}:policy/sqs-sample-policy
+aws iam attach-role-policy --role-name keda-sample-iam-role --policy-arn=arn:aws:iam::${<AWSAccountID>}:policy/dynamodb-sample-policy
+
+# Set up trust relationship Kubernetes federated identity credential and map IAM role via kubectl annotate serviceaccount
+```
+
+### Azure implementation
+
+Let's explore how to perform similar AWS service-to-service logic within the Azure environment using AKS.
+
+You apply two Azure RBAC role definitions to control data plane access to the Azure Storage Queue and the Azure Storage Table. These roles are like the resource-based policies that AWS uses to control access to SQS and DynamoDB. Azure RBAC roles aren't bundled with the resource. Instead, you assign the roles to a security principal and scope them to a given resource.
+
+In the Azure implementation of the EDW workload, you assign the roles to a user-assigned managed identity linked to a workload identity in an AKS pod. The Azure Python SDKs for the Azure Storage Queue and Azure Storage Table automatically use the context of the security principal to access data in both resources.
+
+You use the [**Storage Queue Data Contributor**][storage-queue-data-contributor] role to allow the role assignee to read, write, or delete against the Azure Storage Queue, and the [**Storage Table Data Contributor**][storage-table-data-contributor] role to permit the assignee to read, write, or delete data against an Azure Storage Table.
+
+The following steps show how to create a managed identity and assign the **Storage Queue Data Contributor** and **Storage Table Data Contributor** roles using the Azure CLI:
+
+1. Create a managed identity using the [`az identity create`][az-identity-create] command.
+
+ ```azurecli-interactive
+ managedIdentity=$(az identity create \
+ --resource-group $resourceGroup \
+        --name $managedIdentityName)
+ ```
+
+1. Assign the **Storage Queue Data Contributor** role to the managed identity using the [`az role assignment create`][az-role-assignment-create] command.
+
+ ```azurecli-interactive
+    principalId=$(echo $managedIdentity | jq -r '.principalId')
+
+ az role assignment create \
+ --assignee-object-id $principalId \
+        --assignee-principal-type ServicePrincipal \
+ --role "Storage Queue Data Contributor" \
+ --scope $resourceId
+ ```
+
+1. Assign the **Storage Table Data Contributor** role to the managed identity using the [`az role assignment create`][az-role-assignment-create] command.
+
+ ```azurecli-interactive
+ az role assignment create \
+ --assignee-object-id $principalId \
+        --assignee-principal-type ServicePrincipal \
+ --role "Storage Table Data Contributor" \
+ --scope $resourceId
+ ```
+
+To see a working example, refer to the `deploy.sh` script in our [GitHub repository][github-repo].
+
+## Producer code
+
+### AWS implementation
+
+The AWS workload uses the boto3 Python library to interact with Amazon SQS queues. The AWS IAM `AssumeRole` capability authenticates to the SQS endpoint using the IAM identity associated with the EKS pod hosting the application.
+
+```python
+import boto3
+# other imports removed for brevity
+sqs_queue_url = "https://<region>.amazonaws.com/<queueid>/source-queue.fifo"
+sqs_queue_client = boto3.client("sqs", region_name="<region>")
+response = sqs_queue_client.send_message(
+ QueueUrl = sqs_queue_url,
+ MessageBody = 'messageBody1',
+ MessageGroupId='messageGroup1')
+```
+
+### Azure implementation
+
+The Azure implementation uses the [Azure SDK for Python][azure-sdk-python] and passwordless OAuth authentication to interact with Azure Storage Queue services. The [`DefaultAzureCredential`][default-azure-credential] Python class is workload identity aware and uses the managed identity associated with workload identity to authenticate to the storage queue.
+
+The following example shows how to authenticate to an Azure Storage Queue using the `DefaultAzureCredential` class:
+
+```python
+from azure.identity import DefaultAzureCredential
+from azure.storage.queue import QueueClient
+# other imports removed for brevity
+
+# authenticate to the storage queue.
+account_url = "https://<storageaccountname>.queue.core.windows.net"
+default_credential = DefaultAzureCredential()
+aqs_queue_client = QueueClient(account_url, queue_name=queue_name, credential=default_credential)
+
+aqs_queue_client.create_queue()
+aqs_queue_client.send_message('messageBody1')
+```
+
+You can review the code for the queue producer (`aqs-producer.py`) in our [GitHub repository][github-repo].
+
+## Consumer code
+
+### AWS implementation
+
+The original AWS consumer code uses the boto3 Python library to interact with DynamoDB. The consumer part of the workload uses the same code as the producer for connecting to the AWS SQS queue to read messages. The consumer also contains Python code to connect to DynamoDB using the AWS IAM `AssumeRole` capability to authenticate to the DynamoDB endpoint using the IAM identity associated with the EKS pod hosting the application.
+
+```python
+# presumes policy deployment ahead of time such as: aws iam create-policy --policy-name <policy_name> --policy-document <policy_document.json>
+import boto3
+# other imports removed for brevity
+
+dynamodb = boto3.resource('dynamodb', region_name='<region>')
+table = dynamodb.Table('<dynamodb_table_name>')
+table.put_item(
+ Item = {
+ 'id':'<guid>',
+ 'data':jsonMessage["<message_data>"],
+ 'srcStamp':jsonMessage["<source_timestamp_from_message>"],
+ 'destStamp':'<current_timestamp_now>',
+ 'messageProcessingTime':'<duration>'
+ }
+)
+```
+
+### Azure implementation
+
+The Azure implementation uses the Azure SDK for Python to interact with Azure Storage Tables.
+
+Now you need the consumer code to authenticate to the Azure Storage Table. As discussed earlier, the schema used in the preceding section with DynamoDB is incompatible with Azure Storage Table. You use a table schema that's compatible with Azure Cosmos DB to store the same data as the AWS workload stores in DynamoDB.
+
+The following example shows the code required for Azure:
+
+```python
+from azure.identity import DefaultAzureCredential
+from azure.data.tables import TableServiceClient
+# other imports removed for brevity
+
+creds = DefaultAzureCredential()
+table = TableServiceClient(
+    endpoint=f"https://{storage_account_name}.table.core.windows.net/",
+    credential=creds
+).get_table_client(table_name=azure_table)
+
+entity = {
+    'PartitionKey': _id,
+    'RowKey': str(messageProcessingTime.total_seconds()),
+    'data': jsonMessage['msg'],
+    'srcStamp': jsonMessage['srcStamp'],
+    'dateStamp': current_dateTime
+}
+
+# Insert the entity into the table
+response = table.create_entity(entity=entity)
+```
+
+Unlike DynamoDB, the Azure Storage Table code specifies both `PartitionKey` and `RowKey`. The `PartitionKey` is similar to the unique ID in DynamoDB. A `PartitionKey` is a unique identifier for a partition in a logical container in Azure Storage Table. The `RowKey` is a unique identifier for a row within a given partition.
+
+You can review the complete producer and consumer code in our [GitHub repository][github-repo].
+
+## Create container images and push to Azure Container Registry
+
+Now, you can build the container images and push them to [Azure Container Registry (ACR)][acr-intro].
+
+In the `app` directory of the cloned repository, a shell script called `docker-command.sh` builds the container images and pushes them to ACR. Open the `.sh` file and review the code. The script builds the producer and consumer container images and pushes them to ACR. For more information, see [Introduction to container registries in Azure][acr-intro] and [Push and pull images in ACR][push-pull-acr].
+
+To build the container images and push them to ACR, make sure the environment variable `AZURE_CONTAINER_REGISTRY` is set to the name of the registry you want to push the images to, then run the following command:
+
+```bash
+./app/docker-command.sh
+```
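+
+For example, assuming a registry named `myedwregistry` (a hypothetical name), you might run:
+
+```bash
+export AZURE_CONTAINER_REGISTRY="myedwregistry"  # hypothetical registry name
+./app/docker-command.sh
+```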
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Prepare to deploy the EDW workload to Azure][eks-edw-prepare]
+
+<!-- LINKS -->
+[map-aws-to-azure]: ./eks-edw-rearchitect.md#map-aws-services-to-azure-services
+[storage-queue-data-contributor]: ../role-based-access-control/built-in-roles.md#storage
+[storage-table-data-contributor]: ../role-based-access-control/built-in-roles.md#storage
+[az-identity-create]: /cli/azure/identity#az_identity_create
+[az-role-assignment-create]: /cli/azure/role/assignment#az_role_assignment_create
+[github-repo]: https://github.com/Azure-Samples/aks-event-driven-replicate-from-aws
+[azure-sdk-python]: https://github.com/Azure/azure-sdk-for-python
+[default-azure-credential]: ../storage/queues/storage-quickstart-queues-python.md#authorize-access-and-create-a-client-object
+[acr-intro]: ../container-registry/container-registry-intro.md
+[push-pull-acr]: ../container-registry/container-registry-get-started-docker-cli.md
+[eks-edw-prepare]: ./eks-edw-prepare.md
aks Eks Edw Understand https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/eks-edw-understand.md
+
+ Title: Understand platform differences for the event-driven workflow (EDW) workload
+description: Learn about the key differences between the AWS and Azure platforms related to the EDW scaling workload.
+ Last updated : 06/20/2024
+# Understand platform differences for the event-driven workflow (EDW) workload
+
+Before you replicate the EDW workload in Azure, ensure you have a solid understanding of the operational differences between the AWS and Azure platforms.
+
+This article walks through some of the key concepts for this workload and provides links to resources for more information.
+
+## Identity and access management
+
+The AWS EDW workload uses AWS resource policies that assign AWS Identity and Access Management (IAM) roles to code running in Kubernetes pods on EKS. These roles allow those pods to access external resources such as queues or databases.
+
+Azure implements [role-based access control (RBAC)][azure-rbac] differently than AWS. In Azure, role assignments are **associated with a security principal** (user, group, managed identity, or service principal), and that role assignment is scoped to a resource.
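+
+For example, granting a managed identity data access to a storage queue is a role assignment that names the security principal, the role, and the resource scope. The following Azure CLI sketch uses placeholder values:
+
+```azurecli-interactive
+# Assign a data plane role to a security principal, scoped to one resource
+az role assignment create \
+    --assignee-object-id <principal-object-id> \
+    --assignee-principal-type ServicePrincipal \
+    --role "Storage Queue Data Contributor" \
+    --scope <storage-account-resource-id>
+```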
+
+## Authentication between services
+
+The AWS EDW workload uses service-to-service authentication to connect with a queue and a database. AWS EKS uses `AssumeRole`, a feature of IAM, to delegate permissions to AWS services and resources. This implementation allows services to assume an IAM role that grants specific access rights, ensuring secure and limited interactions between services.
+
+For Amazon Simple Queue Service (SQS) and DynamoDB database access using service-to-service authentication, the AWS workflow uses `AssumeRole` with EKS, which is an implementation of Kubernetes [service account token volume projection][service-account-volume-projection]. In AWS, when an entity assumes an IAM role, the entity temporarily gains some extra permissions. This way, the entity can perform actions and access resources granted by the assumed role without changing its own permissions permanently. After the assumed role's session token expires, the entity loses the extra permissions. An IAM policy is deployed that permits code running in an EKS pod to authenticate to DynamoDB as described in the policy definition.
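+
+For illustration, an entity can also assume a role explicitly with the AWS CLI; in the EDW workload, EKS performs this delegation automatically through service account token volume projection. The ARN and session name here are placeholders:
+
+```bash
+# Temporarily assume an IAM role and receive short-lived session credentials
+aws sts assume-role \
+    --role-arn arn:aws:iam::<account-id>:role/<role-name> \
+    --role-session-name edw-demo-session
+```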
+
+With AKS, you can use [Microsoft Entra Managed Identity][entra-managed-id] with [Microsoft Entra Workload ID][entra-workload-id].
+
+A [user-assigned managed identity][uami] is created and granted access to an Azure Storage Table by assigning it the **Storage Table Data Contributor** role. The managed identity is also granted access to an Azure Storage Queue by assigning it the **Storage Queue Data Contributor** role. These role assignments are scoped to specific resources, allowing the managed identity to read messages in a specific Azure Storage Queue and write them to a specific Azure Storage Table. The managed identity is then mapped to a Kubernetes workload identity that will be associated with the identity of the pods where the app containers are deployed. For more information, see [Use Microsoft Entra Workload ID with AKS][use-entra-aks].
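+
+As a sketch, the mapping between the managed identity and the Kubernetes service account is established with a federated credential. The names and the OIDC issuer URL in the following Azure CLI command are placeholders:
+
+```azurecli-interactive
+# Federate the managed identity with the Kubernetes service account
+az identity federated-credential create \
+    --name <federated-credential-name> \
+    --identity-name <managed-identity-name> \
+    --resource-group <resource-group> \
+    --issuer <aks-oidc-issuer-url> \
+    --subject system:serviceaccount:<namespace>:<service-account-name>
+```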
+
+On the client side, the Python Azure SDKs provide a transparent means of using the context of a workload identity, which eliminates the need for the developer to perform explicit authentication. Code running in a namespace or pod on AKS with an established workload identity can authenticate to external services using the mapped managed identity.
+
+## Resources
+
+The following resources can help you learn more about the differences between AWS and Azure for the technologies used in the EDW workload:
+
+| **Topic** | **AWS to Azure resource** |
+|||
+| Services | [AWS to Azure services comparison][aws-azure-services] |
+| Identity | [Mapping AWS IAM concepts to similar ones in Azure][aws-azure-identity] |
+| Accounts | [Azure and AWS accounts and subscriptions][aws-azure-accounts] |
+| Resource management | [Resource containers][aws-azure-resources] |
+| Messaging | [AWS SQS to Azure Queue Storage][aws-azure-messaging] |
+| Kubernetes | [AKS for Amazon EKS professionals][aws-azure-kubernetes] |
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Rearchitect the workload for AKS][eks-edw-rearchitect]
+
+<!-- LINKS -->
+[azure-rbac]: ../role-based-access-control/overview.md
+[entra-workload-id]: /azure/architecture/aws-professional/eks-to-aks/workload-identity#microsoft-entra-workload-id-for-kubernetes
+[service-account-volume-projection]: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#serviceaccount-token-volume-projection
+[entra-managed-id]: /entra/identity/managed-identities-azure-resources/overview
+[uami]: /azure/templates/microsoft.managedidentity/userassignedidentities?pivots=deployment-language-bicep
+[use-entra-aks]: ./workload-identity-overview.md#how-it-works
+[aws-azure-services]: /azure/architecture/aws-professional/services
+[aws-azure-identity]: https://techcommunity.microsoft.com/t5/fasttrack-for-azure/mapping-aws-iam-concepts-to-similar-ones-in-azure/ba-p/3612216
+[aws-azure-accounts]: /azure/architecture/aws-professional/accounts
+[aws-azure-resources]: /azure/architecture/aws-professional/resources
+[aws-azure-messaging]: /azure/architecture/aws-professional/messaging#simple-queue-service
+[aws-azure-kubernetes]: /azure/architecture/aws-professional/eks-to-aks/
+[eks-edw-rearchitect]: ./eks-edw-rearchitect.md
aks Howto Deploy Java Liberty App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/howto-deploy-java-liberty-app.md
If you're interested in providing feedback or working closely on your migration
## Prerequisites
-* An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+* An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
* Install the [Azure CLI](/cli/azure/install-azure-cli). If you're running on Windows or macOS, consider running Azure CLI in a Docker container. For more information, see [How to run the Azure CLI in a Docker container](/cli/azure/run-azure-cli-docker).
* Sign in to the Azure CLI by using the [az login](/cli/azure/reference-index#az-login) command. To finish the authentication process, follow the steps displayed in your terminal. For other sign-in options, see [Sign in with the Azure CLI](/cli/azure/authenticate-azure-cli).
* When you're prompted, install the Azure CLI extension on first use. For more information about extensions, see [Use extensions with the Azure CLI](/cli/azure/azure-cli-extensions-overview).
aks Howto Deploy Java Quarkus App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/howto-deploy-java-quarkus-app.md
This article shows you how to quickly deploy Red Hat Quarkus on Azure Kubernetes
## Prerequisites -- [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+- [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
- Azure Cloud Shell has all of these prerequisites preinstalled. For more, see [Quickstart for Azure Cloud Shell](/azure/cloud-shell/quickstart).
- If you're running the commands in this guide locally (instead of using Azure Cloud Shell), complete the following steps:
  - Prepare a local machine with a Unix-like operating system installed (for example, Ubuntu, macOS, or Windows Subsystem for Linux).
This article shows you how to quickly deploy Red Hat Quarkus on Azure Kubernetes
  - Install [cURL](https://curl.se/download.html).
  - Install the [Quarkus CLI](https://quarkus.io/guides/cli-tooling).
  - Azure CLI for Unix-like environments. This article requires only the Bash variant of Azure CLI.
- - [!INCLUDE [azure-cli-login](../../includes/azure-cli-login.md)]
+ - [!INCLUDE [azure-cli-login](~/reusable-content/ce-skilling/azure/includes/azure-cli-login.md)]
  - This article requires at least version 2.31.0 of Azure CLI. If using Azure Cloud Shell, the latest version is already installed.

## Create the app project
aks Howto Deploy Java Wls App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/howto-deploy-java-wls-app.md
If you're interested in providing feedback or working closely on your migration
## Prerequisites -- [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+- [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
- Ensure the Azure identity you use to sign in and complete this article has either the [Owner](/azure/role-based-access-control/built-in-roles#owner) role in the current subscription or the [Contributor](/azure/role-based-access-control/built-in-roles#contributor) and [User Access Administrator](/azure/role-based-access-control/built-in-roles#user-access-administrator) roles in the current subscription. For an overview of Azure roles, see [What is Azure role-based access control (Azure RBAC)?](/azure/role-based-access-control/overview) For details on the specific roles required by WLS on AKS, see [Azure built-in roles](/azure/role-based-access-control/built-in-roles).

  > [!NOTE]
  > These roles must be granted at the subscription level, not the resource group level.
aks Quick Kubernetes Deploy Azd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-azd.md
Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you qui
This quickstart assumes a basic understanding of Kubernetes concepts. For more information, see [Kubernetes core concepts for Azure Kubernetes Service (AKS)][kubernetes-concepts]. -- [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
+- [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
- For ease of use, run this sample on Bash or PowerShell in the [Azure Cloud Shell](/azure/cloud-shell/overview). For more information, see [Quickstart for Azure Cloud Shell](/azure/cloud-shell/quickstart).
aks Quick Kubernetes Deploy Bicep Extensibility Kubernetes Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-bicep-extensibility-kubernetes-provider.md
Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you qui
This quickstart assumes a basic understanding of Kubernetes concepts. For more information, see [Kubernetes core concepts for Azure Kubernetes Service (AKS)][kubernetes-concepts]. -- [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
+- [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
- Make sure that the identity you use to create your cluster has the appropriate minimum permissions. For more details on access and identity for AKS, see [Access and identity options for Azure Kubernetes Service (AKS)](../concepts-identity.md).
- To set up your environment for Bicep development, see [Install Bicep tools](../../azure-resource-manager/bicep/install.md). After completing the steps, you have [Visual Studio Code](https://code.visualstudio.com/) and the [Bicep extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-bicep). You also have either the latest [Azure CLI](/cli/azure/) version or the latest [Azure PowerShell module](/powershell/azure/new-azureps-module-az).
- To create an AKS cluster using a Bicep file, you provide an SSH public key. If you need this resource, see the following section. Otherwise, skip to [Review the Bicep file](#review-the-bicep-file).
aks Quick Kubernetes Deploy Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-bicep.md
Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you qui
* This quickstart assumes a basic understanding of Kubernetes concepts. For more information, see [Kubernetes core concepts for Azure Kubernetes Service (AKS)][kubernetes-concepts].
* You need an Azure account with an active subscription. If you don't have one, [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
* To learn more about creating a Windows Server node pool, see [Create an AKS cluster that supports Windows Server containers](quick-windows-container-deploy-cli.md).
-* [!INCLUDE [About Bicep](../../../includes/resource-manager-quickstart-bicep-introduction.md)]
+* [!INCLUDE [About Bicep](~/reusable-content/ce-skilling/azure/includes/resource-manager-quickstart-bicep-introduction.md)]
### [Azure CLI](#tab/azure-cli)
aks Quick Kubernetes Deploy Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-cli.md
Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you qui
This quickstart assumes a basic understanding of Kubernetes concepts. For more information, see [Kubernetes core concepts for Azure Kubernetes Service (AKS)][kubernetes-concepts]. -- [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
+- [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
[!INCLUDE [azure-cli-prepare-your-environment-no-header.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment-no-header.md)]
aks Quick Kubernetes Deploy Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-portal.md
Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you qui
This quickstart assumes a basic understanding of Kubernetes concepts. For more information, see [Kubernetes core concepts for Azure Kubernetes Service (AKS)][kubernetes-concepts]. -- [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
+- [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
- If you're unfamiliar with the Azure Cloud Shell, review [Overview of Azure Cloud Shell](../../cloud-shell/overview.md).
- Make sure that the identity you use to create your cluster has the appropriate minimum permissions. For more details on access and identity for AKS, see [Access and identity options for Azure Kubernetes Service (AKS)](../concepts-identity.md).
aks Quick Kubernetes Deploy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-powershell.md
Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you qui
This article assumes a basic understanding of Kubernetes concepts. For more information, see [Kubernetes core concepts for Azure Kubernetes Service (AKS)](../concepts-clusters-workloads.md). -- [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
+- [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
- For ease of use, try the PowerShell environment in [Azure Cloud Shell](/azure/cloud-shell/overview). For more information, see [Quickstart for Azure Cloud Shell](/azure/cloud-shell/quickstart). If you want to use PowerShell locally, then install the [Az PowerShell](/powershell/azure/new-azureps-module-az) module and connect to your Azure account using the [Connect-AzAccount](/powershell/module/az.accounts/Connect-AzAccount) cmdlet. Make sure that you run the commands with administrative privileges. For more information, see [Install Azure PowerShell][install-azure-powershell].
aks Quick Kubernetes Deploy Rm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-rm-template.md
Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you qui
- Deploy an AKS cluster using an Azure Resource Manager template.
- Run a sample multi-container application with a group of microservices and web front ends simulating a retail scenario.

> [!NOTE]
> To get started with quickly provisioning an AKS cluster, this article includes steps to deploy a cluster with default settings for evaluation purposes only. Before deploying a production-ready cluster, we recommend that you familiarize yourself with our [baseline reference architecture][baseline-reference-architecture] to consider how it aligns with your business requirements.
Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you qui
This article assumes a basic understanding of Kubernetes concepts. For more information, see [Kubernetes core concepts for Azure Kubernetes Service (AKS)](../concepts-clusters-workloads.md). -- [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
+- [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
- Make sure that the identity you use to create your cluster has the appropriate minimum permissions. For more details on access and identity for AKS, see [Access and identity options for Azure Kubernetes Service (AKS)](../concepts-identity.md).
aks Quick Windows Container Deploy Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-windows-container-deploy-cli.md
Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you qui
This quickstart assumes a basic understanding of Kubernetes concepts. For more information, see [Kubernetes core concepts for Azure Kubernetes Service (AKS)](../concepts-clusters-workloads.md). -- [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
+- [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
[!INCLUDE [azure-cli-prepare-your-environment-no-header.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment-no-header.md)]
aks Quick Windows Container Deploy Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-windows-container-deploy-portal.md
Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you qui
This quickstart assumes a basic understanding of Kubernetes concepts. For more information, see [Kubernetes core concepts for Azure Kubernetes Service (AKS)](../concepts-clusters-workloads.md). -- [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
+- [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
- If you're unfamiliar with the Azure Cloud Shell, review [Overview of Azure Cloud Shell](/azure/cloud-shell/overview).
- Make sure that the identity you're using to create your cluster has the appropriate minimum permissions. For more details on access and identity for AKS, see [Access and identity options for Azure Kubernetes Service (AKS)](../concepts-identity.md).
aks Quick Windows Container Deploy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-windows-container-deploy-powershell.md
Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you qui
This quickstart assumes a basic understanding of Kubernetes concepts. For more information, see [Kubernetes core concepts for Azure Kubernetes Service (AKS)](../concepts-clusters-workloads.md). -- [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
+- [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
- For ease of use, try the PowerShell environment in [Azure Cloud Shell](/azure/cloud-shell/overview). For more information, see [Quickstart for Azure Cloud Shell](/azure/cloud-shell/quickstart). If you want to use PowerShell locally, then install the [Az PowerShell](/powershell/azure/new-azureps-module-az) module and connect to your Azure account using the [Connect-AzAccount](/powershell/module/az.accounts/Connect-AzAccount) cmdlet. Make sure that you run the commands with administrative privileges. For more information, see [Install Azure PowerShell][install-azure-powershell].
aks Long Term Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/long-term-support.md
To carry out an in-place upgrade to the latest LTS version, you need to specify
```azurecli
az aks upgrade --resource-group myResourceGroup --name myAKSCluster --kubernetes-version 1.32.2
```
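Before picking a target version, you can list what's available in your region. A quick sketch, assuming `eastus` as a placeholder region (the output columns, including the support plan, vary by CLI version):

```azurecli
# List available Kubernetes versions for a region, including their support plans
az aks get-versions --location eastus --output table
```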
-> [!NOTE]
->If you use any programming/scripting logic to list and select a minor version of Kubernetes before creating clusters with the `ListKubernetesVersions` API, note that starting from Kubernetes v1.27, the API returns `SupportPlan` as `[KubernetesOfficial, AKSLongTermSupport]`. Please ensure you update any logic to exclude `AKSLongTermSupport` versions to avoid any breaks and choose `KubernetesOfficial` support plan versions. Otherwise, if LTS is indeed your path forward please first opt-into the Premium tier and the `AKSLongTermSupport` support plan versions from the `ListKubernetesVersions` API before creating clusters.
- > [!NOTE] > The next Long Term Support version after 1.27 is to be determined. However, customers will get a minimum of 6 months of overlap between 1.27 LTS and the next LTS version to plan upgrades. > Kubernetes 1.32.2 is used as an example version in this article. Check the [AKS release tracker](release-tracker.md) for available Kubernetes releases.
aks Use Kms Etcd Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-kms-etcd-encryption.md
description: Learn how to use Key Management Service (KMS) etcd encryption with
Previously updated : 06/19/2024 Last updated : 06/26/2024 # Add Key Management Service etcd encryption to an Azure Kubernetes Service cluster
After you change the key ID (including changing either the key name or the key v
> [!WARNING] > Remember to update all secrets after key rotation. If you don't update all secrets, they become inaccessible if the keys that were created earlier no longer exist or no longer work. >
-> After you rotate the key, the previous key (key1) is still cached and shouldn't be deleted. If you want to delete the previous key (key1) immediately, you need to rotate the key twice. Then key2 and key3 are cached, and key1 can be deleted without affecting the existing cluster.
+> KMS uses two keys at the same time. After the first key rotation, make sure that both the old and new keys remain valid (not expired) until the next key rotation. After the second key rotation, the oldest key can be safely removed or allowed to expire.
```azurecli-interactive
az aks update --name myAKSCluster --resource-group MyResourceGroup --enable-azure-keyvault-kms --azure-keyvault-kms-key-vault-network-access "Public" --azure-keyvault-kms-key-id $NEW_KEY_ID
```
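The command assumes `$NEW_KEY_ID` already points at the new key version. As a sketch with hypothetical vault and key names, you might create a new version of the existing key and capture its identifier first:

```azurecli-interactive
# Create a new version of the existing key and capture its ID (hypothetical names)
NEW_KEY_ID=$(az keyvault key create --vault-name MyKeyVault --name MyKmsKey --query 'key.kid' --output tsv)
echo $NEW_KEY_ID
```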
aks Workload Identity Deploy Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/workload-identity-deploy-cluster.md
This article assumes you have a basic understanding of Kubernetes concepts. For
## Prerequisites
-* [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+* [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
* This article requires version 2.47.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed. * Make sure that the identity that you're using to create your cluster has the appropriate minimum permissions. For more information about access and identity for AKS, see [Access and identity options for Azure Kubernetes Service (AKS)][aks-identity-concepts]. * If you have multiple Azure subscriptions, select the appropriate subscription ID in which the resources should be billed using the [az account set][az-account-set] command.
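For example, to target a specific subscription (the ID below is a placeholder):

```azurecli
# List available subscriptions, then select the one to bill resources to
az account list --output table
az account set --subscription "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"
```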
analysis-services Analysis Services Create Bicep File https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-create-bicep-file.md
This quickstart describes how to create an Analysis Services server resource in your Azure subscription by using [Bicep](../azure-resource-manager/bicep/overview.md). ## Prerequisites
analysis-services Analysis Services Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-create-powershell.md
This quickstart describes using PowerShell from the command line to create an Az
## Prerequisites - **Azure subscription**: Visit [Azure Free Trial](https://azure.microsoft.com/offers/ms-azr-0044p/) to create an account. - **Microsoft Entra ID**: Your subscription must be associated with a Microsoft Entra tenant and you must have an account in that directory. To learn more, see [Authentication and user permissions](analysis-services-manage-users.md).
analysis-services Analysis Services Create Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-create-template.md
This quickstart describes how to create an Analysis Services server resource in your Azure subscription by using an Azure Resource Manager template (ARM template). If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
analysis-services Analysis Services Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-logging.md
This article describes how to set up, view, and manage [Azure Monitor resource l
![Resource logging to Storage, Event Hubs, or Azure Monitor logs](./media/analysis-services-logging/aas-logging-overview.png) ## What's logged?
analysis-services Analysis Services Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-powershell.md
This article describes PowerShell cmdlets used to perform Azure Analysis Service
Server resource management tasks like creating or deleting a server, suspending or resuming server operations, or changing the service level (tier) use Azure Analysis Services cmdlets. Other tasks for managing databases like adding or removing role members, processing, or partitioning use cmdlets included in the same SqlServer module as SQL Server Analysis Services. ## Permissions
analysis-services Analysis Services Scale Out https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-scale-out.md
Return status codes:
### PowerShell Before using PowerShell, [install or update the latest Azure PowerShell module](/powershell/azure/install-azure-powershell).
analysis-services Analysis Services Server Admins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-server-admins.md
If server firewall is enabled, server administrator client computer IP addresses
## PowerShell Use [New-AzAnalysisServicesServer](/powershell/module/az.analysisservices/new-azanalysisservicesserver) cmdlet to specify the Administrator parameter when creating a new server. <br> Use [Set-AzAnalysisServicesServer](/powershell/module/az.analysisservices/set-azanalysisservicesserver) cmdlet to modify the Administrator parameter for an existing server.
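A minimal sketch, assuming hypothetical server, resource group, and account names; the `-Administrator` parameter takes a comma-separated list of user principal names:

```azurepowershell
# Replace the server administrators on an existing server (hypothetical names)
Set-AzAnalysisServicesServer -Name "myserver" -ResourceGroupName "myResourceGroup" `
    -Administrator "philipc@contoso.com,meganb@contoso.com"
```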
analysis-services Analysis Services Service Principal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-service-principal.md
Service principal appID and password or certificate can be used in connection st
### PowerShell #### <a name="azmodule"></a>Using Az.AnalysisServices module
api-center Set Up Api Center Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/set-up-api-center-arm-template.md
[!INCLUDE [quickstart-intro](includes/quickstart-intro.md)] If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
If your environment meets the prerequisites and you're familiar with using ARM t
[!INCLUDE [include](~/reusable-content/azure-cli/azure-cli-prepare-your-environment-no-header.md)] * For Azure PowerShell:
- [!INCLUDE [azure-powershell-requirements-no-header.md](../../includes/azure-powershell-requirements-no-header.md)]
+ [!INCLUDE [azure-powershell-requirements-no-header.md](~/reusable-content/ce-skilling/azure/includes/azure-powershell-requirements-no-header.md)]
## Review the template
api-center Set Up Api Center Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/set-up-api-center-bicep.md
[!INCLUDE [quickstart-intro](includes/quickstart-intro.md)] [!INCLUDE [quickstart-prerequisites](includes/quickstart-prerequisites.md)]
[!INCLUDE [include](~/reusable-content/azure-cli/azure-cli-prepare-your-environment-no-header.md)] * For Azure PowerShell:
- [!INCLUDE [azure-powershell-requirements-no-header.md](../../includes/azure-powershell-requirements-no-header.md)]
+ [!INCLUDE [azure-powershell-requirements-no-header.md](~/reusable-content/ce-skilling/azure/includes/azure-powershell-requirements-no-header.md)]
## Review the Bicep file
api-management Api Management Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-capacity.md
To follow the steps in this article, you must have:
+ An active Azure subscription.
- [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+ [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
+ An API Management instance. For more information, see [Create an Azure API Management instance](get-started-create-service-instance.md).
api-management Api Management Howto Ca Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-ca-certificates.md
The article shows how to manage CA certificates of an Azure API Management servi
CA certificates uploaded to API Management can only be used for certificate validation by the managed API Management gateway. If you use the [self-hosted gateway](self-hosted-gateway-overview.md), learn how to [create a custom CA for self-hosted gateway](#create-custom-ca-for-self-hosted-gateway), later in this article. ## <a name="step1"> </a>Upload a CA certificate
api-management Api Management Howto Configure Custom Domain Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-configure-custom-domain-gateway.md
To perform the steps described in this article, you must have:
- An active Azure subscription.
- [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+ [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
- An API Management instance. For more information, see [Create an Azure API Management instance](get-started-create-service-instance.md). - A self-hosted gateway. For more information, see [How to provision self-hosted gateway](api-management-howto-provision-self-hosted-gateway.md)
api-management Api Management Howto Disaster Recovery Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-disaster-recovery-backup-restore.md
This article shows how to automate backup and restore operations of your API Man
> The restore operation doesn't change the custom hostname configuration of the target service. We recommend using the same custom hostname and TLS certificate for both active and standby services so that, after the restore operation completes, traffic can be redirected to the standby instance with a simple DNS CNAME change. ## Prerequisites
api-management Api Management Howto Integrate Internal Vnet Appgateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-integrate-internal-vnet-appgateway.md
For architectural guidance, see:
## Prerequisites To follow the steps described in this article, you must have: * An active Azure subscription
- [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+ [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
* Certificates - Personal Information Exchange (PFX) files for API Management's custom host names: gateway, developer portal, and management endpoint.
api-management Api Management Howto Mutual Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-mutual-certificates.md
Using key vault certificates is recommended because it helps improve API Managem
## Prerequisites * If you have not created an API Management service instance yet, see [Create an API Management service instance](get-started-create-service-instance.md). * You should have your backend service configured for client certificate authentication. To configure certificate authentication in the Azure App Service, refer to [this article][to configure certificate authentication in Azure WebSites refer to this article].
api-management Api Management Howto Use Managed Service Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-use-managed-service-identity.md
To set up a managed identity in the Azure portal, you'll first create an API Man
### Azure PowerShell The following steps walk you through creating an API Management instance and assigning it an identity by using Azure PowerShell.
To set up a managed identity in the portal, you'll first create an API Managemen
### Azure PowerShell The following steps walk you through creating an API Management instance and assigning it an identity by using Azure PowerShell.
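A minimal sketch of that flow, assuming hypothetical names; the `-SystemAssignedIdentity` switch assigns the identity at creation time:

```azurepowershell
# Create an API Management instance with a system-assigned managed identity (hypothetical names)
New-AzApiManagement -ResourceGroupName "myResourceGroup" -Name "my-apim" -Location "West US" `
    -Organization "Contoso" -AdminEmail "admin@contoso.com" -SystemAssignedIdentity
```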
api-management Api Management Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-role-based-access-control.md
Azure API Management relies on Azure role-based access control (Azure RBAC) to enable fine-grained access management for API Management services and entities (for example, APIs and policies). This article gives you an overview of the built-in and custom roles in API Management. For more information on access management in the Azure portal, see [Get started with access management in the Azure portal](../role-based-access-control/overview.md). ## Built-in service roles
api-management Api Management Using With Internal Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-using-with-internal-vnet.md
Use API Management in internal mode to:
For configurations specific to the *external* mode, where the API Management endpoints are accessible from the public internet, and backend services are located in the network, see [Deploy your Azure API Management instance to a virtual network - external mode](api-management-using-with-vnet.md). [!INCLUDE [api-management-virtual-network-prerequisites](../../includes/api-management-virtual-network-prerequisites.md)]
api-management Api Management Using With Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-using-with-vnet.md
This article explains how to set up VNet connectivity for your API Management in
For configurations specific to the *internal* mode, where the endpoints are accessible only within the VNet, see [Deploy your Azure API Management instance to a virtual network - internal mode](./api-management-using-with-internal-vnet.md). [!INCLUDE [api-management-virtual-network-prerequisites](../../includes/api-management-virtual-network-prerequisites.md)]
api-management Credentials How To User Delegated https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/credentials-how-to-user-delegated.md
In this scenario, you configure a managed [connection](credentials-overview.md)
- A running API Management instance. If you need to, [create an Azure API Management instance](get-started-create-service-instance.md). - A backend OAuth 2.0 API that you want to access on behalf of the user or group. ## Step 1: Provision Azure API Management Data Plane service principal
api-management Get Started Create Service Instance Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/get-started-create-service-instance-cli.md
This quickstart describes the steps for creating a new API Management instance b
[!INCLUDE [api-management-quickstart-intro](../../includes/api-management-quickstart-intro.md)] [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
api-management Get Started Create Service Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/get-started-create-service-instance.md
This quickstart describes the steps for creating a new API Management instance u
[!INCLUDE [api-management-quickstart-intro](../../includes/api-management-quickstart-intro.md)] ## Sign in to Azure
api-management Graphql Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/graphql-api.md
If you want to import a GraphQL schema and set up field resolvers using REST or
- Azure PowerShell
- [!INCLUDE [azure-powershell-requirements-no-header](../../includes/azure-powershell-requirements-no-header.md)]
+ [!INCLUDE [azure-powershell-requirements-no-header](~/reusable-content/ce-skilling/azure/includes/azure-powershell-requirements-no-header.md)]
## Add a GraphQL API
api-management Import Api From Oas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/import-api-from-oas.md
In this article, you learn how to:
* Azure PowerShell
- [!INCLUDE [azure-powershell-requirements-no-header](../../includes/azure-powershell-requirements-no-header.md)]
+ [!INCLUDE [azure-powershell-requirements-no-header](~/reusable-content/ce-skilling/azure/includes/azure-powershell-requirements-no-header.md)]
## <a name="create-api"> </a>Import a backend API
api-management Import Soap Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/import-soap-api.md
In this article, you learn how to:
* Azure PowerShell
- [!INCLUDE [azure-powershell-requirements-no-header](../../includes/azure-powershell-requirements-no-header.md)]
+ [!INCLUDE [azure-powershell-requirements-no-header](~/reusable-content/ce-skilling/azure/includes/azure-powershell-requirements-no-header.md)]
api-management Powershell Create Service Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/powershell-create-service-instance.md
In this quickstart, you create a new API Management instance by using Azure Powe
[!INCLUDE [api-management-quickstart-intro](../../includes/api-management-quickstart-intro.md)] ## Prerequisites ## Create resource group
api-management Quickstart Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/quickstart-arm-template.md
This quickstart describes how to use an Azure Resource Manager template (ARM tem
[!INCLUDE [api-management-quickstart-intro](../../includes/api-management-quickstart-intro.md)] If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
api-management Quickstart Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/quickstart-bicep.md
This quickstart describes how to use a Bicep file to create an Azure API Managem
[!INCLUDE [api-management-quickstart-intro](../../includes/api-management-quickstart-intro.md)] ## Prerequisites
This quickstart describes how to use a Bicep file to create an Azure API Managem
- For Azure PowerShell:
- [!INCLUDE [azure-powershell-requirements-no-header](../../includes/azure-powershell-requirements-no-header.md)]
+ [!INCLUDE [azure-powershell-requirements-no-header](~/reusable-content/ce-skilling/azure/includes/azure-powershell-requirements-no-header.md)]
## Review the Bicep file
api-management Quickstart Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/quickstart-terraform.md
In this article, you learn how to:
- For Azure PowerShell:
- [!INCLUDE [azure-powershell-requirements-no-header](../../includes/azure-powershell-requirements-no-header.md)]
+ [!INCLUDE [azure-powershell-requirements-no-header](~/reusable-content/ce-skilling/azure/includes/azure-powershell-requirements-no-header.md)]
## Implement the Terraform code
api-management V2 Service Tiers Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/v2-service-tiers-overview.md
The v2 tiers are available in the following regions:
* France Central * Germany West Central * North Europe
+* West Europe
+* UK South
+* UK West
* Central India
+* Brazil South
+* Australia Central
* Australia East * Australia Southeast * East Asia
api-management Vscode Create Service Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/vscode-create-service-instance.md
This quickstart describes the steps to create a new API Management instance usin
## Prerequisites Also, ensure you've installed the following:
app-service App Service Configure Premium Tier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-configure-premium-tier.md
az appservice plan create \
### Azure PowerShell The following command creates an App Service plan in _P1V3_. The options for `-WorkerSize` are _Small_, _Medium_, and _Large_.
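A sketch of that command, assuming hypothetical resource group and plan names; in the PremiumV3 tier, `-WorkerSize Small` corresponds to P1V3:

```azurepowershell
# Create a PremiumV3 (P1V3) App Service plan (hypothetical names)
New-AzAppServicePlan -ResourceGroupName "myResourceGroup" -Name "myPlan" `
    -Location "West US" -Tier "PremiumV3" -WorkerSize "Small"
```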
app-service App Service Sql Asp Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-sql-asp-github-actions.md
In this tutorial, you learn how to:
> - Use a GitHub Actions workflow to add resources to Azure with an Azure Resource Manager template (ARM template) > - Use a GitHub Actions workflow to build an ASP.NET Core application ## Prerequisites
app-service App Service Sql Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-sql-github-actions.md
In this tutorial, you learn how to:
> - Use a GitHub Actions workflow to add resources to Azure with an Azure Resource Manager template (ARM template) > - Use a GitHub Actions workflow to build a container with the latest web app changes ## Prerequisites
app-service App Service Web App Cloning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-web-app-cloning.md
# Azure App Service App Cloning Using PowerShell With the release of Microsoft Azure PowerShell version 1.1.0, a new option was added to `New-AzWebApp` that lets you clone an existing App Service app to a newly created app in a different region or in the same region. This option enables customers to deploy multiple apps across different regions quickly and easily.
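A minimal cloning sketch, assuming hypothetical app, plan, and group names; the source app is passed via the `-SourceWebApp` parameter:

```azurepowershell
# Clone an existing app into a new app in another region (hypothetical names)
$srcApp = Get-AzWebApp -ResourceGroupName "SourceRG" -Name "source-webapp"
New-AzWebApp -ResourceGroupName "DestRG" -Name "dest-webapp" -Location "North Europe" `
    -AppServicePlan "DestPlan" -SourceWebApp $srcApp
```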
app-service App Service Web Tutorial Dotnet Sqldatabase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-web-tutorial-dotnet-sqldatabase.md
In this tutorial, you learn how to:
> * Update the data model and redeploy the app > * Stream logs from Azure to your terminal ## Prerequisites
You can keep the generated web app name, or change it to another unique name (va
#### Create a resource group 1. Next to **Resource Group**, click **New**.
app-service App Service Web Tutorial Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-web-tutorial-rest-api.md
In this tutorial, you learn how to:
You can follow the steps in this tutorial on macOS, Linux, or Windows. ## Prerequisites
In this step, you set up the local ASP.NET Core project. App Service supports th
1. To stop ASP.NET Core at any time, press `Ctrl+C` in the terminal. ## Deploy app to Azure
In this step, you deploy your .NET Core application to App Service.
### Create a resource group ### Create an App Service plan
app-service Configure Custom Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-custom-container.md
When persistent storage is disabled, then writes to the `C:\home` directory aren
The only exception is the `C:\home\LogFiles` directory, which is used to store the container and application logs. This folder always persists upon app restarts if [application logging is enabled](troubleshoot-diagnostic-logs.md?#enable-application-logging-windows) with the **File System** option, regardless of whether persistent storage is enabled or disabled. In other words, enabling or disabling persistent storage doesn't affect the application logging behavior.
-By default, persistent storage is *disabled* on Windows custom containers. To enable it, set the `WEBSITES_ENABLE_APP_SERVICE_STORAGE` app setting value to `true` via the [Cloud Shell](https://shell.azure.com). In Bash:
+By default, persistent storage is *enabled* on Windows custom containers. To disable it, set the `WEBSITES_ENABLE_APP_SERVICE_STORAGE` app setting value to `false` via the [Cloud Shell](https://shell.azure.com). In Bash:
```azurecli-interactive
-az webapp config appsettings set --resource-group <group-name> --name <app-name> --settings WEBSITES_ENABLE_APP_SERVICE_STORAGE=true
+az webapp config appsettings set --resource-group <group-name> --name <app-name> --settings WEBSITES_ENABLE_APP_SERVICE_STORAGE=false
``` In PowerShell: ```azurepowershell-interactive
-Set-AzWebApp -ResourceGroupName <group-name> -Name <app-name> -AppSettings @{"WEBSITES_ENABLE_APP_SERVICE_STORAGE"=true}
+Set-AzWebApp -ResourceGroupName <group-name> -Name <app-name> -AppSettings @{"WEBSITES_ENABLE_APP_SERVICE_STORAGE"=false}
``` ::: zone-end
app-service Deploy Local Git https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-local-git.md
This how-to guide shows you how to deploy your app to [Azure App Service](overvi
To follow the steps in this how-to guide: -- [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+- [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
- [Install Git](https://www.git-scm.com/downloads).
app-service Deploy Zip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-zip.md
This article shows you how to deploy your code as a ZIP, WAR, JAR, or EAR packag
To complete the steps in this article, [create an App Service app](./index.yml), or use an app that you created for another tutorial. [!INCLUDE [Create a project ZIP file](../../includes/app-service-web-deploy-zip-prepare.md)]
app-service Side By Side Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/side-by-side-migrate.md
description: Learn how to migrate your App Service Environment v2 to App Service
Previously updated : 6/13/2024 Last updated : 6/26/2024 # Migration to App Service Environment v3 using the side-by-side migration feature
App Service can automate migration of your App Service Environment v1 and v2 to
The side-by-side migration feature automates your migration to App Service Environment v3. The side-by-side migration feature creates a new App Service Environment v3 with all of your apps in a different subnet. Your existing App Service Environment isn't deleted until you initiate its deletion at the end of the migration process. Because of this process, there's a rollback option if you need to cancel your migration. This migration option is best for customers who want to migrate to App Service Environment v3 with zero downtime and can support using a different subnet for their new environment. If you need to use the same subnet and can support about one hour of application downtime, see the [in-place migration feature](migrate.md). For manual migration options that allow you to migrate at your own pace, see [manual migration options](migration-alternatives.md). > [!IMPORTANT]
-> It is recommended to use this feature for dev environments first before migrating any production environments to ensure there are no unexpected issues. Please provide any feedback related to this article or the feature using the buttons at the bottom of the page.
+> If you fail to complete all steps described in this tutorial, you'll experience downtime. For example, if you don't update all dependent resources with the new IP addresses, or you don't allow access to and from your new subnet (as is the case for your custom domain suffix key vault), you'll experience downtime until that's addressed.
+>
+> It's recommended to use this feature on dev environments first, before migrating any production environments, so you can rehearse the process and ensure there are no unexpected issues. Please provide any feedback related to this article or the feature using the buttons at the bottom of the page.
> ## Supported scenarios
For related commands to check if your subscription or resource group has locks,
If your existing App Service Environment uses a custom domain suffix, you need to [configure one for your new App Service Environment v3 resource during the migration process](#add-a-custom-domain-suffix-optional). Migration fails if you don't configure a custom domain suffix and are using one currently. For more information on App Service Environment v3 custom domain suffixes, including requirements, step-by-step instructions, and best practices, see [Custom domain suffix for App Service Environments](./how-to-custom-domain-suffix.md). > [!NOTE]
-> If you're configuring a custom domain suffix, when you're adding the network permissions on your Azure key vault, be sure that your key vault allows access from your App Service Environment v3's new subnet. If you're accessing your key vault using a private endpoint, ensure you've configured private access correctly with the new subnet.
+> If you're configuring a custom domain suffix, when you're adding the network permissions on your Azure key vault, be sure that your key vault allows access from your App Service Environment v3's new subnet. If you're accessing your key vault using a private endpoint, ensure you've configured private access correctly with the new subnet. You'll experience downtime if you don't correctly set this access before migration.
> You can make your new App Service Environment v3 zone redundant if your existing environment is in a [region that supports zone redundancy](./overview.md#regions). Zone redundancy can be configured by setting the `zoneRedundant` property to `true`. Zone redundancy is an optional configuration. This configuration can only be set during the creation of your new App Service Environment v3 and can't be removed at a later time.
app-service Using https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/using.md
Title: Use an App Service Environment
description: Learn how to use your App Service Environment to host isolated applications. Previously updated : 03/27/2023 Last updated : 06/26/2024
Every App Service app runs in an App Service plan. App Service Environments hold
When you scale an App Service plan, the needed infrastructure is added automatically. Be aware that there's a time delay to scale operations while the infrastructure is being added. For example, when you scale an App Service plan, and you have another scale operation of the same operating system and size running, there might be a delay of a few minutes until the requested scale starts.
-A scale operation on one size and operating system won't affect scaling of the other combinations of size and operating system. For example, if you are scaling a Windows I2v2 App Service plan, a scale operation to a Windows I3v2 App Service plan starts immediately. Scaling normally takes less than 15 minutes.
+A scale operation on one size and operating system won't affect scaling of the other combinations of size and operating system. For example, if you are scaling a Windows I2v2 App Service plan, a scale operation to a Windows I3v2 App Service plan starts immediately. Scaling normally takes less than 15 minutes but can take up to 45 minutes.
In a multi-tenant App Service, scaling is immediate, because a pool of shared resources is readily available to support it. App Service Environment is a single-tenant service, so there's no shared buffer, and resources are allocated based on need.
app-service Manage Scale Per App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/manage-scale-per-app.md
# High-density hosting on Azure App Service using per-app scaling When using App Service, you can scale your apps by scaling the [App Service plan](overview-hosting-plans.md) they run on. When multiple apps are running in the same App Service plan, each scaled-out instance runs all the apps in the plan.
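Per-app scaling lets an app scale independently of its plan instead. A minimal sketch, assuming hypothetical names: the plan opts in with `-PerSiteScaling`, and each app then carries its own worker count:

```azurepowershell
# Create a plan with per-app scaling enabled (hypothetical names)
New-AzAppServicePlan -ResourceGroupName "myResourceGroup" -Name "myPlan" `
    -Location "West US" -Tier "PremiumV2" -PerSiteScaling $true

# Scale one app to two instances, independent of the plan's instance count
$app = Get-AzWebApp -ResourceGroupName "myResourceGroup" -Name "myApp"
$app.SiteConfig.NumberOfWorkers = 2
$app | Set-AzWebApp
```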
app-service Provision Resource Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/provision-resource-bicep.md
Bicep is a domain-specific language (DSL) that uses declarative syntax to deploy
## Prerequisites To effectively create resources with Bicep, you'll need to set up a Bicep [development environment](../azure-resource-manager/bicep/install.md). The Bicep extension for [Visual Studio Code](https://code.visualstudio.com/) provides language support and resource autocompletion. The extension helps you create and validate Bicep files and is recommended for developers who will create resources using Bicep after completing this quickstart.
app-service Quickstart Multi Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-multi-container.md
![Sample multi-container app on Web App for Containers][1] [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
cd multicontainerwordpress
## Create a resource group In the Cloud Shell, create a resource group with the [`az group create`](/cli/azure/group#az-group-create) command. The following example creates a resource group named *myResourceGroup* in the *South Central US* location. To see all supported locations for App Service on Linux in **Standard** tier, run the [`az appservice list-locations --sku S1 --linux-workers-enabled`](/cli/azure/appservice#az-appservice-list-locations) command.
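For reference, that command looks like this:

```azurecli-interactive
az group create --name myResourceGroup --location "South Central US"
```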
app-service Reference App Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/reference-app-settings.md
For more information on custom containers, see [Run a custom container in Azure]
| Setting name| Description | Example | |-|-|-|
-| `WEBSITES_ENABLE_APP_SERVICE_STORAGE` | Set to `true` to enable the `/home` directory to be shared across scaled instances. The default is `false` for custom containers. ||
+| `WEBSITES_ENABLE_APP_SERVICE_STORAGE` | For Linux custom containers: set to `true` to enable the `/home` directory to be shared across scaled instances. The default is `false` for Linux custom containers.<br/><br/>For Windows containers: set to `true` to enable the `c:\home` directory to be shared across scaled instances. The default is `true` for Windows containers.||
| `WEBSITES_CONTAINER_START_TIME_LIMIT` | Amount of time in seconds to wait for the container to complete start-up before restarting the container. Default is `230`. You can increase it up to the maximum of `1800`. || | `WEBSITES_CONTAINER_STOP_TIME_LIMIT` | Amount of time in seconds to wait for the container to terminate gracefully. Default is `5`. You can increase to a maximum of `120` || | `DOCKER_REGISTRY_SERVER_URL` | URL of the registry server, when running a custom container in App Service. For security, this variable isn't passed on to the container. | `https://<server-name>.azurecr.io` |
app-service Cli Continuous Deployment Vsts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/cli-continuous-deployment-vsts.md
This sample script creates an app in App Service with its related resources, and
* An Azure DevOps repository with application code that you have administrative permissions for. * A [Personal Access Token (PAT)](/azure/devops/organizations/accounts/use-personal-access-tokens-to-authenticate) for your Azure DevOps organization. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)] ## Sample script ### To create the web app
az webapp deployment source config --name $webapp --resource-group $resourceGrou
## Clean up resources ```azurecli az group delete --name $resourceGroup
app-service Cli Linux Acr Aspnetcore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/cli-linux-acr-aspnetcore.md
This sample script creates a resource group, a Linux App Service plan, and an app. It then deploys an ASP.NET Core application using a Docker Container from the Azure Container Registry. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
This sample script creates a resource group, a Linux App Service plan, and an ap
## Clean up resources ```azurecli az group delete --name $resourceGroup
app-service Powershell Backup Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/powershell-backup-delete.md
To run this script, you need an existing backup for a web app. To create one, se
## Sample script [!code-azurepowershell-interactive[main](../../../powershell_scripts/app-service/backup-delete/backup-delete.ps1?highlight=1-2,11 "Delete a backup for a web app")]
app-service Powershell Backup Onetime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/powershell-backup-onetime.md
If needed, install the Azure PowerShell using the instruction found in the [Azur
## Sample script [!code-azurepowershell-interactive[main](../../../powershell_scripts/app-service/backup-onetime/backup-onetime.ps1?highlight=1-5 "Back up a web app")]
app-service Powershell Backup Restore Diff Sub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/powershell-backup-restore-diff-sub.md
If needed, install the Azure PowerShell using the instruction found in the [Azur
## Sample script [!code-azurepowershell-interactive[main](../../../powershell_scripts/app-service/backup-restore-diff-sub/backup-restore-diff-sub.ps1?highlight=1-6 "Restore a web app from a backup in another subscription")]
app-service Powershell Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/powershell-backup-restore.md
If needed, install the Azure PowerShell using the instruction found in the [Azur
## Sample script [!code-azurepowershell-interactive[main](../../../powershell_scripts/app-service/backup-restore/backup-restore.ps1?highlight=1-2 "Restore a web app from a backup")]
app-service Powershell Backup Scheduled https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/powershell-backup-scheduled.md
If needed, install the Azure PowerShell using the instruction found in the [Azur
## Sample script [!code-azurepowershell-interactive[main](../../../powershell_scripts/app-service/backup-scheduled/backup-scheduled.ps1?highlight=1-4 "Create a scheduled backup for a web app")]
app-service Powershell Configure Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/powershell-configure-custom-domain.md
If needed, install the Azure PowerShell using the instruction found in the [Azur
## Sample script [!code-azurepowershell-interactive[main](../../../powershell_scripts/app-service/map-custom-domain/map-custom-domain.ps1?highlight=1 "Assign a custom domain to a web app")]
app-service Powershell Configure Ssl Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/powershell-configure-ssl-certificate.md
If needed, install the Azure PowerShell using the instruction found in the [Azur
## Sample script [!code-azurepowershell-interactive[main](../../../powershell_scripts/app-service/configure-ssl-certificate/configure-ssl-certificate.ps1?highlight=1-3 "Bind a custom TLS/SSL certificate to a web app")]
app-service Powershell Connect To Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/powershell-connect-to-sql.md
If needed, install the Azure PowerShell using the instruction found in the [Azur
## Sample script [!code-azurepowershell-interactive[main](../../../powershell_scripts/app-service/connect-to-sql/connect-to-sql.ps1?highlight=13 "Connect an app to SQL Database")]
app-service Powershell Connect To Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/powershell-connect-to-storage.md
If needed, install the Azure PowerShell using the instruction found in the [Azur
## Sample script [!code-azurepowershell-interactive[main](../../../powershell_scripts/app-service/connect-to-storage/connect-to-storage.ps1 "Connect an app to a storage account")]
app-service Powershell Continuous Deployment Github https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/powershell-continuous-deployment-github.md
If needed, install the Azure PowerShell using the instruction found in the [Azur
## Sample script [!code-azurepowershell-interactive[main](../../../powershell_scripts/app-service/deploy-github-continuous/deploy-github-continuous.ps1?highlight=1-2 "Create a web app with continuous deployment from GitHub")]
app-service Powershell Deploy Ftp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/powershell-deploy-ftp.md
If needed, install the Azure PowerShell using the instruction found in the [Azur
## Sample script [!code-azurepowershell-interactive[main](../../../powershell_scripts/app-service/deploy-ftp/deploy-ftp.ps1?highlight=1 "Upload files to a web app using FTP")]
app-service Powershell Deploy Github https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/powershell-deploy-github.md
If needed, install the Azure PowerShell using the instruction found in the [Azur
## Sample script [!code-azurepowershell-interactive[main](../../../powershell_scripts/app-service/deploy-github/deploy-github.ps1?highlight=1-2 "Create a web app and deploy code from GitHub")]
app-service Powershell Deploy Local Git https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/powershell-deploy-local-git.md
If needed, update to the latest Azure PowerShell using the instruction found in
## Sample script [!code-azurepowershell-interactive[main](../../../powershell_scripts/app-service/deploy-local-git/deploy-local-git.ps1?highlight=1 "Create a web app and deploy code from a local Git repository")]
app-service Powershell Deploy Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/powershell-deploy-private-endpoint.md
This sample script creates an app in App Service with its related resources, and then deploys a Private Endpoint. ## Sample script
app-service Powershell Deploy Staging Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/powershell-deploy-staging-environment.md
If needed, install the Azure PowerShell using the instruction found in the [Azur
## Sample script [!code-azurepowershell-interactive[main](../../../powershell_scripts/app-service/deploy-deployment-slot/deploy-deployment-slot.ps1?highlight=1 "Create a web app and deploy code to a staging environment")]
app-service Powershell Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/powershell-monitor.md
If needed, install the Azure PowerShell using the instruction found in the [Azur
## Sample script [!code-azurepowershell-interactive[main](../../../powershell_scripts/app-service/monitor-with-logs/monitor-with-logs.ps1 "Monitor a web app with web server logs")]
app-service Powershell Scale High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/powershell-scale-high-availability.md
If needed, install the Azure PowerShell using the instruction found in the [Azur
## Sample script [!code-azurepowershell-interactive[main](../../../powershell_scripts/app-service/scale-geographic/scale-geographic.ps1 "Scale a web app worldwide with a high-availability architecture")]
app-service Powershell Scale Manual https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/powershell-scale-manual.md
If needed, install the Azure PowerShell using the instruction found in the [Azur
## Sample script [!code-azurepowershell-interactive[main](../../../powershell_scripts/app-service/scale-manual/scale-manual.ps1 "Scale a web app manually")]
app-service Template Deploy Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/template-deploy-private-endpoint.md
In this quickstart, you use an Azure Resource Manager (ARM) template to create a web app and expose it with a private endpoint. ## Prerequisite
app-service Troubleshoot Domain Ssl Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/troubleshoot-domain-ssl-certificates.md
When you set up a domain or TLS/SSL certificate for your web apps in Azure App S
At any point in this article, you can get more help by contacting Azure experts on the [Microsoft Q & A and Stack Overflow forums](https://azure.microsoft.com/support/forums/). Alternatively, to file an Azure support incident, go to the [Azure Support site](https://azure.microsoft.com/support/options/), and select **Get Support**. ## Certificate problems
app-service Tutorial Auth Aad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-auth-aad.md
Before your source code is executed on the frontend, the App Service injects the
## Prerequisites - [Node.js (LTS)](https://nodejs.org/download/) [!INCLUDE [azure-cli-prepare-your-environment-no-header.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment-no-header.md)]
app-service Tutorial Connect App Access Sql Database As User Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-app-access-sql-database-as-user-dotnet.md
What you will learn:
> [!NOTE] >Microsoft Entra authentication is _different_ from [Integrated Windows authentication](/previous-versions/windows/it-pro/windows-server-2003/cc758557(v=ws.10)) in on-premises Active Directory (AD DS). AD DS and Microsoft Entra ID use completely different authentication protocols. For more information, see [Microsoft Entra Domain Services documentation](../active-directory-domain-services/index.yml). ## Prerequisites
If you haven't already, follow one of the two tutorials first. Alternatively, yo
Prepare your environment for the Azure CLI. <a name='1-configure-database-server-with-azure-ad-authentication'></a>
app-service Tutorial Connect Msi Azure Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-msi-azure-database.md
What you will learn:
> * Connect to the Azure database from your code (.NET Framework 4.8, .NET 6, Node.js, Python, Java) using a managed identity. > * Connect to the Azure database from your development environment using the Microsoft Entra user. ## Prerequisites
app-service Tutorial Connect Msi Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-msi-sql-database.md
What you will learn:
> [!NOTE] >Microsoft Entra authentication is _different_ from [Integrated Windows authentication](/previous-versions/windows/it-pro/windows-server-2003/cc758557(v=ws.10)) in on-premises Active Directory (AD DS). AD DS and Microsoft Entra ID use completely different authentication protocols. For more information, see [Microsoft Entra Domain Services documentation](../active-directory-domain-services/index.yml). ## Prerequisites
app-service Tutorial Custom Container Sidecar https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-custom-container-sidecar.md
For more information about sidecars, see [Sidecar pattern](/azure/architecture/p
> [!NOTE] > For the preview period, sidecar support must be enabled at app creation. There's currently no way to enable sidecar support for an existing app. ## 1. Set up the needed resources
app-service Tutorial Java Spring Cosmosdb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-java-spring-cosmosdb.md
In this tutorial, you learn how to:
> * Stream diagnostic logs from App Service > * Add additional instances to scale out the sample app ## Prerequisites
app-service Tutorial Java Tomcat Connect Managed Identity Postgresql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-java-tomcat-connect-managed-identity-postgresql-database.md
> * Configure a Tomcat web application to use Microsoft Entra authentication with PostgreSQL Database. > * Connect to PostgreSQL Database with Managed Identity using Service Connector. ## Prerequisites
app-service Tutorial Multi Container App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-multi-container-app.md
In this tutorial, you learn how to:
> * Connect to Azure Database for MySQL > * Troubleshoot errors ## Prerequisites
cd multicontainerwordpress
## Create a resource group In Cloud Shell, create a resource group with the [`az group create`](/cli/azure/group#az-group-create) command. The following example creates a resource group named *myResourceGroup* in the *South Central US* location. To see all supported locations for App Service on Linux in **Standard** tier, run the [`az appservice list-locations --sku S1 --linux-workers-enabled`](/cli/azure/appservice#az-appservice-list-locations) command.
app-service Tutorial Multi Region App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-multi-region-app.md
What you'll learn:
## Prerequisites To complete this tutorial:
app-service Tutorial Php Mysql App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-php-mysql-app.md
This tutorial shows how to create a secure PHP app in Azure App Service that's c
:::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-browse-app-2.png" alt-text="Screenshot of the Azure app example titled Task List showing new tasks added."::: ## Sample application
app-service Tutorial Secure Ntier App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-secure-ntier-app.md
What you'll learn:
The tutorial uses two sample Node.js apps that are hosted on GitHub. If you don't already have a GitHub account, [create an account for free](https://github.com/). To complete this tutorial:
app-service Tutorial Troubleshoot Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-troubleshoot-monitor.md
In this tutorial, you learn how to:
You can follow the steps in this tutorial on macOS, Linux, or Windows. ## Prerequisites
application-gateway Application Gateway Configure Ssl Policy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-configure-ssl-policy-powershell.md
Learn how to configure TLS/SSL policy versions and cipher suites on Application Gateway. You can select from a list of predefined policies that contain different configurations of TLS policy versions and enabled cipher suites. You can also define a [custom TLS policy](#configure-a-custom-tls-policy) based on your requirements. > [!NOTE] > We recommend using TLS 1.2 as your minimum TLS protocol version for better security on your Application Gateway.
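A minimal sketch of applying a predefined policy with PowerShell, assuming hypothetical gateway and group names (run `Get-AzApplicationGatewaySslPredefinedPolicy` to see the current policy list):

```azurepowershell
# Apply a predefined TLS policy to an existing gateway (hypothetical names)
$gw = Get-AzApplicationGateway -Name "myAppGateway" -ResourceGroupName "myResourceGroup"
Set-AzApplicationGatewaySslPolicy -ApplicationGateway $gw -PolicyType Predefined -PolicyName "AppGwSslPolicy20170401S"
Set-AzApplicationGateway -ApplicationGateway $gw
```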
application-gateway Application Gateway Create Probe Ps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-create-probe-ps.md
In this article, you add a custom probe to an existing application gateway with PowerShell. Custom probes are useful for applications that have a specific health check page or for applications that do not provide a successful response on the default web application. [!INCLUDE [azure-ps-prerequisites-include.md](../../includes/azure-ps-prerequisites-include.md)]
application-gateway Application Gateway End To End Ssl Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-end-to-end-ssl-powershell.md
This scenario will:
## Before you begin To configure end-to-end TLS with an application gateway, a certificate is required for the gateway and certificates are required for the backend servers. The gateway certificate is used to derive a symmetric key as per the TLS protocol specification. The symmetric key is then used to encrypt and decrypt the traffic sent to the gateway. The gateway certificate needs to be in Personal Information Exchange (PFX) format. This file format allows you to export the private key that is required by the application gateway to perform the encryption and decryption of traffic.
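For test environments, a self-signed certificate exported as PFX is enough. A sketch with placeholder DNS name, path, and password (use a CA-issued certificate in production):

```azurepowershell
# Create a test-only self-signed certificate and export it in PFX format (placeholder values)
$cert = New-SelfSignedCertificate -DnsName "www.contoso.com" -CertStoreLocation "Cert:\CurrentUser\My"
$pwd  = ConvertTo-SecureString -String "P@ssw0rd1!" -Force -AsPlainText
Export-PfxCertificate -Cert $cert -FilePath ".\appgwcert.pfx" -Password $pwd
```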
application-gateway Application Gateway Ilb Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-ilb-arm.md
This article walks you through the steps to configure a Standard v1 Application
## Before you begin 1. Install the latest version of the Azure PowerShell module by following the [install instructions](/powershell/azure/install-azure-powershell). 2. Create a virtual network and a subnet for Application Gateway. Make sure that no virtual machines or cloud deployments are using the subnet. Application Gateway requires a subnet of its own within the virtual network.
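A minimal sketch of that setup, assuming hypothetical names and address ranges:

```azurepowershell
# Create a virtual network with a subnet dedicated to the application gateway (hypothetical values)
$subnet = New-AzVirtualNetworkSubnetConfig -Name "appgw-subnet" -AddressPrefix "10.0.0.0/24"
New-AzVirtualNetwork -Name "myVNet" -ResourceGroupName "myResourceGroup" `
    -Location "East US" -AddressPrefix "10.0.0.0/16" -Subnet $subnet
```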
application-gateway Application Gateway Troubleshooting 502 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-troubleshooting-502.md
Learn how to troubleshoot bad gateway (502) errors received when using Azure Application Gateway. ## Overview
application-gateway Certificates For Backend Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/certificates-for-backend-authentication.md
Previously updated : 12/27/2022 Last updated : 06/27/2024
From your TLS/SSL certificate, export the public key .cer file (not the private
1. To obtain a .cer file from the certificate, open **Manage user certificates**. Locate the certificate, typically in 'Certificates - Current User\Personal\Certificates', and right-click. Click **All Tasks**, and then click **Export**. This opens the **Certificate Export Wizard**. To open Certificate Manager in the current user scope from PowerShell, type *certmgr* in the console window.
-> [!NOTE]
-> If you can't find the certificate under Current User\Personal\Certificates, you may have accidentally opened "Certificates - Local Computer", rather than "Certificates - Current User").
+ > [!NOTE]
+ > If you can't find the certificate under Current User\Personal\Certificates, you may have accidentally opened "Certificates - Local Computer", rather than "Certificates - Current User").
![Screenshot shows the Certificate Manager with Certificates selected and a contextual menu with All tasks, then Export selected.](./media/certificates-for-backend-authentication/export.png)
-1. In the Wizard, click **Next**.
+2. In the Wizard, click **Next**.
![Export certificate](./media/certificates-for-backend-authentication/exportwizard.png)
-1. Select **No, do not export the private key**, and then click **Next**.
+3. Select **No, do not export the private key**, and then click **Next**.
![Do not export the private key](./media/certificates-for-backend-authentication/notprivatekey.png)
-1. On the **Export File Format** page, select **Base-64 encoded X.509 (.CER).**, and then click **Next**.
+4. On the **Export File Format** page, select **Base-64 encoded X.509 (.CER).**, and then click **Next**.
![Base-64 encoded](./media/certificates-for-backend-authentication/base64.png)
-1. For **File to Export**, **Browse** to the location to which you want to export the certificate. For **File name**, name the certificate file. Then, click **Next**.
+5. For **File to Export**, **Browse** to the location to which you want to export the certificate. For **File name**, name the certificate file. Then, click **Next**.
![Screenshot shows the Certificate Export Wizard where you specify a file to export.](./media/certificates-for-backend-authentication/browse.png)
-1. Click **Finish** to export the certificate.
+6. Click **Finish** to export the certificate.
![Screenshot shows the Certificate Export Wizard after you complete the file export.](./media/certificates-for-backend-authentication/finish-screen.png)
-1. Your certificate is successfully exported.
+7. Your certificate is successfully exported.
![Screenshot shows the Certificate Export Wizard with a success message.](./media/certificates-for-backend-authentication/success.png)
From your TLS/SSL certificate, export the public key .cer file (not the private
![Screenshot shows a certificate symbol.](./media/certificates-for-backend-authentication/exported.png)
-1. If you open the exported certificate using Notepad, you see something similar to this example. The section in blue contains the information that is uploaded to application gateway. If you open your certificate with Notepad and it doesn't look similar to this, typically this means you didn't export it using the Base-64 encoded X.509(.CER) format. Additionally, if you want to use a different text editor, understand that some editors can introduce unintended formatting in the background. This can create problems when uploaded the text from this certificate to Azure.
+8. If you open the exported certificate using Notepad, you see something similar to this example. The section in blue contains the information that is uploaded to application gateway. If you open your certificate with Notepad and it doesn't look similar to this, it typically means you didn't export it using the Base-64 encoded X.509 (.CER) format. Additionally, if you want to use a different text editor, understand that some editors can introduce unintended formatting in the background. This can create problems when you upload the text from this certificate to Azure.
![Open with Notepad](./media/certificates-for-backend-authentication/format.png)
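The wizard steps above can also be approximated from PowerShell. A minimal sketch, assuming the certificate lives in the current user's Personal store and its subject contains a known placeholder string (`certutil` performs the same Base-64 encoding the wizard applies in the export format step):

```powershell
# A minimal sketch: export the public key, then Base-64 encode it
# (the subject filter "*contoso*" is a placeholder)
$cert = Get-ChildItem Cert:\CurrentUser\My |
    Where-Object { $_.Subject -like "*contoso*" } | Select-Object -First 1

# Export-Certificate writes a DER-encoded .cer file
Export-Certificate -Cert $cert -FilePath ".\backend.cer"

# Convert DER to the Base-64 encoded X.509 format that application gateway expects
certutil -encode ".\backend.cer" ".\backend-base64.cer"
```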
application-gateway Classic To Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/classic-to-resource-manager.md
Previously updated : 02/10/2022 Last updated : 06/27/2024
For more information on how to set up an Application Gateway resource after VNet
The word "classic" in classic networking service refers to networking resources managed by Azure Service Manager (ASM), the old control plane of Azure responsible for creating, managing, and deleting VMs and performing other control plane operations.
+> [!NOTE]
+> To view all the classic resources in your subscription, open the **All Resources** blade and look for a **(Classic)** suffix after the resource name.
+ ### What is Azure Resource Manager? Azure Resource Manager is the latest control plane of Azure, responsible for creating, managing, and deleting VMs and performing other control plane operations.
application-gateway Configuration Frontend Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configuration-frontend-ip.md
Previously updated : 09/14/2023 Last updated : 06/27/2024
A frontend IP address is associated to a *listener*, which checks for incoming r
You can create private and public listeners with the same port number. However, be aware of any network security group (NSG) associated with the Application Gateway subnet. Depending on your NSG's configuration, you might need an allow-inbound rule with **Destination IP addresses** as your application gateway's public and private frontend IPs. When you use the same port, your application gateway changes the **Destination** of the inbound flow to the frontend IPs of your gateway.
+> [!NOTE]
+> Currently, the use of the same port number for public and private TCP/TLS protocol or IPv6 listeners is not supported.
+ **Inbound rule**: - **Source**: According to your requirement
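A sketch of such an allow-inbound rule with the Az PowerShell module (the frontend IP addresses and priority below are placeholders):

```powershell
# A minimal sketch: allow inbound TCP 443 to the gateway's public and private frontend IPs
$rule = New-AzNetworkSecurityRuleConfig -Name "allow-appgw-frontends" `
    -Access Allow -Direction Inbound -Priority 200 -Protocol Tcp `
    -SourceAddressPrefix Internet -SourcePortRange "*" `
    -DestinationAddressPrefix @("20.0.0.4", "10.0.0.10") -DestinationPortRange 443

# Add $rule to the gateway subnet's NSG and save it with Set-AzNetworkSecurityGroup
```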
application-gateway Configuration Listeners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configuration-listeners.md
# Application Gateway listener configuration A listener is a logical entity that checks for incoming connection requests by using the port, protocol, host, and IP address. When you configure the listener, you must enter values for these that match the corresponding values in the incoming request on the gateway.
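As an illustration, the same matching values appear as parameters when a listener is created with PowerShell; a minimal sketch, assuming `$frontendIp`, `$port443`, and `$sslCert` were created earlier with the corresponding `New-AzApplicationGateway*` cmdlets:

```powershell
# A minimal sketch: a multi-site HTTPS listener that matches on host name and port
$listener = New-AzApplicationGatewayHttpListener -Name "contoso-listener" `
    -Protocol Https -FrontendIPConfiguration $frontendIp -FrontendPort $port443 `
    -HostName "www.contoso.com" -SslCertificate $sslCert
```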
application-gateway Configure Application Gateway With Private Frontend Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configure-application-gateway-with-private-frontend-ip.md
Configuring the gateway using a frontend private IP address is useful for intern
This article guides you through the steps to configure a Standard v2 Application Gateway with an ILB using the Azure portal. ## Sign in to Azure
application-gateway Create Multiple Sites Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/create-multiple-sites-portal.md
In this tutorial, you learn how to:
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. ## Prerequisites
application-gateway Create Ssl Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/create-ssl-portal.md
In this tutorial, you learn how to:
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. ## Prerequisites
application-gateway Create Url Route Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/create-url-route-portal.md
In this article, you learn how to:
:::image type="content" source="./media/application-gateway-create-url-route-portal/scenario.png" alt-text="Diagram of application gateway URL routing example." lightbox="./media/application-gateway-create-url-route-portal/scenario.png"::: If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
application-gateway How To Troubleshoot Application Gateway Session Affinity Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/how-to-troubleshoot-application-gateway-session-affinity-issues.md
Learn how to diagnose and resolve session affinity issues with Azure Application Gateway. ## Overview
application-gateway Ipv6 Application Gateway Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ipv6-application-gateway-arm-template.md
# Deploy an Azure Application Gateway with an IPv6 frontend - ARM template If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template opens in the Azure portal.
application-gateway Migrate V1 V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/migrate-v1-v2.md
This article primarily helps with the configuration migration. Client traffic mi
* If a public IP address is provided, ensure that it's in a succeeded state. If it's not provided and AppGWResourceGroupName is provided, ensure that a public IP resource with the name AppGWV2Name-IP doesn't exist in a resource group with the name AppGWResourceGroupName in the V1 subscription. * Ensure that no other operation is planned on the V1 gateway or any associated resources during migration. > [!IMPORTANT] > Run the `Set-AzContext -Subscription <V1 application gateway SubscriptionId>` cmdlet every time before running the migration script. This is necessary to set the active Azure context to the correct subscription, because the migration script might clean up the existing resource group if it doesn't exist in the current subscription context. This isn't a mandatory step for version 1.0.11 and later of the migration script.
application-gateway Mutual Authentication Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/mutual-authentication-powershell.md
This article describes how to use the PowerShell to configure mutual authenticat
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. This article requires the Azure PowerShell module version 1.0.0 or later. Run `Get-Module -ListAvailable Az` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Login-AzAccount` to create a connection with Azure.
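Those prerequisite checks, which recur throughout these articles, amount to two commands:

```powershell
# Verify which Az module version is installed, then sign in
# (signing in is needed only when running PowerShell locally)
Get-Module -ListAvailable Az | Select-Object -Property Name, Version
Login-AzAccount
```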
application-gateway Overview V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/overview-v2.md
The following table displays a comparison between Basic and Standard_v2.
| Feature | Capabilities | Basic SKU (preview) | Standard SKU |
| :---: | :--- | :---: | :---: |
| Reliability | SLA | 99.9 | 99.95 |
-| Functionality - basic | HTTP/HTTP2/HTTPS<br>Websocket<br>Public/Private IP<br>Cookie Affinity<br>Path-based affinity<br>Wildcard<br>Multisite<br>KeyVault<br>AKS (via AGIC)<br>Zone<br>Header rewrite | &#x2713;<br>&#x2713;<br>&#x2713;<br>&#x2713;<br>&#x2713;<br>&#x2713;<br>&#x2713;<br>&#x2713;<br>&#x2713;<br>&#x2713;<br> | &#x2713;<br>&#x2713;<br>&#x2713;<br>&#x2713;<br>&#x2713;<br>&#x2713;<br>&#x2713;<br>&#x2713;<br>&#x2713;<br>&#x2713;<br>&#x2713; |
+| Functionality - basic | HTTP/HTTP2/HTTPS<br>Websocket<br>Public/Private IP<br>Cookie Affinity<br>Path-based affinity<br>Wildcard<br>Multisite<br>KeyVault<br>AKS (via AGIC)<br>Zone<br>Header rewrite | &#x2713;<br>&#x2713;<br>&#x2713;<br>&#x2713;<br>&#x2713;<br>&#x2713;<br>&#x2713;<br>&#x2713;<br>&#x2713;<br>&#x2713;<br>&#x2713; | &#x2713;<br>&#x2713;<br>&#x2713;<br>&#x2713;<br>&#x2713;<br>&#x2713;<br>&#x2713;<br>&#x2713;<br>&#x2713;<br>&#x2713;<br>&#x2713; |
| Functionality - advanced | URL rewrite<br>mTLS<br>Private Link<br>Private-only<sup>1</sup><br>TCP/TLS Proxy | | &#x2713;<br>&#x2713;<br>&#x2713;<br>&#x2713;<br>&#x2713; |
| Scale | Max. connections per second<br>Number of listeners<br>Number of backend pools<br>Number of backend servers per pool<br>Number of rules | 200<sup>1</sup><br>5<br>5<br>5<br>5 | 62500<sup>1</sup><br>100<br>100<br>1200<br>400 |
| Capacity Unit | Connections per second per compute unit<br>Throughput<br>Persistent new connections | 10<br>2.22 Mbps<br>2500 | 50<br>2.22 Mbps<br>2500 |
application-gateway Quick Create Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/quick-create-bicep.md
In this quickstart, you use Bicep to create an Azure Application Gateway. Then you test the application gateway to make sure it works correctly. The Standard v2 SKU is used in this example. :::image type="content" source="./media/quick-create-portal/application-gateway-qs-resources.png" alt-text="Conceptual diagram of the quickstart setup." lightbox="./media/quick-create-portal/application-gateway-qs-resources.png":::
application-gateway Quick Create Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/quick-create-cli.md
The application gateway directs application web traffic to specific resources in
You can also complete this quickstart using [Azure PowerShell](quick-create-powershell.md) or the [Azure portal](quick-create-portal.md). [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
application-gateway Quick Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/quick-create-powershell.md
You can also complete this quickstart using [Azure CLI](quick-create-cli.md) or
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - [Azure PowerShell version 1.0.0 or later](/powershell/azure/install-azure-powershell) (if you run Azure PowerShell locally). ## Connect to Azure
application-gateway Quick Create Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/quick-create-template.md
In this quickstart, you use an Azure Resource Manager template (ARM template) to
:::image type="content" source="./media/quick-create-portal/application-gateway-qs-resources.png" alt-text="Conceptual diagram of the quickstart setup." lightbox="./media/quick-create-portal/application-gateway-qs-resources.png"::: You can also complete this quickstart using the [Azure portal](quick-create-portal.md), [Azure PowerShell](quick-create-powershell.md), or [Azure CLI](quick-create-cli.md).
application-gateway Redirect External Site Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/redirect-external-site-cli.md
In this article, you learn how to:
* Create a listener and redirection rule * Create an application gateway [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
application-gateway Redirect External Site Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/redirect-external-site-powershell.md
In this article, you learn how to:
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. If you choose to install and use the PowerShell locally, this tutorial requires the Azure PowerShell module version 1.0.0 or later. To find the version, run `Get-Module -ListAvailable Az` . If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Login-AzAccount` to create a connection with Azure.
application-gateway Redirect Http To Https Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/redirect-http-to-https-cli.md
In this article, you learn how to:
* Add a listener and redirection rule * Create a Virtual Machine Scale Set with the default backend pool [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
application-gateway Redirect Http To Https Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/redirect-http-to-https-portal.md
In this article, you learn how to:
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. This tutorial requires the Azure PowerShell module version 1.0.0 or later to create a certificate and install IIS. Run `Get-Module -ListAvailable Az` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). To run the commands in this tutorial, you also need to run `Login-AzAccount` to create a connection with Azure.
application-gateway Redirect Http To Https Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/redirect-http-to-https-powershell.md
In this article, you learn how to:
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. This tutorial requires the Azure PowerShell module version 1.0.0 or later. Run `Get-Module -ListAvailable Az` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). To run the commands in this tutorial, you also need to run `Login-AzAccount` to create a connection with Azure.
application-gateway Redirect Internal Site Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/redirect-internal-site-cli.md
In this article, you learn how to:
* Create a virtual machine scale set with the backend pool * Create a CNAME record in your domain [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
application-gateway Redirect Internal Site Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/redirect-internal-site-powershell.md
In this article, you learn how to:
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. If you choose to install and use the PowerShell locally, this article requires the Azure PowerShell module version 1.0.0 or later. To find the version, run `Get-Module -ListAvailable Az` . If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Login-AzAccount` to create a connection with Azure.
application-gateway Renew Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/renew-certificates.md
Upload your new PFX certificate, give it a name, type the password, and then sel
### Azure PowerShell To renew your certificate using Azure PowerShell, use the following script:
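A minimal sketch of the renewal flow (not the article's full script; the gateway, certificate, and file names here are hypothetical):

```powershell
# A minimal sketch: swap the renewed PFX into an existing SSL certificate entry
$gw = Get-AzApplicationGateway -Name "AppGw01" -ResourceGroupName "rg-appgw"

$pfxPassword = ConvertTo-SecureString -String "P@ssw0rd!" -Force -AsPlainText
Set-AzApplicationGatewaySslCertificate -ApplicationGateway $gw -Name "MySslCert" `
    -CertificateFile ".\renewed-cert.pfx" -Password $pfxPassword

# Commit the change to the gateway
Set-AzApplicationGateway -ApplicationGateway $gw
```

The `-Name` value must match the existing SSL certificate entry on the gateway.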
application-gateway Create Vmss Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/scripts/create-vmss-cli.md
This script creates an application gateway that uses a virtual machine scale set
[!INCLUDE [sample-cli-install](../../../includes/sample-cli-install.md)] ## Sample script
application-gateway Create Vmss Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/scripts/create-vmss-powershell.md
This script creates an application gateway that uses a virtual machine scale set
[!INCLUDE [sample-powershell-install](../../../includes/sample-powershell-install-no-ssh-az.md)] ## Sample script
application-gateway Waf Custom Rules Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/scripts/waf-custom-rules-powershell.md
If you choose to install and use Azure PowerShell locally, this script requires
1. To find the version, run `Get-Module -ListAvailable Az`. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). 2. To create a connection with Azure, run `Connect-AzAccount`. ## Sample script
application-gateway Tutorial Autoscale Ps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-autoscale-ps.md
If you don't have an Azure subscription, create a [free account](https://azure.m
## Prerequisites This tutorial requires that you run an administrative Azure PowerShell session locally. You must have Azure PowerShell module version 1.0.0 or later installed. Run `Get-Module -ListAvailable Az` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). After you verify the PowerShell version, run `Connect-AzAccount` to create a connection with Azure.
application-gateway Tutorial Ingress Controller Add On Existing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-ingress-controller-add-on-existing.md
In this tutorial, you learn how to:
> * Deploy a sample application using AGIC for ingress on the AKS cluster > * Check that the application is reachable through application gateway [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
application-gateway Tutorial Ingress Controller Add On New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-ingress-controller-add-on-new.md
In this tutorial, you learn how to:
> * Deploy a sample application by using AGIC for ingress on the AKS cluster. > * Check that the application is reachable through application gateway. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
application-gateway Tutorial Manage Web Traffic Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-manage-web-traffic-cli.md
In this article, you learn how to:
If you prefer, you can complete this procedure using [Azure PowerShell](tutorial-manage-web-traffic-powershell.md). [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
application-gateway Tutorial Manage Web Traffic Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-manage-web-traffic-powershell.md
If you prefer, you can complete this procedure using [Azure CLI](tutorial-manage
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 1.0.0 or later. To find the version, run `Get-Module -ListAvailable Az`. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Login-AzAccount` to create a connection with Azure.
application-gateway Tutorial Multiple Sites Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-multiple-sites-cli.md
In this article, you learn how to:
If you prefer, you can complete this procedure using [Azure PowerShell](tutorial-multiple-sites-powershell.md). [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
application-gateway Tutorial Multiple Sites Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-multiple-sites-powershell.md
In this article, you learn how to:
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. If you choose to install and use the PowerShell locally, this article requires the Azure PowerShell module version 1.0.0 or later. To find the version, run `Get-Module -ListAvailable Az` . If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Login-AzAccount` to create a connection with Azure.
application-gateway Tutorial Ssl Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-ssl-cli.md
In this article, you learn how to:
If you prefer, you can complete this procedure using [Azure PowerShell](tutorial-ssl-powershell.md). [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
application-gateway Tutorial Ssl Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-ssl-powershell.md
In this article, you learn how to:
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. This article requires the Azure PowerShell module version 1.0.0 or later. Run `Get-Module -ListAvailable Az` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Login-AzAccount` to create a connection with Azure.
application-gateway Tutorial Url Redirect Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-url-redirect-cli.md
The following example shows site traffic coming from both ports 8080 and 8081 an
If you prefer, you can complete this tutorial using [Azure PowerShell](tutorial-url-redirect-powershell.md). [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
application-gateway Tutorial Url Redirect Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-url-redirect-powershell.md
If you prefer, you can complete this procedure using [Azure CLI](tutorial-url-re
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. If you choose to install and use the PowerShell locally, this procedure requires the Azure PowerShell module version 1.0.0 or later. To find the version, run `Get-Module -ListAvailable Az` . If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
application-gateway Tutorial Url Route Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-url-route-cli.md
In this article, you learn how to:
If you prefer, you can complete this procedure using [Azure PowerShell](tutorial-url-route-powershell.md) or the [Azure portal](create-url-route-portal.md). [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
application-gateway Tutorial Url Route Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-url-route-powershell.md
If you prefer, you can complete this procedure using [Azure CLI](tutorial-url-ro
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. If you choose to install and use the PowerShell locally, this article requires the Azure PowerShell module version 1.0.0 or later. To find the version, run `Get-Module -ListAvailable Az` . If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Login-AzAccount` to create a connection with Azure.
attestation Quickstart Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/quickstart-Bicep.md
Last updated 03/08/2022
[Microsoft Azure Attestation](overview.md) is a solution for attesting Trusted Execution Environments (TEEs). This quickstart focuses on the process of deploying a Bicep file to create a Microsoft Azure Attestation policy. ## Prerequisites
attestation Quickstart Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/quickstart-template.md
Last updated 01/30/2024
[Microsoft Azure Attestation](overview.md) is a solution for attesting Trusted Execution Environments (TEEs). This quickstart focuses on the process of deploying an Azure Resource Manager template (ARM template) to create a Microsoft Azure Attestation policy. If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
automation Automation Alert Metric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-alert-metric.md
Title: Monitor Azure Automation runbooks with metric alerts
description: This article describes how to set up a metric alert based on runbook completion status. Last updated 08/10/2020 # Monitor runbooks with metric alerts
automation Automation Dsc Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-dsc-diagnostics.md
Azure Monitor Logs provides greater operational visibility to your Automation St
- Correlate compliance status across Automation accounts. - Use custom views and search queries to visualize your runbook results, runbook job status, and other related key indicators or metrics. ## Prerequisites
automation Automation Runbook Execution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-runbook-execution.md
Title: Runbook execution in Azure Automation
description: This article provides an overview of the processing of runbooks in Azure Automation. Previously updated : 12/28/2022 Last updated : 06/27/2024
The following diagram shows the lifecycle of a runbook job for [PowerShell runbo
![Job Statuses - PowerShell Workflow](./media/automation-runbook-execution/job-statuses.png) ## Runbook execution environment
The following table describes the statuses that are possible for a job. You can
| Stopping |The system is stopping the job. | | Suspended |Applies to [graphical and PowerShell Workflow runbooks](automation-runbook-types.md) only. The job was suspended by the user, by the system, or by a command in the runbook. If a runbook doesn't have a checkpoint, it starts from the beginning. If it has a checkpoint, it can start again and resume from its last checkpoint. The system only suspends the runbook when an exception occurs. By default, the `ErrorActionPreference` variable is set to Continue, indicating that the job keeps running on an error. If the preference variable is set to Stop, the job suspends on an error. | | Suspending |Applies to [graphical and PowerShell Workflow runbooks](automation-runbook-types.md) only. The system is trying to suspend the job at the request of the user. The runbook must reach its next checkpoint before it can be suspended. If it has already passed its last checkpoint, it completes before it can be suspended. |
+| New | The job has been submitted recently but is not yet activated.|
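The `ErrorActionPreference` behavior described above is controlled from within the runbook's own code; a minimal sketch:

```powershell
# A minimal sketch: override the default preference (Continue) so the job stops
# on an error; in graphical and PowerShell Workflow runbooks this surfaces as
# the Suspended status described in the table above
$ErrorActionPreference = "Stop"

Write-Output "Before the failing call"
Get-Item -Path "C:\does-not-exist"   # this error now halts the job
```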
## Activity logging
automation Automation Tutorial Installed Software https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-tutorial-installed-software.md
First you need to enable Change tracking and Inventory for this tutorial. If you
2. Choose the [Log Analytics](../azure-monitor/logs/log-query-overview.md) workspace. This workspace collects data that is generated by features such as Change Tracking and Inventory. The workspace provides a single location to review and analyze data from multiple sources. 3. Select the Automation account to use.
automation Manage Inventory Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/change-tracking/manage-inventory-vms.md
The following sections provide information about each property that can be confi
Inventory allows you to create and view machine groups in Azure Monitor logs. Machine groups are collections of machines defined by a query in Azure Monitor logs. To view your machine groups, select the **Machine groups** tab on the Inventory page.
automation Automation Region Dns Records https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/how-to/automation-region-dns-records.md
Title: Azure Datacenter DNS records used by Azure Automation | Microsoft Docs
description: This article provides the DNS records required by Azure Automation features when restricting communication to a specific Azure region hosting that Automation account. Previously updated : 06/29/2021 Last updated : 06/28/2024
To support [Private Link](../../private-link/private-link-overview.md) in Azure
| China East 2 |`https://<accountId>.webhook.sha2.azure-automation.cn`<br>`https://<accountId>.agentsvc.sha2.azure-automation.cn`<br>`https://<accountId>.jrds.sha2.azure-automation.cn` | | China North |`https://<accountId>.webhook.bjb.azure-automation.cn`<br>`https://<accountId>.agentsvc.bjb.azure-automation.cn`<br>`https://<accountId>.jrds.bjb.azure-automation.cn` | | China North 2 |`https://<accountId>.webhook.bjs2.azure-automation.cn`<br>`https://<accountId>.agentsvc.bjs2.azure-automation.cn`<br>`https://<accountId>.jrds.bjs2.azure-automation.cn` |
+| China North 3 | `https://<accountId>.webhook.cnn3.azure-automation.cn`<br>`https://<accountId>.agentsvc.cnn3.azure-automation.cn`<br>`https://<accountId>.jrds.cnn3.azure-automation.cn` |
| West Europe |`https://<accountId>.webhook.we.azure-automation.net`<br>`https://<accountId>.agentsvc.we.azure-automation.net`<br>`https://<accountId>.jrds.we.azure-automation.net` | | North Europe |`https://<accountId>.webhook.ne.azure-automation.net`<br>`https://<accountId>.agentsvc.ne.azure-automation.net`<br>`https://<accountId>.jrds.ne.azure-automation.net` | | France Central |`https://<accountId>.webhook.fc.azure-automation.net`<br>`https://<accountId>.agentsvc.fc.azure-automation.net`<br>`https://<accountId>.jrds.fc.azure-automation.net` |
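To confirm that a machine can reach the region-specific endpoints, a simple connectivity probe helps; a sketch, with the account ID and region label as placeholders:

```powershell
# A minimal sketch: test HTTPS reachability of one region-specific endpoint;
# replace <accountId> with your Automation account ID and adjust the region label
Test-NetConnection -ComputerName "<accountId>.jrds.we.azure-automation.net" -Port 443
```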
automation Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/overview.md
These Azure services can work with Automation job and runbook resources using an
* [Azure Event Grid](../event-grid/handler-webhooks.md) * [Azure Power Automate](/connectors/azureautomation) ## Pricing for Azure Automation
automation Quickstart Create Automation Account Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/quickstart-create-automation-account-template.md
Azure Automation delivers a cloud-based automation and configuration service that supports consistent management across your Azure and non-Azure environments. This article shows you how to deploy an Azure Resource Manager template (ARM template) that creates an Automation account. Using an ARM template takes fewer steps compared to other deployment methods. The JSON template specifies default values for parameters that would likely be used as a standard configuration in your environment. You can store the template in an Azure storage account for shared access in your organization. For more information about working with templates, see [Deploy resources with ARM templates and the Azure CLI](../azure-resource-manager/templates/deploy-cli.md). The sample template performs the following steps:
automation Dsc Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/quickstarts/dsc-configuration.md
> [!CAUTION] > This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> [!NOTE]
+> Before you enable Azure Automation DSC, be aware that a newer version of DSC is now generally available, managed by a feature of Azure Policy named [Azure Machine Configuration](../../governance/machine-configuration/overview.md). The Azure Machine Configuration service combines features of the DSC extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Azure Machine Configuration also includes hybrid machine support through [Arc-enabled servers](../../azure-arc/servers/overview.md).
+ By enabling Azure Automation State Configuration, you can manage and monitor the configurations of your Windows and Linux servers using Desired State Configuration (DSC). Configurations that drift from a desired configuration can be identified or auto-corrected. This quickstart steps through enabling an Azure Linux VM and deploying a LAMP stack using Azure Automation State Configuration. ## Prerequisites
automation Credentials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/shared-resources/credentials.md
An Automation credential asset holds an object that contains security credential
>[!NOTE] >Secure assets in Azure Automation include credentials, certificates, connections, and encrypted variables. These assets are encrypted and stored in Azure Automation using a unique key that is generated for each Automation account. Azure Automation stores the key in the system-managed Key Vault. Before storing a secure asset, Automation loads the key from Key Vault and then uses it to encrypt the asset. ## PowerShell cmdlets used to access credentials
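Inside a runbook, a stored credential asset is retrieved with the internal `Get-AutomationPSCredential` cmdlet; a minimal sketch (the asset name is a placeholder):

```powershell
# A minimal sketch: fetch a credential asset inside a runbook
$cred = Get-AutomationPSCredential -Name "MyCredentialAsset"

# The result is a standard PSCredential object
Write-Output "Retrieved credential for user: $($cred.UserName)"
```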
automation Onboarding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/troubleshoot/onboarding.md
After you remove the feature resources, you can unlink your workspace. It's impo
## <a name="mma-extension-failures"></a>Log Analytics for Windows extension failures An installation of the Log Analytics agent for Windows extension can fail for a variety of reasons. The following section describes feature deployment issues that can cause failures during deployment of the Log Analytics agent for Windows extension.
azure-app-configuration Enable Dynamic Configuration Dotnet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/enable-dynamic-configuration-dotnet-core.md
In this tutorial, you learn how to:
## Prerequisites Finish the quickstart [Create a .NET app with App Configuration](./quickstart-dotnet-core-app.md).
azure-app-configuration Enable Dynamic Configuration Java Spring Push Refresh https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/enable-dynamic-configuration-java-spring-push-refresh.md
In this tutorial, you learn how to:
- [Apache Maven](https://maven.apache.org/download.cgi) version 3.0 or above. - An existing Azure App Configuration store. ## Set up push refresh
azure-app-configuration Howto App Configuration Event https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-app-configuration-event.md
In this article, you learn how to set up Azure App Configuration event subscript
- Azure subscription - [create one for free](https://azure.microsoft.com/free/). You can optionally use the Azure Cloud Shell. If you choose to install and use the CLI locally, this article requires that you're running the latest version of Azure CLI (2.0.70 or later). To find the version, run `az --version`. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
azure-app-configuration Howto Integrate Azure Managed Service Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-integrate-azure-managed-service-identity.md
To complete this tutorial, you must have:
:::zone-end ## Add a managed identity
azure-app-configuration Howto Leverage Json Content Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-leverage-json-content-type.md
In this tutorial, you'll learn how to:
> * Export JSON key-values to a JSON file. > * Consume JSON key-values in your applications. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
azure-app-configuration Quickstart Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-bicep.md
This quickstart describes how you can use Bicep to:
- Create key-values in an App Configuration store. - Read key-values in an App Configuration store. ## Prerequisites
azure-app-configuration Quickstart Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-resource-manager.md
This quickstart describes how to:
> [!TIP] > Feature flags and Key Vault references are special types of key-values. Check out the [Next steps](#next-steps) for examples of creating them using the ARM template. If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
azure-app-configuration Cli Create Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/scripts/cli-create-service.md
This sample script creates a new instance of Azure App Configuration using the Azure CLI in a new resource group. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
azure-app-configuration Cli Delete Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/scripts/cli-delete-service.md
This sample script deletes an instance of Azure App Configuration using the Azure CLI. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
azure-app-configuration Cli Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/scripts/cli-export.md
This sample script exports key-values from an Azure App Configuration store. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
azure-app-configuration Cli Import https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/scripts/cli-import.md
This sample script imports key-value settings to an Azure App Configuration store. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
azure-app-configuration Cli Work With Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/scripts/cli-work-with-keys.md
This sample script shows how to:
* Update the value of a newly created key * Delete the new key-value pair [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
azure-app-configuration Powershell Create Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/scripts/powershell-create-service.md
This sample script creates a new instance of Azure App Configuration in a new resource group using PowerShell. To execute the sample scripts, you need a functional setup of [Azure PowerShell](/powershell/azure/).
azure-app-configuration Powershell Delete Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/scripts/powershell-delete-service.md
This sample script deletes an instance of Azure App Configuration using PowerShell. To execute this sample script, you need a functional setup of [Azure PowerShell](/powershell/azure/).
azure-app-configuration Use Key Vault References Dotnet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/use-key-vault-references-dotnet-core.md
In this tutorial, you learn how to:
Before you start this tutorial, install the [.NET SDK 6.0 or later](https://dotnet.microsoft.com/download). ## Create a vault
azure-arc Extensions Release https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/extensions-release.md
For more information, see [Tutorial: Deploy applications using GitOps with Flux
The most recent version of the Flux v2 extension and the two previous versions (N-2) are supported. We generally recommend that you use the most recent version of the extension. > [!IMPORTANT]
-> Eventually, a major version update (v2.x.x) for the `microsoft.flux` extension will be released. When this happens, clusters won't be auto-upgraded to this version, since [auto-upgrade is only supported for minor version releases](extensions.md#upgrade-extension-instance). If you're still using an older API version when the next major version is released, you'll need to update your manifests to the latest API versions, perform any necessary testing, then upgrade your extension manually. For more information about the new API versions (breaking changes) and how to update your manifests, see the [Flux v2 release notes](https://github.com/fluxcd/flux2/releases/tag/v2.0.0).
+> The [Flux v2.3.0 release](https://fluxcd.io/blog/2024/05/flux-v2.3.0/) includes API changes to the HelmRelease and HelmChart APIs, with deprecated fields removed. An upcoming minor version update of Microsoft's Flux extension will include these changes, consistent with the upstream OSS Flux project.
+>
+> The [HelmRelease](https://fluxcd.io/flux/components/helm/helmreleases/) kind will be promoted from `v2beta1` to `v2` (GA). The `v2` API is backwards compatible with `v2beta1`, with the exception of these deprecated fields, which will be removed:
+>
+> - `.spec.chart.spec.valuesFile`: replaced by `.spec.chart.spec.valuesFiles`
+> - `.spec.postRenderers.kustomize.patchesJson6902`: replaced by `.spec.postRenderers.kustomize.patches`
+> - `.spec.postRenderers.kustomize.patchesStrategicMerge`: replaced by `.spec.postRenderers.kustomize.patches`
+> - `.status.lastAppliedRevision`: replaced by `.status.history.chartVersion`
+>
+> The [HelmChart](https://fluxcd.io/flux/components/source/helmcharts/) kind will be promoted from `v1beta2` to `v1` (GA). The `v1` API is backwards compatible with `v1beta2`, with the exception of the `.spec.valuesFile` field, which will be replaced by `.spec.valuesFiles`.
+>
+> To avoid issues due to breaking changes, we recommend updating your deployments by July 22, 2024, so that they stop using the fields that will be removed and use the replacement fields instead. These new fields are already available in the current version of the APIs.
> [!NOTE] > When a new version of the `microsoft.flux` extension is released, it may take several days for the new version to become available in all regions.
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/overview.md
Once your Kubernetes clusters are connected to Azure, at scale you can:
- [Open Service Mesh](tutorial-arc-enabled-open-service-mesh.md) * Deploy and manage Kubernetes applications targeted for Azure Arc-Enabled Kubernetes clusters from Azure Marketplace.
- [!INCLUDE [azure-lighthouse-supported-service](../../../includes/azure-lighthouse-supported-service.md)]
+ [!INCLUDE [azure-lighthouse-supported-service](~/reusable-content/ce-skilling/azure/includes/azure-lighthouse-supported-service.md)]
## Next steps
azure-arc Tutorial Gitops Ci Cd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-gitops-ci-cd.md
In this tutorial, you'll set up a CI/CD solution using GitOps with Azure Arc-ena
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. ## Before you begin
azure-arc Tutorial Gitops Flux2 Ci Cd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-gitops-flux2-ci-cd.md
In this tutorial, you'll set up a CI/CD solution using GitOps with Flux v2 and A
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. ## Prerequisites
azure-arc Arc Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/arc-gateway.md
+
+ Title: How to simplify network configuration requirements through Azure Arc gateway (Limited preview)
+description: Learn how to simplify network configuration requirements through Azure Arc gateway (Limited preview).
Last updated : 06/26/2024
+# Simplify network configuration requirements through Azure Arc gateway (Limited preview)
+
+> [!NOTE]
+> **This is a Limited Public Preview, so customer subscriptions must be allowed by Microsoft to use the feature. To participate, complete the [Azure Arc gateway Limited Public Preview Sign-up form](https://forms.office.com/r/bfTkU2i0Qw).**
+>
+
+If you use enterprise firewalls or proxies to manage outbound traffic, the Azure Arc gateway lets you onboard infrastructure to Azure Arc using only seven (7) endpoints. With Azure Arc gateway, you can:
+
+- Connect to Azure Arc by opening public network access to only seven fully qualified domain names (FQDNs).
+- View and audit all traffic an Azure Connected Machine agent sends to Azure via the Arc gateway.
+
+This article explains how to set up and use an Arc gateway Resource.
+
+> [!IMPORTANT]
+> The Arc gateway feature for [Azure Arc-enabled servers](overview.md) is currently in Limited preview in all regions where Azure Arc-enabled servers is present. See the Supplemental Terms of Use for Microsoft Azure Limited previews for legal terms that apply to Azure features that are in beta, limited preview, or otherwise not yet released into general availability.
+>
+
+## Supported scenarios
+
+Azure Arc gateway supports the following scenarios:
+
+- Azure Monitor (Azure Monitor Agent + Dependency Agent) <sup>1</sup>
+- Microsoft Defender for Cloud <sup>2</sup>
+- Windows Admin Center
+- SSH
+- Microsoft Sentinel
+- Azure Update Management
+- Azure Extension for SQL Server
+
+<sup>1</sup> Traffic to Log Analytics workspaces isn't covered by Arc gateway, so the FQDNs for your Log Analytics workspaces must still be allowed in your firewalls or enterprise proxies.
+
+<sup>2</sup> To send Microsoft Defender traffic via Arc gateway, you must configure the extension's proxy settings.
+
+## How it works
+
+Azure Arc gateway consists of two main components:
+
+**The Arc gateway resource:** An Azure resource that serves as a common front-end for Azure traffic. This gateway resource is served on a specific domain. Once the Arc gateway resource is created, the domain is returned to you in the success response.
+
+**The Arc Proxy:** A new component added to the Arc agentry. This component runs as a service called "Azure Arc Proxy" and acts as a forward proxy used by the Azure Arc agents and extensions. No configuration is required on your part for the Arc Proxy; it's part of the core Arc agentry and runs within the context of an Arc-enabled resource.
+
+When the gateway is in place, traffic flows via the following hops: **Arc agentry → Arc Proxy → Enterprise proxy → Arc gateway → Target service**
++
+## Restrictions and limitations
+
+The Arc gateway object has limits you should consider when planning your setup. These limitations apply only to the Limited public preview and might not apply once the Arc gateway feature is generally available.
+
+- TLS Terminating Proxies aren't supported.
+- ExpressRoute/Site-to-Site VPN used with the Arc gateway (Limited preview) isn't supported.
+- The Arc gateway (Limited preview) is only supported for Azure Arc-enabled servers.
+- There's a limit of five Arc gateway (Limited preview) resources per Azure subscription.
+
+## How to use the Arc gateway (Limited preview)
+
+After completing the [Azure Arc gateway Limited Public Preview Sign-up form](https://forms.office.com/r/bfTkU2i0Qw), your subscription will be allowed to use the feature within 1 business day. You'll receive an email when the Arc gateway (Limited preview) feature has been allowed on the subscription you submitted.
+
+There are six main steps to use the feature:
+
+1. Download the az connectedmachine.whl file and use it to install the az connectedmachine extension.
+1. Create an Arc gateway resource.
+1. Ensure the required URLs are allowed in your environment.
+1. Associate new or existing Azure Arc resources with your Arc gateway resource.
+1. Verify that the setup succeeded.
+1. Ensure other scenarios use the Arc gateway (Linux only).
+
+### Step 1: Download the az connectedmachine.whl file
+
+1. Select the link to [download the az connectedmachine.whl file](https://aka.ms/ArcGatewayWhl).
+
+ This file contains the az connectedmachine commands required to create and manage your gateway resource.
+
+1. Install the [Azure CLI](/cli/azure/install-azure-cli) (if you haven't already).
+
+1. Execute the following command to add the connectedmachine extension:
+
+ `az extension add --allow-preview true --source [whl file path]`
+
+### Step 2: Create an Arc gateway resource
+
+On a machine with access to Azure, run the following commands to create your Arc gateway resource:
+
+```azurecli
+az login --use-device-code
+az account set --subscription [subscription name or id]
+az connectedmachine gateway create --name [Your gateway's Name] --resource-group [Your Resource Group] --location [Location] --gateway-type public --allowed-features * --subscription [subscription name or id]
+```
+The gateway creation process takes 9-10 minutes to complete.
+
+### Step 3: Ensure the required URLs are allowed in your environment
+
+When the resource is created, the success response includes the Arc gateway URL. Ensure your Arc gateway URL and all URLs in the following table are allowed in the environment where your Arc resources live:
+
+|URL |Purpose |
+|---|---|
+|[Your URL Prefix].gw.arc.azure.com |Your gateway URL (This URL can be obtained by running `az connectedmachine gateway list` after you create your gateway Resource) |
+|management.azure.com |Azure Resource Manager Endpoint, required for Azure Resource Manager control channel |
+|login.microsoftonline.com |Microsoft Entra ID's endpoint, for acquiring identity access tokens |
+|gbl.his.arc.azure.com |The cloud service endpoint for communicating with Azure Arc agents |
+|\<region\>.his.arc.azure.com |Used for Arc's core control channel |
+|packages.microsoft.com |Required to acquire the Linux-based Arc agentry payload, only needed to connect Linux servers to Arc |
+|download.microsoft.com |Used to download the Windows installation package |
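+
+The gateway URL and resource ID can also be pulled straight from the CLI; a minimal sketch, assuming the preview connectedmachine extension from Step 1 is installed (`--query` and `--output` are standard Azure CLI options):
+
+```powershell
+# A minimal sketch: capture the first gateway's full ARM resource ID for later
+# use with --gateway-id (az runs the same way from PowerShell or bash)
+$gatewayId = az connectedmachine gateway list --query "[0].id" --output tsv
+Write-Output "Arc gateway resource ID: $gatewayId"
+```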
+
+### Step 4: Associate new or existing Azure Arc resources with your gateway resource
+
+**To onboard a new server with Arc gateway**, generate an installation script, then edit the script to specify your gateway resource:
+
+1. Generate the installation script.
+ Follow the instructions at [Quickstart: Connect hybrid machines with Azure Arc-enabled servers](learn/quick-enable-hybrid-vm.md) to create a script that automates the downloading and installation of the Azure Connected Machine agent and establishes the connection with Azure Arc.
+
+1. Edit the installation script.
+ Your gateway resource must be specified in the installation script. To accomplish this, add the new `--gateway-id` parameter to the connect command.
+
+ **For Linux servers:**
+
+ 1. Obtain your gateway's Resource ID by running the `az connectedmachine gateway list` command. Note the "id" parameter in the output (that is, the full ARM resource ID).
+ 1. In the installation script, add the "id" found in the previous step as the following parameter: `--gateway-id "[Your-gateway's-Resource-ID]"`
+
+ Linux server onboarding script example:
+
+ This script template includes parameters for you to specify your enterprise proxy server.
+
+ ```
+ export subscriptionId="SubscriptionId";
+ export resourceGroup="ResourceGroup";
+ export tenantId="TenantID";
+ export location="Region";
+ export authType="AuthType";
+ export correlationId="CorrelationId"; # referenced by the log call and connect command below
+ export cloud="AzureCloud";
+ export gatewayID="gatewayResourceID";
+
+ # Download the installation package
+ output=$(wget https://aka.ms/azcmagent -e use_proxy=yes -e https_proxy="[Your Proxy URL]" -O /tmp/install_linux_azcmagent.sh 2>&1);
+ if [ $? != 0 ]; then wget -qO- -e use_proxy=yes -e https_proxy="[Your Proxy URL]" --method=PUT --body-data="{\"subscriptionId\":\"$subscriptionId\",\"resourceGroup\":\"$resourceGroup\",\"tenantId\":\"$tenantId\",\"location\":\"$location\",\"correlationId\":\"$correlationId\",\"authType\":\"$authType\",\"operation\":\"onboarding\",\"messageType\":\"DownloadScriptFailed\",\"message\":\"$output\"}" "https://gbl.his.arc.azure.com/log" &> /dev/null || true; fi;
+ echo "$output";
+
+ # Install the hybrid agent
+ bash /tmp/install_linux_azcmagent.sh --proxy "[Your Proxy URL]";
+
+ # Run connect command
+ sudo azcmagent connect --resource-group "$resourceGroup" --tenant-id "$tenantId" --location "$location" --subscription-id "$subscriptionId" --cloud "$cloud" --correlation-id "$correlationId" --gateway-id "$gatewayID";
+ ```
+
+ **For Windows servers:**
+
+ 1. Obtain your gateway's Resource ID by running the `az connectedmachine gateway list` command. This command outputs information about all the gateway resources in your subscription. Note the ID parameter in the output (that is, the full ARM resource ID).
+ 1. In the **try section** of the installation script, add the ID found in the previous step as the following parameter: `--gateway-id "[Your-gateway's-Resource-ID]"`
+ 1. In the **catch section** of the installation script, add the ID found in the previous step as the following parameter: `gateway-id="[Your-gateway's-Resource-ID]"`
+
+ Windows server onboarding script example:
+
+ This script template includes parameters for you to specify your enterprise proxy server.
+
+ ```powershell
+ $global:scriptPath = $myinvocation.mycommand.definition
+
+ function Restart-AsAdmin {
+     $pwshCommand = "powershell"
+     if ($PSVersionTable.PSVersion.Major -ge 6) {
+         $pwshCommand = "pwsh"
+     }
+
+     try {
+         Write-Host "This script requires administrator permissions to install the Azure Connected Machine Agent. Attempting to restart script with elevated permissions..."
+         $arguments = "-NoExit -Command `"& '$scriptPath'`""
+         Start-Process $pwshCommand -Verb runAs -ArgumentList $arguments
+         exit 0
+     } catch {
+         throw "Failed to elevate permissions. Please run this script as Administrator."
+     }
+ }
+
+ try {
+     if (-not ([Security.Principal.WindowsPrincipal] [Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole]::Administrator)) {
+         if ([System.Environment]::UserInteractive) {
+             Restart-AsAdmin
+         } else {
+             throw "This script requires administrator permissions to install the Azure Connected Machine Agent. Please run this script as Administrator."
+         }
+     }
+
+     $env:SUBSCRIPTION_ID = "SubscriptionId";
+     $env:RESOURCE_GROUP = "ResourceGroup";
+     $env:TENANT_ID = "TenantID";
+     $env:LOCATION = "Region";
+     $env:AUTH_TYPE = "AuthType";
+     $env:CLOUD = "AzureCloud";
+     $env:GATEWAY_ID = "gatewayResourceID";
+
+     [Net.ServicePointManager]::SecurityProtocol = [Net.ServicePointManager]::SecurityProtocol -bor 3072;
+
+     # Download the installation package
+     Invoke-WebRequest -UseBasicParsing -Uri "https://aka.ms/azcmagent-windows" -TimeoutSec 30 -OutFile "$env:TEMP\install_windows_azcmagent.ps1" -proxy "[Your Proxy URL]";
+
+     # Install the hybrid agent
+     & "$env:TEMP\install_windows_azcmagent.ps1" -proxy "[Your Proxy URL]";
+     if ($LASTEXITCODE -ne 0) { exit 1; }
+
+     # Run connect command
+     & "$env:ProgramW6432\AzureConnectedMachineAgent\azcmagent.exe" connect --resource-group "$env:RESOURCE_GROUP" --tenant-id "$env:TENANT_ID" --location "$env:LOCATION" --subscription-id "$env:SUBSCRIPTION_ID" --cloud "$env:CLOUD" --gateway-id "$env:GATEWAY_ID";
+ }
+ catch {
+     $logBody = @{subscriptionId="$env:SUBSCRIPTION_ID";resourceGroup="$env:RESOURCE_GROUP";tenantId="$env:TENANT_ID";location="$env:LOCATION";authType="$env:AUTH_TYPE";gatewayId="$env:GATEWAY_ID";operation="onboarding";messageType=$_.FullyQualifiedErrorId;message="$_";};
+     Invoke-WebRequest -UseBasicParsing -Uri "https://gbl.his.arc.azure.com/log" -Method "PUT" -Body ($logBody | ConvertTo-Json) -proxy "[Your Proxy URL]" | out-null;
+     Write-Host -ForegroundColor red $_.Exception;
+ }
+ ```
+
+1. Run the installation script to onboard your servers to Azure Arc.
+
+To configure an existing machine to use Arc gateway, follow these steps:
+
+> [!NOTE]
+> The existing machine must be running version 1.43 or higher of the Azure Connected Machine agent to use the Arc gateway Limited Public preview.
+
+1. Associate your existing machine with your Arc gateway resource:
+
+ ```azurecli
+ az connectedmachine setting update --resource-group [res-group] --subscription [subscription name] --base-provider Microsoft.HybridCompute --base-resource-type machines --base-resource-name [Arc-server's resource name] --settings-resource-name default --gateway-resource-id [full ARM resource ID]
+ ```
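+
+ For example, with hypothetical resource names filled in (replace every value with your own):
+
+ ```azurecli
+ az connectedmachine setting update --resource-group myResourceGroup --subscription "My Subscription" --base-provider Microsoft.HybridCompute --base-resource-type machines --base-resource-name myArcServer --settings-resource-name default --gateway-resource-id "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup/providers/Microsoft.HybridCompute/gateways/myArcGateway"
+ ```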
+
+1. Update the machine to use the Arc gateway resource.
+ Run the following command on the Arc-enabled server to set it to use Arc gateway:
+
+ ```bash
+ azcmagent config set connection.type gateway
+ ```
+
+1. Await reconciliation.
+
+ Once your machines have been updated to use the Arc gateway, some Azure Arc endpoints that were previously allowed in your enterprise proxy or firewalls won't be needed. However, there's a transition period, so allow **1 hour** before removing unneeded endpoints from your firewall/enterprise proxy.
+
+### Step 5: Verify that the setup succeeded
+
+On the onboarded server, run the following command: `azcmagent show`
+
+The result should indicate the following values:
+
+- **Agent Status** should show as **Connected**.
+- **Using HTTPS Proxy** should show as **http://localhost:40343**.
+- **Upstream Proxy** should show as your enterprise proxy (if you set one).
+
+Additionally, to verify a successful setup, you can run the following command: `azcmagent check`
+
+The result should indicate that `connection.type` is set to `gateway` and that the **Reachable** column indicates **true** for all URLs.
+
+### Step 6: Ensure additional scenarios use the Arc gateway (Linux only)
+
+On Linux, if you use Azure Monitor or Microsoft Defender for Endpoint, you need to run additional commands so that they work with the Azure Arc gateway (Limited preview).
+
+For **Azure Monitor**, provide explicit proxy settings when deploying the Azure Monitor Agent. From Azure Cloud Shell, run the following commands:
+
+```azurepowershell
+$settings = @{"proxy" = @{mode = "application"; address = "http://127.0.0.1:40343"; auth = $false}}
+
+New-AzConnectedMachineExtension -Name AzureMonitorLinuxAgent -ExtensionType AzureMonitorLinuxAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -MachineName <arc-server-name> -Location <arc-server-location> -Setting $settings
+```
+
+If you're deploying Azure Monitor through the Azure portal, be sure to select the **Use Proxy** setting and set the **Proxy Address** to `http://127.0.0.1:40343`.
+
+For **Microsoft Defender for Endpoint**, run the following command:
+
+`mdatp config proxy set --value http://127.0.0.1:40343`
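+
+To apply and then sanity-check the setting on the server, you might run something like the following (a sketch; the exact `mdatp health` output varies by product version):
+
+```bash
+# Point Defender for Endpoint at the local Arc gateway proxy
+sudo mdatp config proxy set --value http://127.0.0.1:40343
+# Review the reported configuration, including the proxy setting
+mdatp health
+```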
+
+## Cleanup instructions
+
+To clean up your gateway, detach the gateway resource from the applicable server(s); the resource can then be deleted safely:
+
+1. Set the connection type of the Azure Arc-enabled server to "direct" instead of "gateway":
+
+ `azcmagent config set connection.type direct`
+
+1. Run the following command to delete the resource:
+
+ `az connectedmachine gateway delete --resource-group [resource group name] --gateway-name [gateway resource name]`
+
+ This operation can take a couple of minutes.
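+
+Putting the two steps together with hypothetical names (run the first command on each Arc-enabled server, the second from any machine with the Azure CLI):
+
+```bash
+# On each server: switch back to the direct connection type
+sudo azcmagent config set connection.type direct
+
+# Then delete the gateway resource
+az connectedmachine gateway delete --resource-group myResourceGroup --gateway-name myArcGateway
+```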
+
+## Troubleshooting
+
+You can audit your Arc gateway's traffic by viewing the gateway router's logs.
+
+To view gateway router logs on **Windows**:
+1. Run `azcmagent logs` in PowerShell.
+1. In the resulting .zip file, the logs are located in the `C:\ProgramData\Microsoft\ArcGatewayRouter` folder.
+
+To view gateway router logs on **Linux**:
+1. Run `sudo azcmagent logs`.
+1. In the resulting log file, the logs are located in the `/usr/local/arcrtr/logs/` folder.
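+
+For example, on Linux you can collect the logs and then list the router log directory in one pass (a sketch based on the steps above):
+
+```bash
+sudo azcmagent logs
+sudo ls -l /usr/local/arcrtr/logs/
+```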
+
+## Known issues
+
+It's not yet possible to use the Azure CLI to disassociate a gateway resource from an Arc-enabled server. To make an Arc-enabled server stop using an Arc gateway, use the `azcmagent config set connection.type direct` command. This command configures the Arc-enabled resource to use the direct route instead of the Arc gateway.
+
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/overview.md
To connect hybrid machines to Azure, you install the [Azure Connected Machine ag
You can install the Connected Machine agent manually, or on multiple machines at scale, using the [deployment method](deployment-options.md) that works best for your scenario. > [!NOTE] > For additional guidance regarding the different services Azure Arc offers, see [Choosing the right Azure Arc service for machines](../choose-service.md).
azure-arc Prepare Extended Security Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/prepare-extended-security-updates.md
Other Azure services through Azure Arc-enabled servers are available as well, wi
* [Microsoft Defender for Cloud](../../defender-for-cloud/defender-for-cloud-introduction.md) - As part of the cloud security posture management (CSPM) pillar, it provides server protections through [Microsoft Defender for Servers](../../defender-for-cloud/plan-defender-for-servers.md) to help protect you from various cyber threats and vulnerabilities. * [Microsoft Sentinel](scenario-onboard-azure-sentinel.md) - Collect security-related events and correlate them with other data sources.-
- >[!NOTE]
- >Activation of ESU is planned for the third quarter of 2023. Using Azure services such as Azure Update Manager and Azure Policy to support managing ESU-eligible Windows Server 2012/2012 R2 machines are also planned for the third quarter.
-
+
## Prepare delivery of ESUs Plan and prepare to onboard your machines to Azure Arc-enabled servers through the installation of the [Azure Connected Machine agent](agent-overview.md) (version 1.34 or higher) to establish a connection to Azure. Windows Server 2012 Extended Security Updates supports Windows Server 2012 and R2 Standard and Datacenter editions. Windows Server 2012 Storage is not supported.
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/overview.md
Title: Overview of the Azure Connected System Center Virtual Machine Manager description: This article provides a detailed overview of the Azure Arc-enabled System Center Virtual Machine Manager. Previously updated : 04/12/2024 Last updated : 06/27/2024 ms.
Arc-enabled System Center VMM allows you to:
- Empower developers and application teams to self-serve VM operations on demand using [Azure role-based access control (RBAC)](/azure/role-based-access-control/overview). - Browse your VMM resources (VMs, templates, VM networks, and storage) in Azure, providing you with a single pane view for your infrastructure across both environments. - Discover and onboard existing SCVMM managed VMs to Azure.-- Install the Arc-connected machine agents at scale on SCVMM VMs to [govern, protect, configure, and monitor them](../servers/overview.md#supported-cloud-operations).
+- Install the Azure connected machine agent at scale on SCVMM VMs to [govern, protect, configure, and monitor them](../servers/overview.md#supported-cloud-operations).
> [!NOTE] > For more information regarding the different services Azure Arc offers, see [Choosing the right Azure Arc service for machines](../choose-service.md).
The following image shows the architecture for the Arc-enabled SCVMM:
- Azure Arc-enabled servers interact on the guest operating system level, with no awareness of the underlying infrastructure fabric and the virtualization platform that they're running on. Since Arc-enabled servers also support bare-metal machines, there might, in fact, not even be a host hypervisor in some cases. - Azure Arc-enabled SCVMM is a superset of Arc-enabled servers that extends management capabilities beyond the guest operating system to the VM itself. This provides lifecycle management and CRUD (Create, Read, Update, and Delete) operations on an SCVMM VM. These lifecycle management capabilities are exposed in the Azure portal and look and feel just like a regular Azure VM. Azure Arc-enabled SCVMM also provides guest operating system management, in fact, it uses the same components as Azure Arc-enabled servers.
-You have the flexibility to start with either option, or incorporate the other one later without any disruption. With both options, you'll enjoy the same consistent experience.
+You have the flexibility to start with either option, and incorporate the other one later without any disruption. With both options, you'll enjoy the same consistent experience.
### Supported scenarios
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/overview.md
Title: What is Azure Arc-enabled VMware vSphere? description: Azure Arc-enabled VMware vSphere extends Azure governance and management capabilities to VMware vSphere infrastructure and delivers a consistent management experience across both platforms. Previously updated : 04/12/2024 Last updated : 06/27/2024
Arc-enabled VMware vSphere allows you to:
- Empower developers and application teams to self-serve VM operations on-demand using [Azure role-based access control](../../role-based-access-control/overview.md) (RBAC). -- Install the Arc-connected machine agent at scale on VMware VMs to [govern, protect, configure, and monitor](../servers/overview.md#supported-cloud-operations) them.
+- Install the Azure connected machine agent at scale on VMware VMs to [govern, protect, configure, and monitor](../servers/overview.md#supported-cloud-operations) them.
- Browse your VMware vSphere resources (VMs, templates, networks, and storage) in Azure, providing you with a single pane view for your infrastructure across both environments.
azure-cache-for-redis Cache Configure Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-configure-role-based-access-control.md
Managing access to your Azure Cache for Redis instance is critical to ensure tha
Azure Cache for Redis now integrates this ACL functionality with Microsoft Entra ID to allow you to configure your Data Access Policies for your application's service principal and managed identity.
-Azure Cache for Redis offers three built-in access policies: _Owner_, _Contributor_, and _Reader_. If the built-in access policies don't satisfy your data protection and isolation requirements, you can create and use your own custom data access policy as described in [Configure custom data access policy](#configure-a-custom-data-access-policy-for-your-application).
+Azure Cache for Redis offers three built-in access policies: _Data Owner_, _Data Contributor_, and _Data Reader_. If the built-in access policies don't satisfy your data protection and isolation requirements, you can create and use your own custom data access policy as described in [Configure custom data access policy](#configure-a-custom-data-access-policy-for-your-application).
## Scope of availability
azure-cache-for-redis Cache Event Grid Quickstart Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-event-grid-quickstart-cli.md
Azure Event Grid is an eventing service for the cloud. In this quickstart, you'l
Typically, you send events to an endpoint that processes the event data and takes actions. However, to simplify this quickstart, you'll send events to a web app that will collect and display the messages. When you complete the steps described in this quickstart, you'll see that the event data has been sent to the web app. If you choose to install and use the CLI locally, this quickstart requires that you're running the latest version of Azure CLI (2.0.70 or later). To find the version, run `az --version`. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
azure-cache-for-redis Cache Event Grid Quickstart Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-event-grid-quickstart-portal.md
Azure Event Grid is an eventing service for the cloud. In this quickstart, you'll use the Azure portal to create an Azure Cache for Redis instance, subscribe to events for that instance, trigger an event, and view the results. Typically, you send events to an endpoint that processes the event data and takes actions. However, to simplify this quickstart, you'll send events to a web app that will collect and display the messages. When you're finished, you'll see that the event data has been sent to the web app.
azure-cache-for-redis Cache Nodejs Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-nodejs-get-started.md
The latest builds of [node_redis](https://github.com/mranney/node_redis) provide
// Connection configuration const cacheConnection = redis.createClient({
- // rediss for TLS
- url: `rediss://${cacheHostName}:6380`,
+ // redis for TLS
+ url: `redis://${cacheHostName}:6380`,
password: cachePassword });
azure-cache-for-redis Cache Redis Cache Arm Provision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-redis-cache-arm-provision.md
Last updated 04/10/2024
Learn how to create an Azure Resource Manager template (ARM template) that deploys an Azure Cache for Redis. The cache can be used with an existing storage account to keep diagnostic data. You also learn how to define which resources are deployed and how to define parameters that are specified when the deployment is executed. You can use this template for your own deployments, or customize it to meet your requirements. Currently, diagnostic settings are shared for all caches in the same region for a subscription. Updating one cache in the region affects all other caches in the region. If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
azure-cache-for-redis Cache Redis Cache Bicep Provision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-redis-cache-bicep-provision.md
Last updated 04/10/2024
Learn how to use Bicep to deploy a cache using Azure Cache for Redis. After you deploy the cache, use it with an existing storage account to keep diagnostic data. Learn how to define which resources are deployed and how to define parameters that are specified when the deployment is executed. You can use this Bicep file for your own deployments, or customize it to meet your requirements. ## Prerequisites
azure-cache-for-redis Cache Web App Bicep With Redis Cache Provision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-web-app-bicep-with-redis-cache-provision.md
In this article, you use Bicep to deploy an Azure Web App that uses Azure Cache for Redis, as well as an App Service plan. You can use this Bicep file for your own deployments. The Bicep file provides unique names for the Azure Web App, the App Service plan, and the Azure Cache for Redis. If you'd like, you can customize the Bicep file after you save it to your local device to meet your requirements.
azure-cache-for-redis Cache Web App Cache Aside Leaderboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-web-app-cache-aside-leaderboard.md
In this tutorial, you learn how to:
> * Provision the Azure resources for the application using a Resource Manager template. > * Publish the application to Azure using Visual Studio. ## Prerequisites
azure-cache-for-redis Create Manage Cache https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/scripts/create-manage-cache.md
In this scenario, you learn how to create an Azure Cache for Redis. You then learn to get details of an Azure Cache for Redis instance, including provisioning status, the hostname, ports, and keys for an Azure Cache for Redis instance. Finally, you learn to delete the cache. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)] ## Sample script ### Run the script
az group delete --resource-group $resourceGroup -y
## Clean up resources ```azurecli az group delete --resource-group $resourceGroup
azure-cache-for-redis Create Manage Premium Cache Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/scripts/create-manage-premium-cache-cluster.md
In this scenario, you learn how to create a 6 GB Premium tier Azure Cache for Redis with clustering enabled and two shards. You then learn to get details of an Azure Cache for Redis instance, including provisioning status, the hostname, ports, and keys for an Azure Cache for Redis instance. Finally, you learn to delete the cache. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)] ## Sample script ### Run the script
In this scenario, you learn how to create a 6 GB Premium tier Azure Cache for Re
## Clean up resources ```azurecli az group delete --name $resourceGroup
azure-compute-fleet Quickstart Create Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-compute-fleet/quickstart-create-rest-api.md
This article steps through using an ARM template to create an Azure Compute Fleet. ## Prerequisites
For more information on assigning roles, see [assign Azure roles using the Azu
## ARM template ARM templates let you deploy groups of related resources. In a single template, you can create the Virtual Machine Scale Set, install applications, and configure autoscale rules. With the use of variables and parameters, this template can be reused to update existing, or create extra scale sets. You can deploy templates through the Azure portal, Azure CLI, or Azure PowerShell, or from continuous integration / continuous delivery (CI/CD) pipelines.
azure-functions Create Resources Azure Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-resources-azure-powershell.md
This article contains the following examples:
[!INCLUDE [azure-powershell-requirements](../../includes/azure-powershell-requirements.md)] ## Create a serverless function app for C#
azure-functions Durable Functions Create First Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-create-first-csharp.md
To complete this tutorial:
* Make sure that you have version 3.1 or a later version of the [.NET Core SDK](https://dotnet.microsoft.com/download) installed. ## <a name="create-an-azure-functions-project"></a>Create your local project
To complete this tutorial:
* Verify that you have the [Azurite Emulator](../../storage/common//storage-use-azurite.md) installed and running. ## Create a function app project
azure-functions Durable Functions Isolated Create First Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-isolated-create-first-csharp.md
To complete this tutorial:
* Make sure that you have version 3.1 or a later version of the [.NET Core SDK](https://dotnet.microsoft.com/download) installed. ## <a name="create-an-azure-functions-project"></a>Create your local project
To complete this tutorial:
* Verify that you have the [Azurite Emulator](../../storage/common/storage-use-azurite.md) installed and running. ## Create a function app project
azure-functions Quickstart Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/quickstart-java.md
To complete this tutorial, you need:
- An Azure Storage account, which requires that you have an Azure subscription. ::: zone pivot="create-option-manual-setup"
azure-functions Quickstart Js Vscode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/quickstart-js-vscode.md
To complete this tutorial:
* Make sure that you have version 18.x+ of [Node.js](https://nodejs.org/) installed. ::: zone-end ## <a name="create-an-azure-functions-project"></a>Create your local project
azure-functions Quickstart Powershell Vscode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/quickstart-powershell-vscode.md
To complete this tutorial:
* Durable Functions require an Azure storage account. You need an Azure subscription. ## <a name="create-an-azure-functions-project"></a>Create your local project
azure-functions Quickstart Python Vscode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/quickstart-python-vscode.md
To complete this tutorial:
* Make sure that you have version 3.7, 3.8, 3.9, or 3.10 of [Python](https://www.python.org/) installed. ## <a name="create-an-azure-functions-project"></a>Create your local project
azure-functions Quickstart Ts Vscode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/quickstart-ts-vscode.md
To complete this tutorial:
::: zone-end * Make sure that you have [TypeScript](https://www.typescriptlang.org/) v4.x+ installed. ## <a name="create-an-azure-functions-project"></a>Create your local project
azure-functions Functions App Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-app-settings.md
The instrumentation key for Application Insights. Don't use both `APPINSIGHTS_IN
Don't use both `APPINSIGHTS_INSTRUMENTATIONKEY` and `APPLICATIONINSIGHTS_CONNECTION_STRING`. Use of `APPLICATIONINSIGHTS_CONNECTION_STRING` is recommended. ## APPLICATIONINSIGHTS_CONNECTION_STRING
azure-functions Functions Bindings Azure Sql Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql-input.md
description: Learn to use the Azure SQL input binding in Azure Functions.
Previously updated : 6/20/2024 Last updated : 6/26/2024 zone_pivot_groups: programming-languages-set-functions
The following example shows a SQL input binding in a function.json file and a Py
# [v2](#tab/python-v2)
+The following is sample python code for the function_app.py file:
+ ```python import json import logging
The following example shows a SQL input binding in a Python function that is [tr
# [v2](#tab/python-v2)
+The following is sample python code for the function_app.py file:
+ ```python import json import logging
The stored procedure `dbo.DeleteToDo` must be created on the database. In this
# [v2](#tab/python-v2)
+The following is sample python code for the function_app.py file:
+ ```python import json import logging
azure-functions Functions Bindings Azure Sql Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql-output.md
description: Learn to use the Azure SQL output binding in Azure Functions.
Previously updated : 6/20/2024 Last updated : 6/26/2024 zone_pivot_groups: programming-languages-set-functions
The following example shows a SQL output binding in a function.json file and a P
# [v2](#tab/python-v2)
+The following is sample python code for the function_app.py file:
+ ```python import json import logging
CREATE TABLE dbo.RequestLog (
# [v2](#tab/python-v2)
+The following is sample python code for the function_app.py file:
+ ```python from datetime import datetime import json
azure-functions Functions Bindings Azure Sql Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql-trigger.md
- devx-track-js - devx-track-python - ignite-2023 Previously updated : 6/24/2024 Last updated : 6/26/2024 zone_pivot_groups: programming-languages-set-functions-lang-workers
The following example shows a Python function that is invoked when there are cha
# [v2](#tab/python-v2)
+The following is sample python code for the function_app.py file:
+ ```python import json import logging
azure-functions Functions Bindings Register https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-register.md
Title: Register Azure Functions binding extensions description: Learn to register an Azure Functions binding extension based on your environment. Previously updated : 03/19/2022 Last updated : 06/26/2024 # Register Azure Functions binding extensions
The following table indicates when and how you register bindings.
## <a name="extension-bundles"></a>Extension bundles
-By default, extension bundles are used by Java, JavaScript, PowerShell, Python, C# script, and Custom Handler function apps to work with binding extensions. In cases where extension bundles can't be used, you can explicitly install binding extensions with your function app project. Extension bundles are supported for version 2.x and later version of the Functions runtime.
+By default, extension bundles provide binding support for functions in these languages:
+
++ Java
++ JavaScript
++ PowerShell
++ Python
++ C# script
++ Other (custom handlers)
+
+In rare cases where extension bundles can't be used, you can explicitly install binding extensions with your function app project. Extension bundles are supported for version 2.x and later version of the Functions runtime.
Extension bundles are a way to add a pre-defined set of compatible binding extensions to your function app. Extension bundles are versioned. Each version contains a specific set of binding extensions that are verified to work together. Select a bundle version based on the extensions that you need in your app.
The following table lists the currently available version ranges of the default
> [!NOTE]
-> Even though host.json supports custom ranges for `version`, you should use a version range value from this table, such as `[4.0.0, 5.0.0)`.
+> Even though host.json supports custom ranges for `version`, you should use a version range value from this table, such as `[4.0.0, 5.0.0)`. For a complete list of extension bundle releases and extension versions in each release, see the [extension bundles release page](https://github.com/Azure/azure-functions-extension-bundles/releases).
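
For reference, the bundle version range is set in the app's host.json file. A typical entry looks like this (a standard configuration; adjust the range to the bundle versions you need):

```json
{
  "version": "2.0",
  "extensionBundle": {
    "id": "Microsoft.Azure.Functions.ExtensionBundle",
    "version": "[4.0.0, 5.0.0)"
  }
}
```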
## Explicitly install extensions
-For compiled C# class library projects ([in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md)), you install the NuGet packages for the extensions that you need as you normally would. For examples see either the [Visual Studio Code developer guide](functions-develop-vs-code.md?tabs=csharp#install-binding-extensions) or the [Visual Studio developer guide](functions-develop-vs.md#add-bindings).
+For compiled C# class library projects ([in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md)), you install the NuGet packages for the extensions that you need as you normally would. For examples see either the [Visual Studio Code developer guide](functions-develop-vs-code.md?tabs=csharp#install-binding-extensions) or the [Visual Studio developer guide](functions-develop-vs.md#add-bindings). See the [extension bundles release page](https://github.com/Azure/azure-functions-extension-bundles/releases) to review combinations of extension versions that are verified compatible.
For non-.NET languages and C# script, when you can't use extension bundles you need to manually install required binding extensions in your local project. The easiest way is to use Azure Functions Core Tools. For more information, see [func extensions install](functions-core-tools-reference.md#func-extensions-install).
azure-functions Functions Create First Function Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-first-function-bicep.md
In this article, you use Azure Functions with Bicep to create a function app and
Completing this quickstart incurs a small cost of a few USD cents or less in your Azure account. After you create the function app, you can deploy Azure Functions project code to that app.
azure-functions Functions Create First Function Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-first-function-resource-manager.md
In this article, you use Azure Functions with an Azure Resource Manager template
Completing this quickstart incurs a small cost of a few USD cents or less in your Azure account. If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
azure-functions Functions Create First Java Gradle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-first-java-gradle.md
To develop functions using Java, you must have the following installed:
- [Azure Functions Core Tools](./functions-run-local.md#v2) version 2.6.666 or above - [Gradle](https://gradle.org/), version 6.8 and above
-You also need an active Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+You also need an active Azure subscription. [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
> [!IMPORTANT] > The JAVA_HOME environment variable must be set to the install location of the JDK to complete this quickstart.
azure-functions Functions Create First Quarkus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-first-quarkus.md
In this article, you'll develop, build, and deploy a serverless Java app to Azur
## Prerequisites * The [Azure CLI](/cli/azure/overview) installed on your own computer.
-* An [Azure account](https://azure.microsoft.com/). [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+* An [Azure account](https://azure.microsoft.com/). [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
* [Java JDK 17](/azure/developer/java/fundamentals/java-support-on-azure) with `JAVA_HOME` configured appropriately. This article was written with Java 17 in mind, but Azure Functions and Quarkus also support older versions of Java. * [Apache Maven 3.8.1+](https://maven.apache.org).
azure-functions Functions Create Function App Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-function-app-portal.md
Please review the [known issues](./recover-python-functions.md#development-issue
## Prerequisites ## Sign in to Azure
azure-functions Functions Create Maven Eclipse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-maven-eclipse.md
This article shows you how to create a [serverless](https://azure.microsoft.com/
<!-- TODO ![Access a Hello World function from the command line with cURL](media/functions-create-java-maven/hello-azure.png) --> ## Set up your development environment
azure-functions Functions Create Maven Kotlin Intellij https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-maven-kotlin-intellij.md
This article shows you how to create an HTTP-triggered Java function in an IntelliJ IDEA project, run and debug the project in the integrated development environment (IDE), and finally deploy the function project to a function app in Azure. ## Set up your development environment
azure-functions Functions Develop Vs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-develop-vs.md
Unless otherwise noted, procedures and examples shown are for Visual Studio 2022
- Visual Studio 2022, including the **Azure development** workload. - Other resources that you need, such as an Azure Storage account, are created in your subscription during the publishing process. -- [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+- [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
## Create an Azure Functions project
azure-functions Functions Event Hub Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-event-hub-cosmos-db.md
In this tutorial, you'll:
> * Create and test Java functions that interact with these resources. > * Deploy your functions to Azure and monitor them with Application Insights. ## Prerequisites
azure-functions Functions Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference.md
Title: Guidance for developing Azure Functions
description: Learn the Azure Functions concepts and techniques that you need to develop functions in Azure, across all programming languages and bindings. ms.assetid: d8efe41a-bef8-4167-ba97-f3e016fcd39e Previously updated : 09/06/2023 Last updated : 06/26/2024 zone_pivot_groups: programming-languages-set-functions
These tools integrate with [Azure Functions Core Tools](./functions-develop-loca
::: zone pivot="programming-language-javascript,programming-language-typescript" Portal editing is only supported for [Node.js version 3](functions-reference-node.md?pivots=nodejs-model-v3), which uses the function.json file. ::: zone-end
-Portal editing is only supported for [Python version 1](functions-reference-python.md?pivots=python-mode-configuration), which uses the function.json file.
## Deployment
azure-functions Functions Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-scale.md
Title: Azure Functions scale and hosting
description: Compare the various options you need to consider when choosing a hosting plan in which to run your function app in Azure Functions. ms.assetid: 5b63649c-ec7f-4564-b168-e0a74cb7e0f3 Previously updated : 05/10/2024 Last updated : 06/27/2024 # Azure Functions hosting options
This table shows operating system support for the hosting options.
| **[Dedicated plan]** | ✅ Code-only<br/>✅ Container | ✅ Code-only | | **[Container Apps]** | ✅ Container-only | ❌ Not supported |
-<sup>1</sup> Linux is the only supported operating system for the [Python runtime stack](./functions-reference-python.md).
-<sup>2</sup> Windows deployments are code-only. Functions doesn't currently support Windows containers.
+1. Linux is the only supported operating system for the [Python runtime stack](./functions-reference-python.md).
+2. Windows deployments are code-only. Functions doesn't currently support Windows containers.
[!INCLUDE [Timeout Duration section](../../includes/functions-timeout-duration.md)]
Maximum instances are given on a per-function app (Consumption) or per-plan (Pre
| **[Flex Consumption plan]** | [Per-function scaling](./flex-consumption-plan.md#per-function-scaling). Event-driven scaling decisions are calculated on a per-function basis, which provides a more deterministic way of scaling the functions in your app. With the exception of HTTP, Blob storage (Event Grid), and Durable Functions, all other function trigger types in your app scale on independent instances. All HTTP triggers in your app scale together as a group on the same instances, as do all Blob storage (Event Grid) triggers. All Durable Functions triggers also share instances and scale together. | Limited only by total memory usage of all instances across a given region. For more information, see [Instance memory](flex-consumption-plan.md#instance-memory). | | **[Premium plan]** | [Event driven](event-driven-scaling.md). Scale out automatically, even during periods of high load. Azure Functions infrastructure scales CPU and memory resources by adding more instances of the Functions host, based on the number of events that its functions are triggered on. | **Windows:** 100<br/>**Linux:** 20-100<sup>2</sup>| | **[Dedicated plan]**<sup>3</sup> | Manual/autoscale |10-30<br/>100 (ASE)|
+| **[Container Apps]** | [Event driven](event-driven-scaling.md). Scale out automatically, even during periods of high load. Azure Functions infrastructure scales CPU and memory resources by adding more instances of the Functions host, based on the number of events that its functions are triggered on. | 10-300<sup>4</sup> |
-
-<sup>1</sup> During scale-out, there's currently a limit of 500 instances per subscription per hour for Linux apps on a Consumption plan. <br/>
-<sup>2</sup> In some regions, Linux apps on a Premium plan can scale to 100 instances. For more information, see the [Premium plan article](functions-premium-plan.md#region-max-scale-out). <br/>
-<sup>3</sup> For specific limits for the various App Service plan options, see the [App Service plan limits](../azure-resource-manager/management/azure-subscription-service-limits.md#app-service-limits).
+1. During scale-out, there's currently a limit of 500 instances per subscription per hour for Linux apps on a Consumption plan. <br/>
+2. In some regions, Linux apps on a Premium plan can scale to 100 instances. For more information, see the [Premium plan article](functions-premium-plan.md#region-max-scale-out). <br/>
+3. For specific limits for the various App Service plan options, see the [App Service plan limits](../azure-resource-manager/management/azure-subscription-service-limits.md#app-service-limits).
+4. On Container Apps, you can set the [maximum number of replicas](../container-apps/scale-app.md#scale-definition), which is honored as long as there's enough cores quota available.
## Cold start behavior | Plan | Details | | -- | -- |
-| **[Consumption plan]** | Apps can scale to zero when idle, meaning some requests might have more latency at startup. The consumption plan does have some optimizations to help decrease cold start time, including pulling from prewarmed placeholder functions that already have the function host and language processes running. |
+| **[Consumption plan]** | Apps can scale to zero when idle, meaning some requests might have more latency at startup. The consumption plan does have some optimizations to help decrease cold start time, including pulling from prewarmed placeholder functions that already have the host and language processes running. |
| **[Flex Consumption plan]** | Supports [always ready instances](./flex-consumption-plan.md#always-ready-instances) to reduce the delay when provisioning new instances. | | **[Premium plan]** | Supports [always ready instances](./functions-premium-plan.md#always-ready-instances) to avoid cold starts by letting you maintain one or more _perpetually warm_ instances. | | **[Dedicated plan]** | When running in a Dedicated plan, the Functions host can run continuously on a prescribed number of instances, which means that cold start isn't really an issue. |
+| **[Container Apps]** | Depends on the [minimum number of replicas](../container-apps/scale-app.md#scale-definition):<br/> • When set to zero: apps can scale to zero when idle and some requests might have more latency at startup.<br/>• When set to one or more: the host process runs continuously, which means that cold start isn't an issue. |
## Service limits
Maximum instances are given on a per-function app (Consumption) or per-plan (Pre
| **[Flex Consumption plan]** | Billing is based on number of executions, the memory of instances when they're actively executing functions, plus the cost of any [always ready instances](./flex-consumption-plan.md#always-ready-instances). For more information, see [Flex Consumption plan billing](flex-consumption-plan.md#billing). | **[Premium plan]** | Premium plan is based on the number of core seconds and memory used across needed and prewarmed instances. At least one instance per plan must always be kept warm. This plan provides the most predictable pricing. | | **[Dedicated plan]** | You pay the same for function apps in an App Service Plan as you would for other App Service resources, like web apps.<br/><br/>For an ASE, there's a flat monthly rate that pays for the infrastructure and doesn't change with the size of the environment. There's also a cost per App Service plan vCPU. All apps hosted in an ASE are in the Isolated pricing SKU. For more information, see the [ASE overview article](../app-service/environment/overview.md#pricing). |
+| **[Container Apps]** | Billing in Azure Container Apps is based on your plan type. For more information, see [Billing in Azure Container Apps](../container-apps/billing.md).|
For a direct cost comparison between dynamic hosting plans (Consumption, Flex Consumption, and Premium), see the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/). For pricing of the various Dedicated plan options, see the [App Service pricing page](https://azure.microsoft.com/pricing/details/app-service). For pricing Container Apps hosting, see [Azure Container Apps pricing](https://azure.microsoft.com/pricing/details/container-apps/).
azure-functions Python Memory Profiler Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/python-memory-profiler-reference.md
Before you start developing a Python function app, you must meet these requireme
* An active Azure subscription. ## Memory profiling process
azure-functions Functions Cli Create App Service Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/scripts/functions-cli-create-app-service-plan.md
This Azure Functions sample script creates a function app, which is a container for your functions. The function app that is created uses a dedicated App Service plan, which means your server resources are always on. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)] ## Sample script ### Run the script
This Azure Functions sample script creates a function app, which is a container
## Clean up resources ```azurecli az group delete --name $resourceGroup
azure-functions Functions Cli Create Function App Connect To Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/scripts/functions-cli-create-function-app-connect-to-cosmos-db.md
This Azure Functions sample script creates a function app and connects the function to an Azure Cosmos DB database. It makes the connection using an Azure Cosmos DB endpoint and access key that it adds to app settings. The created app setting that contains the connection can be used with an [Azure Cosmos DB trigger or binding](../functions-bindings-cosmosdb.md). [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)] ## Sample script ### Run the script
This Azure Functions sample script creates a function app and connects the funct
## Clean up resources ```azurecli az group delete --name $resourceGroup
azure-functions Functions Cli Create Function App Connect To Storage Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/scripts/functions-cli-create-function-app-connect-to-storage-account.md
This Azure Functions sample script creates a function app and connects the function to an Azure Storage account. The created app setting that contains the storage connection string can be used with a [storage trigger or binding](../functions-bindings-storage-blob.md). [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)] ## Sample script ### Run the script
This Azure Functions sample script creates a function app and connects the funct
## Clean up resources ```azurecli az group delete --name $resourceGroup
azure-functions Functions Cli Create Function App Github Continuous https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/scripts/functions-cli-create-function-app-github-continuous.md
This Azure Functions sample script creates a function app using the [Consumption plan](../consumption-plan.md), along with its related resources. The script also configures your function code for continuous deployment from a public GitHub repository. There is also commented out code for using a private GitHub repository. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)] ## Sample script ### Run the script
This Azure Functions sample script creates a function app using the [Consumption
## Clean up resources ```azurecli az group delete --name $resourceGroup
azure-functions Functions Cli Create Premium Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/scripts/functions-cli-create-premium-plan.md
This Azure Functions sample script creates a function app, which is a container for your functions. The function app that is created uses a [scalable Premium plan](../functions-premium-plan.md). [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)] ## Sample script ### Run the script
This Azure Functions sample script creates a function app, which is a container
## Clean up resources ```azurecli az group delete --name $resourceGroup
azure-functions Functions Cli Create Serverless Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/scripts/functions-cli-create-serverless-python.md
This Azure Functions sample script creates a function app, which is a container
>[!NOTE] >The function app created runs on Python version 3.9. Python version 3.7 and 3.8 are also supported by Azure Functions. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)] ## Sample script ### Run the script
This Azure Functions sample script creates a function app, which is a container
## Clean up resources ```azurecli az group delete --name $resourceGroup
azure-functions Functions Cli Create Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/scripts/functions-cli-create-serverless.md
This Azure Functions sample script creates a function app, which is a container for your functions. The function app is created using the [Consumption plan](../consumption-plan.md), which is ideal for event-driven serverless workloads. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)] ## Sample script ### Run the script
This Azure Functions sample script creates a function app, which is a container
## Clean up resources ```azurecli az group delete --name $resourceGroup
azure-functions Functions Cli Mount Files Storage Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/scripts/functions-cli-mount-files-storage-linux.md
This Azure Functions sample script creates a function app using the [Consumption
>[!NOTE] >The function app created runs on Python version 3.9. Azure Functions also [supports Python versions 3.7 and 3.8](../functions-reference-python.md#python-version). [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)] ## Sample script ### Run the script
This Azure Functions sample script creates a function app using the [Consumption
## Clean up resources ```azurecli az group delete --name $resourceGroup
azure-government Azure Services In Fedramp Auditscope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compliance/azure-services-in-fedramp-auditscope.md
recommendations: false Previously updated : 02/05/2023 Last updated : 06/26/2024 # Azure, Dynamics 365, Microsoft 365, and Power Platform services compliance scope
For current Azure Government regions and available services, see [Products avail
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and Power Platform cloud services in scope for FedRAMP High, DoD IL2, DoD IL4, DoD IL5, and DoD IL6 authorizations across Azure, Azure Government, and Azure Government Secret cloud environments. For other authorization details in Azure Government Secret and Azure Government Top Secret, contact your Microsoft account representative. ## Azure public services by audit scope
-*Last updated: January 2024*
+*Last updated: June 2024*
### Terminology used
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Microsoft Entra Domain Services](../../active-directory-domain-services/index.yml) | &#x2705; | &#x2705; | | [Microsoft Entra provisioning service](../../active-directory/app-provisioning/how-provisioning-works.md)| &#x2705; | &#x2705; | | [Microsoft Entra multifactor authentication](../../active-directory/authentication/concept-mfa-howitworks.md) | &#x2705; | &#x2705; |
-| [Azure API for FHIR](../../healthcare-apis/azure-api-for-fhir/index.yml) | &#x2705; | &#x2705; |
+| [Azure Health Data Services](../../healthcare-apis/azure-api-for-fhir/index.yml) | &#x2705; | &#x2705; |
| **Service** | **FedRAMP High** | **DoD IL2** | | [Azure Arc-enabled servers](../../azure-arc/servers/index.yml) | &#x2705; | &#x2705; | | [Azure Arc-enabled Kubernetes](../../azure-arc/kubernetes/index.yml) | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Power BI](/power-bi/fundamentals/) | &#x2705; | &#x2705; | | [Power BI Embedded](/power-bi/developer/embedded/) | &#x2705; | &#x2705; | | [Power Data Integrator for Dataverse](/power-platform/admin/data-integrator) (formerly Dynamics 365 Integrator App) | &#x2705; | &#x2705; |
-| [Power Virtual Agents](/power-virtual-agents/) | &#x2705; | &#x2705; |
+| [Microsoft Copilot Studio](/power-virtual-agents/) | &#x2705; | &#x2705; |
| [Private Link](../../private-link/index.yml) | &#x2705; | &#x2705; | | [Public IP](../../virtual-network/ip-services/public-ip-addresses.md) | &#x2705; | &#x2705; | | [Resource Graph](../../governance/resource-graph/index.yml) | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
**&ast;&ast;** FedRAMP High authorization for Azure Databricks is applicable to limited regions in Azure. To configure Azure Databricks for FedRAMP High use, contact your Microsoft or Databricks representative. ## Azure Government services by audit scope
-*Last updated: November 2023*
+*Last updated: June 2024*
### Terminology used
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Azure Maps](../../azure-maps/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Azure Monitor](../../azure-monitor/index.yml) (incl. [Application Insights](../../azure-monitor/app/app-insights-overview.md) and [Log Analytics](../../azure-monitor/logs/data-platform-logs.md)) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Azure NetApp Files](../../azure-netapp-files/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Azure OpenAI](../../ai-services/openai/index.yml) | &#x2705; | &#x2705; | | | |
| [Azure Policy](../../governance/policy/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Azure Policy's guest configuration](../../governance/machine-configuration/overview.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Azure Red Hat OpenShift](../../openshift/index.yml) | &#x2705; | &#x2705; | &#x2705; | | |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Power BI](/power-bi/fundamentals/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Power BI Embedded](/power-bi/developer/embedded/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Power Data Integrator for Dataverse](/power-platform/admin/data-integrator) (formerly Dynamics 365 Integrator App) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
-| [Power Virtual Agents](/power-virtual-agents/) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Microsoft Copilot Studio](/power-virtual-agents/) | &#x2705; | &#x2705; | &#x2705; | | |
| [Private Link](../../private-link/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Public IP](../../virtual-network/ip-services/public-ip-addresses.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Resource Graph](../../governance/resource-graph/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
azure-government Connect With Azure Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/connect-with-azure-pipelines.md
This how-to guide helps you use Azure Pipelines to set up continuous integration
[Azure Pipelines](/azure/devops/pipelines/get-started/what-is-azure-pipelines) is used by development teams to configure continuous deployment for applications hosted in Azure subscriptions. We can use this service for applications running in Azure Government by defining [service connections](/azure/devops/pipelines/library/service-endpoints) for Azure Government. ## Prerequisites
azure-government Documentation Government Cognitiveservices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-cognitiveservices.md
This article provides developer guidance for using Computer Vision, Face API, Te
## Prerequisites - Install and Configure [Azure PowerShell](/powershell/azure/install-azure-powershell) - Connect [PowerShell with Azure Government](documentation-government-get-started-connect-with-ps.md)
azure-government Documentation Government Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-extension.md
Last updated 08/31/2021
Azure [virtual machine (VM) extensions](../virtual-machines/extensions/features-windows.md) are small applications that provide post-deployment configuration and automation tasks on Azure VMs. ## Virtual machine extensions
azure-government Documentation Government Get Started Connect With Ps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-get-started-connect-with-ps.md
Microsoft Azure Government delivers a dedicated cloud with world-class security
This quickstart shows how to use PowerShell to access and start managing resources in Azure Government. If you don't have an Azure Government subscription, create a [free account](https://azure.microsoft.com/global-infrastructure/government/request/) before you begin. ## Prerequisites
azure-government Documentation Government Image Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-image-gallery.md
Last updated 08/31/2021
Microsoft Azure Government Marketplace provides a similar experience as Azure Marketplace. You can choose to deploy prebuilt images from Microsoft and our partners, or upload your own VHDs. This approach gives you the flexibility to deploy your own standardized images if needed. ## Images
azure-government Documentation Government Manage Oms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-manage-oms.md
Setting up this kind of environment can be challenging. Onboarding your fleet of
## Azure Monitor logs Azure Monitor logs, now available in Azure Government, uses hyperscale log search to quickly analyze your data and expose threats in your environment. This article focuses on using Azure Monitor logs in that environment. Azure Monitor logs can:
azure-linux Quickstart Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-linux/quickstart-azure-cli.md
Get started with the Azure Linux Container Host by using the Azure CLI to deploy
## Prerequisites -- [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+- [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
- Use the Bash environment in [Azure Cloud Shell](/azure/cloud-shell/overview). For more information, see [Azure Cloud Shell Quickstart - Bash](/azure/cloud-shell/quickstart). :::image type="icon" source="~/reusable-content/ce-skilling/azure/media/cloud-shell/launch-cloud-shell-button.png" alt-text="Button to launch the Azure Cloud Shell." border="false" link="https://shell.azure.com":::
azure-linux Quickstart Azure Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-linux/quickstart-azure-powershell.md
Get started with the Azure Linux Container Host by using Azure PowerShell to dep
## Prerequisites -- [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+- [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
- Use the PowerShell environment in [Azure Cloud Shell](/azure/cloud-shell/overview). For more information, see [Azure Cloud Shell Quickstart](/azure/cloud-shell/quickstart). :::image type="icon" source="~/reusable-content/ce-skilling/azure/media/cloud-shell/launch-cloud-shell-button.png" alt-text="Button to launch the Azure Cloud Shell." border="false" link="https://shell.azure.com"::: - If you're running PowerShell locally, install the `Az PowerShell` module and connect to your Azure account using the [`Connect-AzAccount`](/powershell/module/az.accounts/Connect-AzAccount) cmdlet. For more information about installing the Az PowerShell module, see [Install Azure PowerShell][install-azure-powershell].
azure-linux Quickstart Azure Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-linux/quickstart-azure-resource-manager-template.md
Last updated 04/18/2023
Get started with the Azure Linux Container Host by using an Azure Resource Manager (ARM) template to deploy an Azure Linux Container Host cluster. After installing the prerequisites, you'll create an SSH key pair, review the template, deploy the template and validate it, and then deploy an application. ## Prerequisites -- [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+- [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
- Use the Bash environment in [Azure Cloud Shell](/azure/cloud-shell/overview). For more information, see [Azure Cloud Shell Quickstart - Bash](/azure/cloud-shell/quickstart). :::image type="icon" source="~/reusable-content/ce-skilling/azure/media/cloud-shell/launch-cloud-shell-button.png" alt-text="Button to launch the Azure Cloud Shell." border="false" link="https://shell.azure.com":::
azure-linux Quickstart Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-linux/quickstart-terraform.md
Get started with the Azure Linux Container Host using Terraform to deploy an Azu
## Prerequisites -- [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+- [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
- If you haven't already configured Terraform, you can do so using one of the following options: - [Azure Cloud Shell with Bash](/azure/developer/terraform/get-started-cloud-shell-bash?tabs=bash)
azure-linux Tutorial Azure Linux Create Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-linux/tutorial-azure-linux-create-cluster.md
In later tutorials, you'll learn how to add an Azure Linux node pool to an exist
## Prerequisites -- [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+- [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
- You need the latest version of Azure CLI. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). ## 1 - Install the Kubernetes CLI
azure-maps How To Create Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-create-template.md
You can create your Azure Maps account using an Azure Resource Manager (ARM) template. After you have an account, you can implement the APIs in your website or mobile application. If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template opens in the Azure portal.
azure-maps Migrate Get Static Map https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-get-static-map.md
+
+ Title: Migrate Bing Maps Get a Static Map API to Azure Maps Get Map Static Image API
+
+description: Learn how to Migrate the Bing Maps Get a Static Map API to the Azure Maps Get Map Static Image API.
+ Last updated : 06/26/2024
+# Migrate Bing Maps Get a Static Map API
+
+This article explains how to migrate the Bing Maps [Get a Static Map] API to the Azure Maps [Get Map Static Image] API. Azure Maps Get Map Static Image API renders a user-defined, rectangular Road, Satellite/Aerial, or Traffic style map image.
+
+## Prerequisites
+
+- An [Azure Account]
+- An [Azure Maps account]
+- A [subscription key] or other form of [Authentication with Azure Maps]
+
+## Notable differences
+
+- Bing Maps Get a Static Map API offers Road, Satellite/Aerial, Traffic, Streetside, Birds Eye, and Ordnance Survey map styles. Azure Maps Get Map Static Image API offers the same styles except for Streetside, Birds Eye, and Ordnance Survey.
+- Bing Maps Get a Static Map API supports getting a static map using coordinates, street address or place name as the location input. Azure Maps Get Map Static Image API supports only coordinates as the location input.
+- Bing Maps Get a Static Map API supports getting a static map of a driving, walking, or transit route natively. Azure Maps Get Map Static Image API doesn't provide route map functionality natively.
+- Bing Maps Get a Static Map API provides static maps in PNG, JPEG and GIF image formats. Azure Maps Get Map Static Image API provides static maps in PNG and JPEG image formats.
+- Bing Maps Get a Static Map API supports XML and JSON response formats. Azure Maps Get Map Static Image API supports only JSON response format.
+- Bing Maps Get a Static Map API supports HTTP GET and POST requests. Azure Maps Get Map Static Image API supports HTTP GET requests.
+- Bing Maps Get a Static Map API uses coordinates in the latitude & longitude format. Azure Maps Get Map Static Image API uses coordinates in the longitude & latitude format, as defined in [GeoJSON].
+- Unlike Bing Maps for Enterprise, Azure Maps is a global service that supports specifying a geographic scope, which allows you to limit data residency to the European (EU) or United States (US) geographic areas (geos). All requests (including input data) are processed exclusively in the specified geographic area. For more information, see [Azure Maps service geographic scope].
+
+## Security and authentication
+
+Bing Maps for Enterprise only supports API key authentication. Azure Maps supports multiple ways to authenticate your API calls, such as a [subscription key](azure-maps-authentication.md#shared-key-authentication), [Microsoft Entra ID], and [Shared Access Signature (SAS) Token]. For more information on security and authentication in Azure Maps, see [Authentication with Azure Maps] and the [Security] section in the Azure Maps Get Map Static Image documentation.
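+
+As a sketch of the difference, the following requests show the same Get Map Static Image call authenticated first with a subscription key and then with a Microsoft Entra ID bearer token. The coordinate, token, and client ID values are illustrative placeholders.
+
+``` http
+GET https://atlas.microsoft.com/map/static?api-version=2024-04-01&tilesetId=microsoft.base.road&zoom=15&center=-0.113629,51.504810&subscription-key={Your-Azure-Maps-Subscription-key}
+```
+
+With Microsoft Entra ID authentication, the `x-ms-client-id` header identifies the Azure Maps account:
+
+``` http
+GET https://atlas.microsoft.com/map/static?api-version=2024-04-01&tilesetId=microsoft.base.road&zoom=15&center=-0.113629,51.504810
+Authorization: Bearer {Microsoft-Entra-ID-access-token}
+x-ms-client-id: {Azure-Maps-Client-Id}
+```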
+
+## Request parameters
+
+The following table lists the Bing Maps _Get a Static Map_ request parameters and the Azure Maps equivalent:
+
+| Bing Maps request parameter | Parameter alias | Azure Maps request parameter | Required in Azure Maps | Azure Maps data type | Description |
+|--|--|--|--|--|--|
+| centerPoint | | center | True (if not using bbox) | number[] | Bing Maps Get a Static Map API requires coordinates in latitude & longitude format, whereas Azure Maps Get Map Static Image API requires longitude & latitude format, as defined in [GeoJSON]. <br><br>Longitude ranges from -180 to 180 and latitude from -90 to 90. Note: Either `center` or `bbox` is required; they're mutually exclusive. |
+| culture | c | language | FALSE | String | In Azure Maps Get Map Static Image API, this is the language in which map labels are returned, specified in the Azure Maps [request header]. For more information, see [Supported Languages]. |
+| declutterPins | dcl | Not supported   | Not supported | Not supported | |
+| dpi | dir | Not supported | Not supported | Not supported | |
+| drawCurve | dv | path | FALSE | String | See the example request after this table. |
+| fieldOfView | fov | Not supported | Not supported | Not supported | In Bing Maps, this parameter is used for `imagerySet` `Birdseye`, `BirdseyeWithLabels`, `BirdseyeV2`, `BirdseyeV2WithLabels`, `OrdnanceSurvey`, and `Streetside`. Azure Maps doesn't support these map styles. |
+| format | fmt | format | TRUE | String | Bing Maps Get a Static Map API provides static maps in PNG, JPEG and GIF image formats. Azure Maps Get Map Static Image API provides static maps in PNG and JPEG image formats. |
+| heading | | Not supported | Not supported | Not supported | In Bing Maps, this parameter is used for `imagerySet` `Birdseye`, `BirdseyeWithLabels`, `BirdseyeV2`, `BirdseyeV2WithLabels`, `OrdnanceSurvey`, and `Streetside`. Azure Maps doesn't support these map styles. |
+| highlightEntity | he | Not supported | Not supported | Not supported | In Bing Maps Get a Static Map API, this parameter is used to get a polygon of the location input (entity) displayed on the map natively. Azure Maps Get Map Static Image API doesn't support this feature; however, you can get a polygon of a location (locality) from the Azure Maps [Get Polygon] API and then display it on the static map. |
+| imagerySet | | tilesetId | TRUE | [TilesetId] | |
+| mapArea | ma | bbox | True (if not using center) | number[] | A bounding box, defined by two longitudes and two latitudes, represents the four sides of a rectangular area on the Earth, in the format `minLon, minLat, maxLon, maxLat`. <br><br>Note: Either `center` or `bbox` is required; they're mutually exclusive. `bbox` shouldn't be used with `height` or `width`. |
+| mapLayer | ml | trafficLayer | FALSE | TrafficTilesetId | Optional. If `trafficLayer` is provided, the API returns a map image with the corresponding traffic layer. For more information, see [TilesetId]. |
+| mapSize | ms | height | TRUE | integer int32 | |
+| | | width | | | |
+| mapMetadata | mmd | Not supported | Not supported | Not supported | |
+| orientation | dir | Not supported | Not supported | Not supported | In Bing Maps Get a Static Map API, this parameter is used for `imagerySet` `Birdseye`, `BirdseyeWithLabels`, `BirdseyeV2`, `BirdseyeV2WithLabels`, `OrdnanceSurvey`, and `Streetside`. Azure Maps doesn't support these map styles. |
+| pitch | | Not supported | Not supported | Not supported | In Bing Maps Get a Static Map API, this parameter is used for `imagerySet` `Birdseye`, `BirdseyeWithLabels`, `BirdseyeV2`, `BirdseyeV2WithLabels`, `OrdnanceSurvey`, and `Streetside`. Azure Maps doesn't support these map styles. |
+| pushpin | pp | pins | FALSE | String | In Bing Maps Get a Static Map API, an HTTP GET request is limited to 18 pins and an HTTP POST request is limited to 100 pins per static map. Azure Maps Get Map Static Image API HTTP GET request doesn't have a limit on the number of pins per static map. However, the number of pins supported on the static map is based on the maximum number of characters supported in the HTTP GET request. For more details on pushpin support, see the `pins` parameter in [URI Parameters], and see the example request after this table. |
+| query | | Not supported | Not supported | Not supported | Azure Maps Get Map Static Image API supports only coordinates as the location input, not street address or place name. Use the Azure Maps Get Geocoding API to convert a street address or place name to coordinates. |
+| Route Parameters: avoid | None | Not supported | Not supported | Not supported | Azure Maps Get Map Static Image API doesn't provide route map functionality natively. To get a static map with a route path on it, use the Azure Maps [Get Route Directions] or [Post Route Directions] API to get the route path coordinates, and then use the Azure Maps [Get Map Static Image] API `path` parameter (the equivalent of Bing Maps `drawCurve`) to overlay the route path on the static map, as shown in the example request after this table. |
+| Route Parameters: distanceBeforeFirstTurn | dbft | Not supported | Not supported | Not supported | Not supported natively; see `Route Parameters: avoid` above for the route overlay workaround. |
+| Route Parameters: dateTime | dt | Not supported | Not supported | Not supported | Not supported natively; see `Route Parameters: avoid` above for the route overlay workaround. |
+| Route Parameters: maxSolutions | maxSolns | Not supported | Not supported | Not supported | Not supported natively; see `Route Parameters: avoid` above for the route overlay workaround. |
+| Route Parameters: optimize | optmz | Not supported | Not supported | Not supported | Not supported natively; see `Route Parameters: avoid` above for the route overlay workaround. |
+| Route Parameters: timeType | tt | Not supported | Not supported | Not supported | Not supported natively; see `Route Parameters: avoid` above for the route overlay workaround. |
+| Route Parameters: travelMode | None | Not supported | Not supported | Not supported | Not supported natively; see `Route Parameters: avoid` above for the route overlay workaround. |
+| Route Parameters: waypoint.n | wp.n | Not supported | Not supported | Not supported | Not supported natively; see `Route Parameters: avoid` above for the route overlay workaround. |
+| style | st | Not supported | Not supported | Not supported | |
+| userRegion | ur | view | FALSE | String | A string that represents an [ISO 3166-1 Alpha-2 region/country code]. This alters geopolitical disputed borders and labels to align with the specified user region. By default, the `view` parameter is set to `Auto` even if not defined in the request. For more information, see [Supported Views]. |
+| zoomLevel | | zoom | FALSE | String | Desired zoom level of the map. Zoom value must be in the range 0-20 (inclusive). Default value is 12. |
+
+For more information about the Azure Maps Get Map Static Image API request parameters, see [URI Parameters].
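+
+For example, the following request overlays a pushpin (`pins`, the equivalent of Bing Maps `pushpin`) and a line (`path`, the equivalent of Bing Maps `drawCurve`) on a static map. This is a sketch: the style modifiers and coordinates are illustrative, and the full syntax is described in [URI Parameters].
+
+``` http
+GET https://atlas.microsoft.com/map/static?api-version=2024-04-01&tilesetId=microsoft.base.road&zoom=14&center=-0.113629,51.504810&pins=default||-0.113629 51.504810&path=lc0000FF|lw3||-0.113629 51.504810|-0.118092 51.509865&subscription-key={Your-Azure-Maps-Subscription-key}
+```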
+
+## Request examples
+
+Bing Maps _Get a Static Map_ API sample GET request:
+
+``` http
+https://dev.virtualearth.net/REST/v1/Imagery/Map/Road/51.504810,-0.113629/15?mapSize=500,500&pp=51.504810,-0.113629;45&key={BingMapsKey}
+```
+
+Azure Maps _Get Map Static Image_ API sample GET request:
+
+``` http
+https://atlas.microsoft.com/map/static?api-version=2024-04-01&tilesetId=microsoft.base.road&zoom=15&center=-0.113629,51.504810&subscription-key={Your-Azure-Maps-Subscription-key}
+```
+
+## Response examples
+
+The following screenshot shows what is returned in the body of the HTTP response when executing the Bing Maps _Get a Static Map_ request:
++
+The following screenshot shows what is returned in the body of the HTTP response when executing an Azure Maps _Get Map Static Image_ request:
++
+## Transactions usage
+
+Like Bing Maps Get a Static Map API, Azure Maps Get Map Static Image API logs one billable transaction per request. For more information on Azure Maps transactions, see [Understanding Azure Maps Transactions].
+
+## Additional information
+
+- [Render custom data on a raster map]
+
+Support
+
+- [Microsoft Q&A Forum]
+
+[Authentication with Azure Maps]: azure-maps-authentication.md
+[Azure Account]: https://azure.microsoft.com/
+[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
+[Azure Maps service geographic scope]: geographic-scope.md
+[GeoJSON]: https://geojson.org
+[Get a Static Map]: /bingmaps/rest-services/imagery/get-a-static-map
+[Get Map Static Image]: /rest/api/maps/render/get-map-static-image
+[Get Polygon]: /rest/api/maps/search/get-polygon
+[Get Route Directions]: /rest/api/maps/route/get-route-directions
+[ISO 3166-1 Alpha-2 region/country code]: https://en.wikipedia.org/wiki/ISO_3166-1_alpha-2
+[Microsoft Entra ID]: azure-maps-authentication.md#microsoft-entra-authentication
+[Microsoft Q&A Forum]: /answers
+[Post Route Directions]: /rest/api/maps/route/post-route-directions
+[Render custom data on a raster map]: how-to-render-custom-data.md
+[request header]: /rest/api/maps/render/get-map-static-image?#request-headers
+[Security]: /rest/api/maps/render/get-map-static-image#security
+[Shared Access Signature (SAS) Token]: azure-maps-authentication.md#shared-access-signature-token-authentication
+[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
+[Supported Languages]: supported-languages.md
+[Supported Views]: supported-languages.md#azure-maps-supported-views
+[TilesetId]: /rest/api/maps/render/get-map-static-image#tilesetid
+[Understanding Azure Maps Transactions]: understanding-azure-maps-transactions.md
+[URI Parameters]: /rest/api/maps/render/get-map-static-image#uri-parameters
azure-monitor Agent Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agent-manage.md
Use the Windows agent.
Perform the following steps to configure the Log Analytics agent for Windows to report to a System Center Operations Manager management group. 1. Sign on to the computer with an account that has administrative rights.
Perform the following steps to configure the Log Analytics agent for Windows to
Perform the following steps to configure the Log Analytics agent for Linux to report to a System Center Operations Manager management group. 1. Edit the file `/etc/opt/omi/conf/omiserver.conf`.
azure-monitor Data Collection Text Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-text-log.md
To complete this procedure, you need:
- JSON text must be contained in a single row for proper ingestion. The JSON body (file) format is not supported. - Optionally a Data Collection Endpoint if you plan to use Azure Monitor Private Links. The data collection endpoint must be in the same region as the Log Analytics workspace. For more information, see [How to set up data collection endpoints based on your deployment](../essentials/data-collection-endpoint-overview.md#how-to-set-up-data-collection-endpoints-based-on-your-deployment).
- For more information, see [How to set up data collection endpoints based on your deployment](../essentials/data-collection-endpoint-overview.md#how-to-set-up-data-collection-endpoints-based-on-your-deployment).
-- - A Virtual Machine, Virtual Machine Scale Set, Arc-enabled server on-premises or Azure Monitoring Agent on a Windows on-premises client that writes logs to a text or JSON file.
azure-monitor Data Sources Collectd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-sources-collectd.md
A full list of available plugins can be found at [Table of Plugins](https://coll
The following CollectD configuration is included in the Log Analytics agent for Linux to route CollectD data to the Log Analytics agent for Linux. ```xml LoadPlugin write_http
azure-monitor Data Sources Json https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-sources-json.md
# Collecting custom JSON data sources with the Log Analytics agent for Linux in Azure Monitor Custom JSON data sources can be collected into [Azure Monitor](../data-platform.md) using the Log Analytics agent for Linux. These custom data sources can be simple scripts returning JSON such as [curl](https://curl.haxx.se/) or one of [FluentD's 300+ plugins](https://www.fluentd.org/plugins/all). This article describes the configuration required for this data collection.
azure-monitor Vmext Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/vmext-troubleshoot.md
If the Microsoft Monitoring Agent VM extension isn't installing or reporting, pe
For more information, see [Troubleshooting Windows extensions](../../virtual-machines/extensions/oms-windows.md). ## Troubleshoot the Linux VM extension If the Log Analytics agent for Linux VM extension isn't installing or reporting, perform the following steps to troubleshoot the issue: 1. If the extension status is **Unknown**, check if the Azure VM agent is installed and working correctly by reviewing the VM agent log file `/var/log/waagent.log`.
azure-monitor Alerts Manage Alerts Previous Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-manage-alerts-previous-version.md
The current alert rule wizard is different from the earlier experience:
## Manage log search alerts using PowerShell Use the following PowerShell cmdlets to manage rules with the [Scheduled Query Rules API](/rest/api/monitor/scheduledqueryrule-2018-04-16/scheduled-query-rules):
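These cmdlets wrap the Scheduled Query Rules REST API. As a minimal sketch of what they call under the hood, the following request lists the log search alert rules in a subscription; the subscription ID and token are placeholders.

``` http
GET https://management.azure.com/subscriptions/{subscription-id}/providers/Microsoft.Insights/scheduledQueryRules?api-version=2018-04-16
Authorization: Bearer {token}
```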
azure-monitor Alerts Metric Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-metric-logs.md
# Create a metric alert in Azure Monitor Logs You can use metric alert capabilities on a predefined set of logs in Azure Monitor Logs. The monitored logs, which can be collected from Azure or on-premises computers, are converted to metrics and then monitored with metric alert rules, just like any other metric.
azure-monitor Alerts Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-types.md
Log search alerts can measure two different things, which can be used for differ
- **Table rows**: The number of rows returned can be used to work with events such as Windows event logs, Syslog, and application exceptions. - **Calculation of a numeric column**: Calculations based on any numeric column can be used to include any number of resources. An example is CPU percentage.
-You can configure if log search alerts are [stateful or stateless](alerts-overview.md#alerts-and-state). This feature is currently in preview.
+You can configure if log search alerts are [stateful or stateless](alerts-overview.md#alerts-and-state).
Note that stateful log search alerts have these limitations: - they can trigger up to 300 alerts per evaluation. - you can have a maximum of 5000 alerts with the `fired` alert condition.
azure-monitor Api Custom Events Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/api-custom-events-metrics.md
Insert a few lines of code in your application to find out what users are doing with it, or to help diagnose issues. You can send telemetry from device and desktop apps, web clients, and web servers. Use the [Application Insights](./app-insights-overview.md) core telemetry API to send custom events and metrics and your own versions of standard telemetry. This API is the same API that the standard Application Insights data collectors use. ## API summary
azure-monitor Application Insights Asp Net Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/application-insights-asp-net-agent.md
We've also provided manual download instructions in case you don't have internet
To get started, you need a connection string. For more information, see [Connection strings](sdk-connection-string.md). ### Run PowerShell as Admin with an elevated execution policy
This tab describes the following cmdlets, which are members of the [Az.Applicati
> - To get started, you need a connection string. For more information, see [Create a resource](create-workspace-resource.md). > - This cmdlet requires that you review and accept our license and privacy statement. > [!IMPORTANT] > This cmdlet requires a PowerShell session with Admin permissions and an elevated execution policy. For more information, see [Run PowerShell as administrator with an elevated execution policy](?tabs=detailed-instructions#run-powershell-as-admin-with-an-elevated-execution-policy).
azure-monitor Asp Net Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-core.md
We use an [MVC application](/aspnet/core/tutorials/first-mvc-app) example. If yo
An [OpenTelemetry-based .NET offering](opentelemetry-enable.md?tabs=net) is available. For more information, see [OpenTelemetry overview](opentelemetry-overview.md). > [!NOTE] > If you want to use standalone ILogger provider, use [Microsoft.Extensions.Logging.ApplicationInsight](./ilogger.md).
azure-monitor Asp Net Trace Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-trace-logs.md
Send diagnostic tracing logs for your ASP.NET/ASP.NET Core application from ILog
> > ## Install logging on your app
azure-monitor Asp Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net.md
This procedure configures your ASP.NET web app to send telemetry to the [Applica
[!INCLUDE [azure-monitor-app-insights-otel-available-notification](../includes/azure-monitor-app-insights-otel-available-notification.md)] ## Prerequisites To add Application Insights to your ASP.NET website, you need to:
azure-monitor Azure Web Apps Net Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps-net-core.md
Enabling monitoring on your ASP.NET Core-based web applications running on [Azure App Service](../../app-service/index.yml) is now easier than ever. Previously, you needed to manually instrument your app. Now, the latest extension/agent is built into the App Service image by default. This article walks you through enabling Azure Monitor Application Insights monitoring. It also provides preliminary guidance for automating the process for large-scale deployments. ## Enable autoinstrumentation monitoring
azure-monitor Azure Web Apps Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps-net.md
Enabling monitoring on your ASP.NET-based web applications running on [Azure App
If both autoinstrumentation monitoring and manual SDK-based instrumentation are detected, only the manual instrumentation settings will be honored. This arrangement prevents duplicate data from being sent. To learn more, see the [Troubleshooting section](#troubleshooting). ## Enable autoinstrumentation monitoring
azure-monitor Configuration With Applicationinsights Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/configuration-with-applicationinsights-config.md
See connection string [code samples](sdk-connection-string.md#code-samples).
## InstrumentationKey This setting determines the Application Insights resource in which your data appears. Typically, you create a separate resource, with a separate key, for each of your applications.
azure-monitor Data Model Complete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-model-complete.md
You can group requests by logical `name` and define the `source` of this request
Request telemetry supports the standard extensibility model by using custom `properties` and `measurements`. ### Name
azure-monitor Javascript Feature Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-feature-extensions.md
The following key properties are captured by default when the plug-in is enabled
Users can set up the Click Analytics Auto-Collection plug-in via JavaScript (Web) SDK Loader Script or npm and then optionally add a framework extension. ### Add the code
azure-monitor Javascript Framework Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-framework-extensions.md
npm install @microsoft/applicationinsights-angularplugin-js
### Add the extension to your code #### [React](#tab/react)
azure-monitor Live Stream https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/live-stream.md
Live metrics are currently supported for ASP.NET, ASP.NET Core, Azure Functions,
* [ASP.NET Core](opentelemetry-enable.md?tabs=aspnetcore): Enabled by default. * [Java](./opentelemetry-enable.md?tabs=java): Enabled by default. * [Node.js](opentelemetry-enable.md?tabs=nodejs): Enabled by default.
- * [Python](opentelemetry-enable.md?tabs=python): Enabled by default.
+ * [Python](opentelemetry-enable.md?tabs=python): Pass `enable_live_metrics=True` into `configure_azure_monitor`. See the [Azure Monitor OpenTelemetry Distro](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/monitor/azure-monitor-opentelemetry#usage) documentation for more information.
# [Classic API](#tab/classic)
Live metrics are currently supported for ASP.NET, ASP.NET Core, Azure Functions,
-2. Open the Application Insigwhts resource for your application in the [Azure portal](https://portal.azure.com). Select **Live metrics**, which is listed under **Investigate** in the left hand menu.
+2. Open the Application Insights resource for your application in the [Azure portal](https://portal.azure.com). Select **Live metrics**, which is listed under **Investigate** in the left hand menu.
3. [Secure the control channel](#secure-the-control-channel) if you might use sensitive data like customer names in your filters. ## How do live metrics differ from metrics explorer and Log Analytics?
It's possible to try custom filters without having to set up an authenticated ch
| Azure Functions v2 | Supported | Supported | Supported | Supported | **Not supported** | | Java | Supported (V2.0.0+) | Supported (V2.0.0+) | **Not supported** | Supported (V3.2.0+) | **Not supported** | | Node.js | Supported (V1.3.0+) | Supported (V1.3.0+) | **Not supported** | Supported (V1.3.0+) | **Not supported** |
-| Python | **Not supported** | **Not supported** | **Not supported** | **Not supported** | **Not supported** |
+| Python | Supported (Distro Version 1.6.0+) | **Not supported** | **Not supported** | **Not supported** | **Not supported** |
Basic metrics include request, dependency, and exception rate. Performance metrics (performance counters) include memory and CPU. Sample telemetry shows a stream of detailed information for failed requests and dependencies, exceptions, events, and traces.
azure-monitor Monitor Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/monitor-functions.md
Application Insights collects log, performance, and error data and automatically
The required Application Insights instrumentation is built into Azure Functions. All you need is a valid connection string to connect your function app to an Application Insights resource. The connection string should be added to your application settings when your function app resource is created in Azure. If your function app doesn't already have a connection string, you can set it manually. For more information, see [Monitor executions in Azure Functions](../../azure-functions/functions-monitoring.md?tabs=cmd) and [Connection strings](sdk-connection-string.md). For a list of supported autoinstrumentation scenarios, see [Supported environments, languages, and resource providers](codeless-overview.md#supported-environments-languages-and-resource-providers).
azure-monitor Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/nodejs.md
Before you begin, make sure that you have an Azure subscription, or [get a new o
1. Sign in to the [Azure portal](https://portal.azure.com). 1. Create an [Application Insights resource](create-workspace-resource.md). ### <a name="sdk"></a> Set up the Node.js client library
azure-monitor Opentelemetry Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-enable.md
Point the Java virtual machine (JVM) to the jar file by adding `-javaagent:"path
> If you develop a Spring Boot application, you can optionally replace the JVM argument by a programmatic configuration. For more information, see [Using Azure Monitor Application Insights with Spring Boot](./java-spring-boot.md).
-##### [Java-Native](#tab/java-native)
+##### [Java Native](#tab/java-native)
Several automatic instrumentations are enabled through configuration changes; no code changes are required
azure-monitor Opentelemetry Nodejs Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-nodejs-migrate.md
This guide provides two options to upgrade from the Azure Monitor Application In
Remove all Application Insights instrumentation from your code. Delete any sections where the Application Insights client is initialized, modified, or called. 4. Enable Application Insights with the Azure Monitor OpenTelemetry Distro.-
+ > [!IMPORTANT]
+ > Call `useAzureMonitor` *before* you import anything else. Telemetry might be lost if other libraries are imported first.
Follow [getting started](opentelemetry-enable.md?tabs=nodejs) to onboard to the Azure Monitor OpenTelemetry Distro. #### Azure Monitor OpenTelemetry Distro changes and limitations
-The APIs from the Application Insights SDK 2.X aren't available in the Azure Monitor OpenTelemetry Distro. You can access these APIs through a nonbreaking upgrade path in the Application Insights SDK 3.X.
+ * The APIs from the Application Insights SDK 2.X aren't available in the Azure Monitor OpenTelemetry Distro. You can access these APIs through a nonbreaking upgrade path in the Application Insights SDK 3.X.
+ * Filtering dependencies, logs, and exceptions by operation name is not yet supported.
## [Upgrade](#tab/upgrade)
azure-monitor Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/powershell.md
# Manage Application Insights resources by using PowerShell This article shows you how to automate the creation and update of [Application Insights](./app-insights-overview.md) resources by using Azure Resource Manager. You might, for example, do so as part of a build process. Along with the basic Application Insights resource, you can create [availability web tests](./availability-overview.md), set up [alerts](../alerts/alerts-log.md), set the [pricing scheme](../logs/cost-logs.md#application-insights-billing), and create other Azure resources.
More properties are available via the cmdlets:
See the [detailed documentation](/powershell/module/az.applicationinsights) for the parameters for these cmdlets. ## Set the data retention
azure-monitor Sampling Classic Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sampling-classic-api.md
Insert a line like `samplingPercentage: 10,` before the instrumentation key:
appInsights.trackPageView(); </script> ``` For the sampling percentage, choose a percentage that is close to 100/N where N is an integer. Currently sampling doesn't support other values.
azure-monitor Sdk Connection String https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sdk-connection-string.md
Connection strings define where to send telemetry data.
Key-value pairs provide an easy way for users to define a prefix/suffix combination for each Application Insights service or product. ## Scenario overview
azure-monitor Statsbeat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/statsbeat.md
Statsbeat supports EU Data Boundary for Application Insights resources in the fo
|Throttle Count|Count| `Resource Provider`, `Attach Type`, `Instrumentation Key`, `Runtime Version`, `Operating System`, `Language`, `Version`, `Endpoint`, `Host`, `Status Code`| |Exception Count|Count| `Resource Provider`, `Attach Type`, `Instrumentation Key`, `Runtime Version`, `Operating System`, `Language`, `Version`, `Endpoint`, `Host`, `Exception Type`| #### Attach Statsbeat
azure-monitor Usage Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/usage-overview.md
The best experience is obtained by installing Application Insights both in your
1. **Webpage code:** Use the JavaScript SDK to collect data from webpages. See [Get started with the JavaScript SDK](./javascript-sdk.md).
- [!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)]
+ [!INCLUDE [azure-monitor-log-analytics-rebrand](~/reusable-content/ce-skilling/azure/includes/azure-monitor-instrumentation-key-deprecation.md)]
To learn more advanced configurations for monitoring websites, check out the [JavaScript SDK reference article](./javascript.md).
azure-monitor Worker Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/worker-service.md
The [Application Insights SDK for Worker Service](https://www.nuget.org/packages
You must have a valid Application Insights connection string. This string is required to send any telemetry to Application Insights. If you need to create a new Application Insights resource to get a connection string, see [Connection Strings](./sdk-connection-string.md). ## Use Application Insights SDK for Worker Service
azure-monitor Container Insights Log Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-log-query.md
Container insights support viewing metrics stored in your Log Analytics workspac
### Why are log lines larger than 16 KB split into multiple records in Log Analytics?
-The agent uses the [Docker JSON file logging driver](https://docs.docker.com/config/containers/logging/json-file/) to capture the stdout and stderr of containers. This logging driver splits log lines [larger than 16 KB](https://github.com/moby/moby/pull/22982) into multiple lines when they're copied from stdout or stderr to a file.
+The agent uses the [Docker JSON file logging driver](https://docs.docker.com/config/containers/logging/json-file/) to capture the stdout and stderr of containers. This logging driver splits log lines [larger than 16 KB](https://github.com/moby/moby/pull/22982) into multiple lines when they're copied from stdout or stderr to a file. Use [Multi-line logging](./container-insights-logs-schema.md#multi-line-logging-in-container-insights) to support log records up to 64 KB.
## Next steps
azure-monitor Collect Custom Metrics Guestos Resource Manager Vmss https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/collect-custom-metrics-guestos-resource-manager-vmss.md
Last updated 07/30/2023
# Send guest OS metrics to the Azure Monitor metric store by using an Azure Resource Manager template for a Windows virtual machine scale set By using the Azure Monitor [Azure Diagnostics extension for Windows (WAD)](../agents/diagnostics-extension-overview.md), you can collect metrics and logs from the guest operating system (guest OS) that runs as part of a virtual machine, cloud service, or Azure Service Fabric cluster. The extension can send telemetry to many different locations listed in the previously linked article.
azure-monitor Collect Custom Metrics Guestos Vm Classic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/collect-custom-metrics-guestos-vm-classic.md
Last updated 05/31/2024
# Send Guest OS metrics to the Azure Monitor metrics database for a Windows virtual machine (classic) The Azure Monitor [Diagnostics extension](../agents/diagnostics-extension-overview.md) (known as "WAD" or "Diagnostics") allows you to collect metrics and logs from the guest operating system (Guest OS) running as part of a virtual machine, cloud service, or Service Fabric cluster. The extension can send telemetry to [many different locations.](../data-platform.md?toc=%2fazure%2fazure-monitor%2ftoc.json)
azure-monitor Collect Custom Metrics Guestos Vm Cloud Service Classic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/collect-custom-metrics-guestos-vm-cloud-service-classic.md
Last updated 05/31/2024
# Send Guest OS metrics to the Azure Monitor metric store classic Cloud Services With the Azure Monitor [Diagnostics extension](../agents/diagnostics-extension-overview.md), you can collect metrics and logs from the guest operating system (Guest OS) running as part of a virtual machine, cloud service, or Service Fabric cluster. The extension can send telemetry to [many different locations.](../data-platform.md?toc=%2fazure%2fazure-monitor%2ftoc.json)
azure-monitor Migrate To Batch Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/migrate-to-batch-api.md
description: How to migrate from the metrics API to the getBatch API
Previously updated : 03/11/2024 Last updated : 06/27/2024
In the `metrics:getBatch` error response, the error content is wrapped inside a
- Another common cause is specifying a filter that doesn't match any resources. For example, if the filter specifies a dimension value that doesn't exist on any resources in the subscription and region combination, `"timeseries": []` is returned. + Wildcard filters
- Using a wildcard filter such as `Microsoft.ResourceId eq '*'` causes the API to return a time series for every resourceId in the subscription and region. If the subscription and region combination contains no resources, an empty time series is returned. The same query without the wildcard filter would return a single time series, aggregating the requested metric over the requested dimensions, for example subscription and region. If there are no resources in the subscription and region combination, the API returns a single time series with a single data point of `0`.
-
+ Using a wildcard filter such as `Microsoft.ResourceId eq '*'` causes the API to return a time series for every resourceId in the subscription and region. If the subscription and region combination contains no resources, an empty time series is returned. The same query without the wildcard filter would return a single time series, aggregating the requested metric over the requested dimensions, for example subscription and region.
+ Custom metrics aren't currently supported. The `metrics:getBatch` API doesn't support querying custom metrics, or queries where the metric namespace name isn't a resource type. This is the case for VM Guest OS metrics that use the namespace "azure.vm.windows.guestmetrics" or "azure.vm.linux.guestmetrics".
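+
+As a sketch, an equivalent `metrics:getBatch` call is an HTTP POST against the regional data-plane endpoint, with the target resource IDs moved into the request body. The subscription, resource group, and VM names here are illustrative placeholders, and the api-version should be checked against the current batch API reference.
+
+``` http
+POST https://eastus.metrics.monitor.azure.com/subscriptions/12345678-abcd-98765432-abcdef012345/metrics:getBatch?metricnamespace=microsoft.compute/virtualmachines&metricnames=Percentage CPU&aggregation=average&api-version=2023-10-01
+Content-Type: application/json
+
+{
+  "resourceids": [
+    "/subscriptions/12345678-abcd-98765432-abcdef012345/resourceGroups/eastus-rg/providers/Microsoft.Compute/virtualMachines/vm1",
+    "/subscriptions/12345678-abcd-98765432-abcdef012345/resourceGroups/eastus-rg/providers/Microsoft.Compute/virtualMachines/vm2"
+  ]
+}
+```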
azure-monitor Rest Api Walkthrough https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/rest-api-walkthrough.md
Title: Azure monitoring REST API walkthrough
description: How to authenticate requests and use the Azure Monitor REST API to retrieve available metric definitions, metric values, and activity logs. Previously updated : 03/11/2024 Last updated : 06/27/2024
After retrieving the metric definitions and dimension values, retrieve the metri
Use the metric's `name.value` element in the filter definitions. If no dimension filters are specified, the rolled-up, aggregated metric is returned.
-To fetch multiple time series with specific dimension values, specify a filter query parameter that specifies both dimension values such as `"&$filter=ApiName eq 'ListContainers' or ApiName eq 'GetBlobServiceProperties'"`.
-
-To return a time series for every value of a given dimension, use an `*` filter such as `"&$filter=ApiName eq '*'"`. The `Top` and `OrderBy` query parameters can be used to limit and order the number of time series returned.
+### Multiple time series
+A time series is a set of data points that are ordered by time for a given combination of dimensions. A dimension is an aspect of the metric that describes the data point such as resource Id, region, or ApiName.
+
++ To fetch multiple time series with specific dimension values, specify a filter query parameter that specifies both dimension values, such as `"&$filter=ApiName eq 'ListContainers' or ApiName eq 'GetBlobServiceProperties'"`. In this example, you get one time series where `ApiName` is `ListContainers` and a second time series where `ApiName` is `GetBlobServiceProperties`.
++ To return a time series for every value of a given dimension, use an `*` filter such as `"&$filter=ApiName eq '*'"`. Use the `Top` and `OrderBy` query parameters to limit and sort the number of time series returned. In this example, you get a time series for every value of `ApiName` in the result set. If no data is returned, the API returns an empty time series `"timeseries": []`.

> [!NOTE] > To retrieve multi-dimensional metric values using the Azure Monitor REST API, use the API version "2019-07-01" or later.
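+
+As a sketch, a complete request that uses the first filter might look like the following example. The storage account resource path is an illustrative placeholder.
+
+``` http
+GET https://management.azure.com/subscriptions/12345678-abcd-98765432-abcdef012345/resourceGroups/eastus-rg/providers/Microsoft.Storage/storageAccounts/ContosoStorage/blobServices/default/providers/microsoft.insights/metrics?metricnames=Transactions&timespan=2023-06-25T22:20:00.000Z/2023-06-26T22:25:00.000Z&interval=PT5M&aggregation=Total&$filter=ApiName eq 'ListContainers' or ApiName eq 'GetBlobServiceProperties'&api-version=2019-07-01
+```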
Below is an equivalent metrics request for multiple resources:
GET https://management.azure.com/subscriptions/12345678-abcd-98765432-abcdef012345/providers/microsoft.Insights/metrics?timespan=2023-06-25T22:20:00.000Z/2023-06-26T22:25:00.000Z&interval=PT5M&metricnames=Percentage CPU&aggregation=average&api-version=2021-05-01&region=eastus&metricNamespace=microsoft.compute/virtualmachines&$filter=Microsoft.ResourceId eq '*' ``` > [!NOTE]
-> A `Microsoft.ResourceId eq '*'` filter is added in the example for the multi resource metrics requests. The filter tells the API to return a separate time series per virtual machine resource in the subscription and region. Without the filter the API would return a single time series aggregating the average CPU for all VMs. The times series for each resource is differentiated by the `Microsoft.ResourceId` metadata value on each time series entry, as can be seen in the following sample return value. If there are no resourceIds retrieved by this query an empty time series`"timeseries": []` is returned.
+> A `Microsoft.ResourceId eq '*'` filter is added in the example for the multi resource metrics requests. The `*` filter tells the API to return a separate time series for each virtual machine resource that has data in the subscription and region. Without the filter, the API would return a single time series aggregating the average CPU for all VMs. The time series for each resource is differentiated by the `Microsoft.ResourceId` metadata value on each time series entry, as can be seen in the following sample return value. If no resourceIds are retrieved by this query, an empty time series `"timeseries": []` is returned.
```JSON {
GET https://management.azure.com/subscriptions/12345678-abcd-98765432-abcdef0123
"resourceregion": "eastus" } ```-
+
### Troubleshooting querying metrics for multiple resources + Empty time series returned `"timeseries": []`
GET https://management.azure.com/subscriptions/12345678-abcd-98765432-abcdef0123
- Another common cause is specifying a filter that doesn't match any resources. For example, if the filter specifies a dimension value that doesn't exist on any resources in the subscription and region combination, `"timeseries": []` is returned. + Wildcard filters
- Using a wildcard filter such as `Microsoft.ResourceId eq '*'` causes the API to return a time series for every resourceId in the subscription and region. If the subscription and region combination contains no resources, an empty time series is returned. The same query without the wildcard filter would return a single time series, aggregating the requested metric over the requested dimensions, for example subscription and region. If there are no resources in the subscription and region combination, the API returns a single time series with a single data point of `0`.
+ Using a wildcard filter such as `Microsoft.ResourceId eq '*'` causes the API to return a time series for every resourceId in the subscription and region. If the subscription and region combination contains no resources, an empty time series is returned. The same query without the wildcard filter would return a single time series, aggregating the requested metric over the requested dimensions, for example subscription and region.
+ 401 authorization errors: The individual resource metrics APIs require the user to have the [Monitoring Reader](../../role-based-access-control/built-in-roles.md#monitoring-reader) permission on the resource being queried. Because the multi resource metrics APIs are subscription level APIs, users must have the [Monitoring Reader](../../role-based-access-control/built-in-roles.md#monitoring-reader) permission for the queried subscription to use the multi resource metrics APIs. Even if users have Monitoring Reader on all the resources in a subscription, the request fails if the user doesn't have Monitoring Reader on the subscription itself. - ## Next steps - Review the [overview of monitoring](../overview.md).
azure-monitor Code Optimizations Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/code-optimizations-troubleshoot.md
+
+ Title: Troubleshoot Code Optimizations (Preview)
+description: Learn how to use Application Insights Code Optimizations on Azure. View a checklist of troubleshooting steps.
++
+editor: v-jsitser
+ Last updated : 06/25/2024
+# Troubleshoot Code Optimizations (Preview)
+
+This article provides troubleshooting steps and information to use Application Insights Code Optimizations for Microsoft Azure.
+
+## Troubleshooting checklist
+
+### Step 1: View a video about Code Optimizations setup
+
+View the following demonstration video to learn how to set up Code Optimizations correctly.
+
+> [!VIDEO https://www.youtube-nocookie.com/embed/vbi9YQgIgC8]
+
+### Step 2: Make sure that your app is connected to an Application Insights resource
+
+[Create an Application Insights resource](/azure/azure-monitor/app/create-workspace-resource) and verify that it's connected to the correct app.
+
+### Step 3: Verify that Application Insights Profiler is enabled
+
+[Enable Application Insights Profiler](/azure/azure-monitor/profiler/profiler-overview).
+
+### Step 4: Verify that Application Insights Profiler is collecting profiles
+
+To make sure that profiles are uploaded to your Application Insights resource, follow these steps:
+
+1. In the [Azure portal](https://portal.azure.com), search for and select **Application Insights**.
+1. In the list of Application Insights resources, select the name of your resource.
+1. In the navigation pane of your Application Insights resource, locate the **Investigate** heading, and then select **Performance**.
+1. On the **Performance** page of your Application Insights resource, select **Profiler**:
+
+ :::image type="content" source="./media/code-optimizations-troubleshoot/performance-page.png" alt-text="Azure portal screenshot that shows how to navigate to the Application Insights Profiler.":::
+
+1. On the **Application Insights Profiler** page, view the **Recent profiling sessions** section.
+
+ :::image type="content" source="./media/code-optimizations-troubleshoot/profiling-sessions.png" alt-text="Azure portal screenshot of the Application Insights Profiler page." lightbox="./media/code-optimizations-troubleshoot/profiling-sessions.png":::
+
+ > [!NOTE]
+ > If you don't see any profiling sessions, see [Troubleshoot Application Insights Profiler](../profiler/profiler-troubleshooting.md).
+
+### Step 5: Regularly check the Profiler
+
+After you successfully complete the previous steps, keep checking the Profiler for insights. The service continues to analyze your profiles and provides insights as soon as it detects any issues in your code. After you enable Application Insights Profiler, it might take several hours to generate profiles and for the service to analyze them. If the service detects no issues in your code, a message appears confirming that no insights were found.
+
+## Contact us for help
+
+If you have questions or need help, [create a support request](https://ms.portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview?DMC=troubleshoot), or ask [Azure community support](https://azure.microsoft.com/support/community). You can also submit product feedback to [Azure feedback community](https://feedback.azure.com/d365community).
azure-monitor Code Optimizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/code-optimizations.md
Previously updated : 03/08/2024 Last updated : 06/25/2024
Get started with Code Optimizations by enabling the following features on your a
- [Application Insights](../app/create-workspace-resource.md) - [Application Insights Profiler](../profiler/profiler-overview.md)
-Running into issues? Check the [Troubleshooting guide](/troubleshoot/azure/azure-monitor/app-insights/code-optimizations-troubleshooting)
+Running into issues? Check the [Troubleshooting guide](./code-optimizations-troubleshoot.md)
azure-monitor View Code Optimizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/view-code-optimizations.md
Previously updated : 03/05/2024 Last updated : 06/25/2024
You can also view a graph depicting a specific performance issue's impact and th
## Next steps > [!div class="nextstepaction"]
-> [Troubleshoot Code Optimizations](/troubleshoot/azure/azure-monitor/app-insights/code-optimizations-troubleshooting)
+> [Troubleshoot Code Optimizations](./code-optimizations-troubleshoot.md)
azure-monitor Computer Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/computer-groups.md
Last updated 03/14/2023
# Computer groups in Azure Monitor log queries Computer groups in Azure Monitor allow you to scope [log queries](./log-query-overview.md) to a particular set of computers. Each group is populated with computers using a query that you define. When the group is included in a log query, the results are limited to records that match the computers in the group. ## Permissions required
azure-monitor Logs Dedicated Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-dedicated-clusters.md
To perform cluster-related actions, you need these permissions:
For more information on Log Analytics permissions, see [Manage access to log data and workspaces in Azure Monitor](./manage-access.md).
+## Resource Manager template samples
+
+This article includes sample [Azure Resource Manager (ARM) templates](../../azure-resource-manager/templates/syntax.md) to create and configure Log Analytics clusters in Azure Monitor. Each sample includes a template file and a parameters file with sample values to provide to the template.
++
+### Template references
+
+- [Microsoft.OperationalInsights clusters](/azure/templates/microsoft.operationalinsights/2020-03-01-preview/clusters)
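+
+To deploy any of these samples, you can use a standard deployment command. A minimal Azure CLI sketch (the resource group, template file, and parameter file names are placeholders):
+
+```azurecli
+az deployment group create \
+  --resource-group my-resource-group \
+  --template-file cluster.bicep \
+  --parameters @cluster.parameters.json
+```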
+ ## Create a dedicated cluster Provide the following properties when creating a new dedicated cluster:
Content-type: application/json
Should be 202 (Accepted) and a header.
+#### [ARM template (Bicep)](#tab/bicep)
+
+The following sample creates a new empty Log Analytics cluster.
+
+```bicep
+@description('Specify the name of the Log Analytics cluster.')
+param clusterName string
+
+@description('Specify the location of the resources.')
+param location string = resourceGroup().location
+
+@description('Specify the capacity reservation value.')
+@allowed([
+ 100
+ 200
+ 300
+ 400
+ 500
+ 1000
+ 2000
+ 5000
+])
+param CommitmentTier int
+
+@description('Specify the billing type settings. Can be \'Cluster\' (default) or \'Workspaces\' for proportional billing on workspaces.')
+@allowed([
+ 'Cluster'
+ 'Workspaces'
+])
+param billingType string
+
+resource cluster 'Microsoft.OperationalInsights/clusters@2021-06-01' = {
+ name: clusterName
+ location: location
+ identity: {
+ type: 'SystemAssigned'
+ }
+ sku: {
+ name: 'CapacityReservation'
+ capacity: CommitmentTier
+ }
+ properties: {
+ billingType: billingType
+ }
+}
+```
+
+**Parameter file**
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-08-01/deploymentParameters.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "clusterName": {
+ "value": "MyCluster"
+ },
+ "CommitmentTier": {
+ "value": 500
+ },
+ "billingType": {
+ "value": "Cluster"
+ }
+ }
+}
+```
+
+#### [ARM template (JSON)](#tab/json)
+
+The following sample creates a new empty Log Analytics cluster.
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "clusterName": {
+ "type": "string",
+ "metadata": {
+ "description": "Specify the name of the Log Analytics cluster."
+ }
+ },
+ "location": {
+ "type": "string",
+ "defaultValue": "[resourceGroup().location]",
+ "metadata": {
+ "description": "Specify the location of the resources."
+ }
+ },
+ "CommitmentTier": {
+ "type": "int",
+ "allowedValues": [
+ 100,
+ 200,
+ 300,
+ 400,
+ 500,
+ 1000,
+ 2000,
+ 5000
+ ],
+ "metadata": {
+ "description": "Specify the capacity reservation value."
+ }
+ },
+ "billingType": {
+ "type": "string",
+ "allowedValues": [
+ "Cluster",
+ "Workspaces"
+ ],
+ "metadata": {
+ "description": "Specify the billing type settings. Can be 'Cluster' (default) or 'Workspaces' for proportional billing on workspaces."
+ }
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.OperationalInsights/clusters",
+ "apiVersion": "2021-06-01",
+ "name": "[parameters('clusterName')]",
+ "location": "[parameters('location')]",
+ "identity": {
+ "type": "SystemAssigned"
+ },
+ "sku": {
+ "name": "CapacityReservation",
+ "capacity": "[parameters('CommitmentTier')]"
+ },
+ "properties": {
+ "billingType": "[parameters('billingType')]"
+ }
+ }
+ ]
+}
+```
+
+**Parameter file**
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-08-01/deploymentParameters.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "clusterName": {
+ "value": "MyCluster"
+ },
+ "CommitmentTier": {
+ "value": 500
+ },
+ "billingType": {
+ "value": "Cluster"
+ }
+ }
+}
+```
+ ### Check cluster provisioning status
Send a GET request on the cluster resource and look at the *provisioningState* v
The managed identity service generates the *principalId* GUID when you create the cluster.
+#### [ARM template (Bicep)](#tab/bicep)
+
+N/A
+
+#### [ARM template (JSON)](#tab/json)
+
+N/A
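+
+To check the provisioning state from the command line, a minimal Azure CLI sketch (resource group and cluster names are placeholders):
+
+```azurecli
+az monitor log-analytics cluster show \
+  --resource-group my-resource-group \
+  --name my-cluster \
+  --query provisioningState --output tsv
+```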
+ ## Link a workspace to a cluster
Select your cluster from **Log Analytics dedicated clusters** menu in the Azure
:::image type="content" source="./media/logs-dedicated-cluster/linked-workspaces.png" alt-text="Screenshot for linking workspaces to a dedicated cluster in the Azure portal." lightbox="./media/logs-dedicated-cluster/linked-workspaces.png"::: -- #### [CLI](#tab/cli) > [!NOTE]
Content-type: application/json
202 (Accepted) and header. -
+#### [ARM template (Bicep)](#tab/bicep)
+
+N/A
+
+#### [ARM template (JSON)](#tab/json)
+
+N/A
+ ### Check workspace link status+ The workspace link operation can take up to 90 minutes to complete. You can check the status on both the linked workspaces and the cluster. When completed, the workspace resources will include `clusterResourceId` property under `features`, and the cluster will include linked workspaces under `associatedWorkspaces` section. When a cluster is configured with a customer managed key, data ingested to the workspaces after the link operation is complete will be stored encrypted with your key. - #### [Portal](#tab/azure-portal) On the **Overview** page for your dedicated cluster, select **JSON View**. The `associatedWorkspaces` section lists the workspaces linked to the cluster. :::image type="content" source="./media/logs-dedicated-cluster/associated-workspaces.png" alt-text="Screenshot for viewing associated workspaces for a dedicated cluster in the Azure portal." lightbox="./media/logs-dedicated-cluster/associated-workspaces.png"::: - #### [CLI](#tab/cli) ```azurecli
Authorization: Bearer <token>
} ``` -
+#### [ARM template (Bicep)](#tab/bicep)
+
+N/A
+#### [ARM template (JSON)](#tab/json)
+
+N/A
++ ## Change cluster properties
After you create your cluster resource and it's fully provisioned, you can edit
- **keyVaultProperties** - Contains the key in Azure Key Vault with the following parameters: *KeyVaultUri*, *KeyName*, *KeyVersion*. See [Update cluster with Key identifier details](../logs/customer-managed-keys.md#update-cluster-with-key-identifier-details). - **Identity** - The identity used to authenticate to your Key Vault. This can be System-assigned or User-assigned. - **billingType** - Billing attribution for the cluster resource and its data. Can be one of the following values:
- - **Cluster (default)**--The costs for your cluster are attributed to the cluster resource.
- - **Workspaces**--The costs for your cluster are attributed proportionately to the workspaces in the Cluster, with the cluster resource being billed some of the usage if the total ingested data for the day is under the commitment tier. See [Log Analytics Dedicated Clusters](./cost-logs.md#dedicated-clusters) to learn more about the cluster pricing model.
-
+ - **Cluster (default)** - The costs for your cluster are attributed to the cluster resource.
+ - **Workspaces** - The costs for your cluster are attributed proportionately to the workspaces in the Cluster, with the cluster resource being billed some of the usage if the total ingested data for the day is under the commitment tier. See [Log Analytics Dedicated Clusters](./cost-logs.md#dedicated-clusters) to learn more about the cluster pricing model.
>[!IMPORTANT] >Cluster update should not include both identity and key identifier details in the same operation. If you need to update both, the update should be in two consecutive operations.
+<!--
> [!NOTE] > The *billingType* property isn't supported in CLI.
+-->
-## Get all clusters in resource group
+#### [Portal](#tab/azure-portal)
+
+N/A
+
+#### [CLI](#tab/cli)
+
+The following sample updates the billing type.
+```azurecli
+az account set --subscription "cluster-subscription-id"
+
+az monitor log-analytics cluster update --resource-group "resource-group-name" --name "cluster-name" --billing-type {Cluster, Workspaces}
+```
+
+#### [PowerShell](#tab/powershell)
+
+The following sample updates the billing type.
+
+```powershell
+Select-AzSubscription "cluster-subscription-id"
+
+Update-AzOperationalInsightsCluster -ResourceGroupName "resource-group-name" -ClusterName "cluster-name" -BillingType "Workspaces"
+```
+
+#### [REST API](#tab/restapi)
+
+The following sample updates the billing type.
+
+*Call*
+
+```rest
+PATCH https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.OperationalInsights/clusters/<cluster-name>?api-version=2022-10-01
+Authorization: Bearer <token>
+Content-type: application/json
+
+{
+ "properties": {
+ "billingType": "Workspaces"
+ },
+ "location": "region"
+}
+```
+
+#### [ARM template (Bicep)](#tab/bicep)
+
+The following sample updates a Log Analytics cluster to use customer-managed key.
+
+```bicep
+@description('Specify the name of the Log Analytics cluster.')
+param clusterName string
+@description('Specify the location of the resources')
+param location string = resourceGroup().location
+@description('Specify the key vault name.')
+param keyVaultName string
+@description('Specify the key name.')
+param keyName string
+@description('Specify the key version. When empty, latest key version is used.')
+param keyVersion string
+var keyVaultUri = format('{0}{1}', keyVaultName, environment().suffixes.keyvaultDns)
+resource cluster 'Microsoft.OperationalInsights/clusters@2021-06-01' = {
+ name: clusterName
+ location: location
+ identity: {
+ type: 'SystemAssigned'
+ }
+ properties: {
+ keyVaultProperties: {
+ keyVaultUri: keyVaultUri
+ keyName: keyName
+ keyVersion: keyVersion
+ }
+ }
+}
+```
+
+**Parameter file**
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-08-01/deploymentParameters.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "clusterName": {
+ "value": "MyCluster"
+ },
+ "keyVaultUri": {
+ "value": "https://key-vault-name.vault.azure.net"
+ },
+ "keyName": {
+ "value": "MyKeyName"
+ },
+ "keyVersion": {
+ "value": ""
+ }
+ }
+}
+```
+
+#### [ARM template (JSON)](#tab/json)
+
+The following sample updates a Log Analytics cluster to use customer-managed key.
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "clusterName": {
+ "type": "string",
+ "metadata": {
+ "description": "Specify the name of the Log Analytics cluster."
+ }
+ },
+ "location": {
+ "type": "string",
+ "defaultValue": "[resourceGroup().location]",
+ "metadata": {
+ "description": "Specify the location of the resources"
+ }
+ },
+ "keyVaultName": {
+ "type": "string",
+ "metadata": {
+ "description": "Specify the key vault name."
+ }
+ },
+ "keyName": {
+ "type": "string",
+ "metadata": {
+ "description": "Specify the key name."
+ }
+ },
+ "keyVersion": {
+ "type": "string",
+ "metadata": {
+ "description": "Specify the key version. When empty, latest key version is used."
+ }
+ }
+ },
+ "variables": {
+ "keyVaultUri": "[format('{0}{1}', parameters('keyVaultName'), environment().suffixes.keyvaultDns)]"
+ },
+ "resources": [
+ {
+ "type": "Microsoft.OperationalInsights/clusters",
+ "apiVersion": "2021-06-01",
+ "name": "[parameters('clusterName')]",
+ "location": "[parameters('location')]",
+ "identity": {
+ "type": "SystemAssigned"
+ },
+ "properties": {
+ "keyVaultProperties": {
+ "keyVaultUri": "[variables('keyVaultUri')]",
+ "keyName": "[parameters('keyName')]",
+ "keyVersion": "[parameters('keyVersion')]"
+ }
+ }
+ }
+ ]
+}
+```
+
+**Parameter file**
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-08-01/deploymentParameters.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "clusterName": {
+ "value": "MyCluster"
+ },
+ "keyVaultUri": {
+ "value": "https://key-vault-name.vault.azure.net"
+ },
+ "keyName": {
+ "value": "MyKeyName"
+ },
+ "keyVersion": {
+ "value": ""
+ }
+ }
+}
+```
+++
+## Get all clusters in resource group
#### [Portal](#tab/azure-portal)
Authorization: Bearer <token>
} ``` -
+#### [ARM template (Bicep)](#tab/bicep)
+N/A
+#### [ARM template (JSON)](#tab/json)
+
+N/A
++ ## Get all clusters in subscription
From the **Log Analytics dedicated clusters** menu in the Azure portal, select t
:::image type="content" source="./media/logs-dedicated-cluster/subscription-clusters.png" alt-text="Screenshot for viewing all dedicated clusters in a subscription in the Azure portal." lightbox="./media/logs-dedicated-cluster/subscription-clusters.png"::: -- #### [CLI](#tab/cli) ```azurecli
Authorization: Bearer <token>
The same as for 'clusters in a resource group', but in subscription scope. -
+#### [ARM template (Bicep)](#tab/bicep)
+
+N/A
+
+#### [ARM template (JSON)](#tab/json)
+
+N/A
+ ## Update commitment tier in cluster
Content-type: application/json
} ``` ---
-### Update billingType in cluster
-
-The *billingType* property determines the billing attribution for the cluster and its data:
-- *Cluster* (default) -- billing is attributed to the Cluster resource-- *Workspaces* -- billing is attributed to linked workspaces proportionally. When data volume from all linked workspaces is below Commitment Tier level, the bill for the remaining volume is attributed to the cluster-
-#### [Portal](#tab/azure-portal)
+#### [ARM template (Bicep)](#tab/bicep)
N/A
-#### [CLI](#tab/cli)
-
-```azurecli
-az account set --subscription "cluster-subscription-id"
-
-az monitor log-analytics cluster update --resource-group "resource-group-name" --name "cluster-name" --billing-type {Cluster, Workspaces}
-```
-
-#### [PowerShell](#tab/powershell)
-
-```powershell
-Select-AzSubscription "cluster-subscription-id"
-
-Update-AzOperationalInsightsCluster -ResourceGroupName "resource-group-name" -ClusterName "cluster-name" -BillingType "Workspaces"
-```
-
-#### [REST API](#tab/restapi)
-
-*Call*
-
-```rest
-PATCH https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.OperationalInsights/clusters/<cluster-name>?api-version=2022-10-01
-Authorization: Bearer <token>
-Content-type: application/json
+#### [ARM template (JSON)](#tab/json)
-{
- "properties": {
- "billingType": "Workspaces"
- },
- "location": "region"
-}
-```
+N/A
Select your cluster from **Log Analytics dedicated clusters** menu in the Azure
:::image type="content" source="./media/logs-dedicated-cluster/unlink-workspace.png" alt-text="Screenshot for unlinking a workspace from a dedicated cluster in the Azure portal." lightbox="./media/logs-dedicated-cluster/unlink-workspace.png"::: - #### [CLI](#tab/cli) ```azurecli
Remove-AzOperationalInsightsLinkedService -ResourceGroupName "resource-group-nam
DELETE https://management.azure.com/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{workspaceName}/linkedServices/{linkedServiceName}?api-version=2020-08-01 ``` -
+#### [ARM template (Bicep)](#tab/bicep)
+
+N/A
+
+#### [ARM template (JSON)](#tab/json)
+N/A
++ ## Delete cluster
Authorization: Bearer <token>
200 OK -
+#### [ARM template (Bicep)](#tab/bicep)
+
+N/A
+#### [ARM template (JSON)](#tab/json)
+N/A
++ ## Limits and constraints
Authorization: Bearer <token>
## Next steps -- Learn about [Log Analytics dedicated cluster billing](cost-logs.md#dedicated-clusters)-- Learn about [proper design of Log Analytics workspaces](../logs/workspace-design.md)
+- Learn about [Log Analytics dedicated cluster billing](cost-logs.md#dedicated-clusters).
+- Learn about [proper design of Log Analytics workspaces](../logs/workspace-design.md).
+- Get other [sample templates for Azure Monitor](../resource-manager-samples.md).
azure-monitor Personal Data Mgmt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/personal-data-mgmt.md
Log Analytics is a data store where personal data is likely to be found. Applica
In this article, _log data_ refers to data sent to a Log Analytics workspace, while _application data_ refers to data collected by Application Insights. If you're using a workspace-based Application Insights resource, the information on log data applies. If you're using a classic Application Insights resource, the application data applies. ## Strategy for personal data handling
azure-monitor Resource Manager Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/resource-manager-cluster.md
- Title: Resource Manager template samples for Log Analytics clusters
-description: Sample Azure Resource Manager templates to deploy Log Analytics clusters.
--- Previously updated : 06/13/2022--
-# Resource Manager template samples for Log Analytics clusters in Azure Monitor
-
-This article includes sample [Azure Resource Manager templates](../../azure-resource-manager/templates/syntax.md) to create and configure Log Analytics clusters in Azure Monitor. Each sample includes a template file and a parameters file with sample values to provide to the template.
--
-## Template references
--- [Microsoft.OperationalInsights clusters](/azure/templates/microsoft.operationalinsights/2020-03-01-preview/clusters)-
-## Create a Log Analytics cluster
-
-The following sample creates a new empty Log Analytics cluster.
-
-### Template file
-
-# [Bicep](#tab/bicep)
-
-```bicep
-@description('Specify the name of the Log Analytics cluster.')
-param clusterName string
-
-@description('Specify the location of the resources.')
-param location string = resourceGroup().location
-
-@description('Specify the capacity reservation value.')
-@allowed([
- 100
- 200
- 300
- 400
- 500
- 1000
- 2000
- 5000
-])
-param CommitmentTier int
-
-@description('Specify the billing type settings. Can be \'Cluster\' (default) or \'Workspaces\' for proportional billing on workspaces.')
-@allowed([
- 'Cluster'
- 'Workspaces'
-])
-param billingType string
-
-resource cluster 'Microsoft.OperationalInsights/clusters@2021-06-01' = {
- name: clusterName
- location: location
- identity: {
- type: 'SystemAssigned'
- }
- sku: {
- name: 'CapacityReservation'
- capacity: CommitmentTier
- }
- properties: {
- billingType: billingType
- }
-}
-```
-
-# [JSON](#tab/json)
-
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "clusterName": {
- "type": "string",
- "metadata": {
- "description": "Specify the name of the Log Analytics cluster."
- }
- },
- "location": {
- "type": "string",
- "defaultValue": "[resourceGroup().location]",
- "metadata": {
- "description": "Specify the location of the resources."
- }
- },
- "CommitmentTier": {
- "type": "int",
- "allowedValues": [
- 100,
- 200,
- 300,
- 400,
- 500,
- 1000,
- 2000,
- 5000
- ],
- "metadata": {
- "description": "Specify the capacity reservation value."
- }
- },
- "billingType": {
- "type": "string",
- "allowedValues": [
- "Cluster",
- "Workspaces"
- ],
- "metadata": {
- "description": "Specify the billing type settings. Can be 'Cluster' (default) or 'Workspaces' for proportional billing on workspaces."
- }
- }
- },
- "resources": [
- {
- "type": "Microsoft.OperationalInsights/clusters",
- "apiVersion": "2021-06-01",
- "name": "[parameters('clusterName')]",
- "location": "[parameters('location')]",
- "identity": {
- "type": "SystemAssigned"
- },
- "sku": {
- "name": "CapacityReservation",
- "capacity": "[parameters('CommitmentTier')]"
- },
- "properties": {
- "billingType": "[parameters('billingType')]"
- }
- }
- ]
-}
-```
---
-### Parameter file
-
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2019-08-01/deploymentParameters.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "clusterName": {
- "value": "MyCluster"
- },
- "CommitmentTier": {
- "value": 500
- },
- "billingType": {
- "value": "Cluster"
- }
- }
-}
-```
-
-## Update a Log Analytics cluster
-
-The following sample updates a Log Analytics cluster to use customer-managed key.
-
-### Template file
-
-# [Bicep](#tab/bicep)
-
-```bicep
-@description('Specify the name of the Log Analytics cluster.')
-param clusterName string
-
-@description('Specify the location of the resources')
-param location string = resourceGroup().location
-
-@description('Specify the key vault name.')
-param keyVaultName string
-
-@description('Specify the key name.')
-param keyName string
-
-@description('Specify the key version. When empty, latest key version is used.')
-param keyVersion string
-
-var keyVaultUri = format('{0}{1}', keyVaultName, environment().suffixes.keyvaultDns)
-
-resource cluster 'Microsoft.OperationalInsights/clusters@2021-06-01' = {
- name: clusterName
- location: location
- identity: {
- type: 'SystemAssigned'
- }
- properties: {
- keyVaultProperties: {
- keyVaultUri: keyVaultUri
- keyName: keyName
- keyVersion: keyVersion
- }
- }
-}
-```
-
-# [JSON](#tab/json)
-
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "clusterName": {
- "type": "string",
- "metadata": {
- "description": "Specify the name of the Log Analytics cluster."
- }
- },
- "location": {
- "type": "string",
- "defaultValue": "[resourceGroup().location]",
- "metadata": {
- "description": "Specify the location of the resources"
- }
- },
- "keyVaultName": {
- "type": "string",
- "metadata": {
- "description": "Specify the key vault name."
- }
- },
- "keyName": {
- "type": "string",
- "metadata": {
- "description": "Specify the key name."
- }
- },
- "keyVersion": {
- "type": "string",
- "metadata": {
- "description": "Specify the key version. When empty, latest key version is used."
- }
- }
- },
- "variables": {
- "keyVaultUri": "[format('{0}{1}', parameters('keyVaultName'), environment().suffixes.keyvaultDns)]"
- },
- "resources": [
- {
- "type": "Microsoft.OperationalInsights/clusters",
- "apiVersion": "2021-06-01",
- "name": "[parameters('clusterName')]",
- "location": "[parameters('location')]",
- "identity": {
- "type": "SystemAssigned"
- },
- "properties": {
- "keyVaultProperties": {
- "keyVaultUri": "[variables('keyVaultUri')]",
- "keyName": "[parameters('keyName')]",
- "keyVersion": "[parameters('keyVersion')]"
- }
- }
- }
- ]
-}
-```
---
-### Parameter file
-
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2019-08-01/deploymentParameters.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "clusterName": {
- "value": "MyCluster"
- },
- "keyVaultUri": {
- "value": "https://key-vault-name.vault.azure.net"
- },
- "keyName": {
- "value": "MyKeyName"
- },
- "keyVersion": {
- "value": ""
- }
- }
-}
-```
-
-## Next steps
--- [Get other sample templates for Azure Monitor](../resource-manager-samples.md).-- [Learn more about Log Analytics dedicated clusters](./logs-dedicated-clusters.md).-- [Learn more about agent data sources](../agents/agent-data-sources.md).
azure-monitor Summary Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/summary-rules.md
A summary rule lets you aggregate log data at a regular cadence and send the agg
This article describes how summary rules work and how to define and view summary rules, and provides some examples of the use and benefits of summary rules.
-## Permissions required
-
-| Action | Permissions required |
-| | |
-| Create or update summary rule | `Microsoft.Operationalinsights/workspaces/summarylogs/write` permissions to the Log Analytics workspace, as provided by the [Log Analytics Contributor built-in role](manage-access.md#log-analytics-contributor), for example |
-| Create or update destination table | `Microsoft.OperationalInsights/workspaces/tables/write` permissions to the Log Analytics workspace, as provided by the [Log Analytics Contributor built-in role](manage-access.md#log-analytics-contributor), for example |
-| Enable query in workspace | `Microsoft.OperationalInsights/workspaces/query/read` permissions to the Log Analytics workspace, as provided by the [Log Analytics Reader built-in role](manage-access.md#log-analytics-reader), for example |
-| Query all logs in workspace | `Microsoft.OperationalInsights/workspaces/query/*/read` permissions to the Log Analytics workspace, as provided by the [Log Analytics Reader built-in role](manage-access.md#log-analytics-reader), for example |
-| Query logs in table | `Microsoft.OperationalInsights/workspaces/query/<table>/read` permissions to the Log Analytics workspace, as provided by the [Log Analytics Reader built-in role](manage-access.md#log-analytics-reader), for example |
-| Query logs in table (table action) | `Microsoft.OperationalInsights/workspaces/tables/query/read` permissions to the Log Analytics workspace, as provided by the [Log Analytics Reader built-in role](manage-access.md#log-analytics-reader), for example |
-| Use queries encrypted in a customer-managed storage account|`Microsoft.Storage/storageAccounts/*` permissions to the storage account, as provided by the [Storage Account Contributor built-in role](/azure/role-based-access-control/built-in-roles/storage#storage-account-contributor), for example|
-- ## How summary rules work Summary rules perform batch processing directly in your Log Analytics workspace. The summary rule aggregates chunks of data, defined by bin size, based on a KQL query, and reingests the summarized results into a custom table with an [Analytics log plan](basic-logs-configure.md) in your Log Analytics workspace.
Here's the aggregated data that the summary rule sends to the destination table:
Instead of logging hundreds of similar entries within an hour, the destination table shows the count of each unique entry, as defined in the KQL query. Set the [Basic data plan](basic-logs-configure.md) on the `ContainerLogsV2` table for cheap retention of the raw data, and use the summarized data in the destination table for your analysis needs.
+## Permissions required
+
+| Action | Permissions required |
+| | |
+| Create or update summary rule | `Microsoft.Operationalinsights/workspaces/summarylogs/write` permissions to the Log Analytics workspace, as provided by the [Log Analytics Contributor built-in role](manage-access.md#log-analytics-contributor), for example |
+| Create or update destination table | `Microsoft.OperationalInsights/workspaces/tables/write` permissions to the Log Analytics workspace, as provided by the [Log Analytics Contributor built-in role](manage-access.md#log-analytics-contributor), for example |
+| Enable query in workspace | `Microsoft.OperationalInsights/workspaces/query/read` permissions to the Log Analytics workspace, as provided by the [Log Analytics Reader built-in role](manage-access.md#log-analytics-reader), for example |
+| Query all logs in workspace | `Microsoft.OperationalInsights/workspaces/query/*/read` permissions to the Log Analytics workspace, as provided by the [Log Analytics Reader built-in role](manage-access.md#log-analytics-reader), for example |
+| Query logs in table | `Microsoft.OperationalInsights/workspaces/query/<table>/read` permissions to the Log Analytics workspace, as provided by the [Log Analytics Reader built-in role](manage-access.md#log-analytics-reader), for example |
+| Query logs in table (table action) | `Microsoft.OperationalInsights/workspaces/tables/query/read` permissions to the Log Analytics workspace, as provided by the [Log Analytics Reader built-in role](manage-access.md#log-analytics-reader), for example |
+| Use queries encrypted in a customer-managed storage account|`Microsoft.Storage/storageAccounts/*` permissions to the storage account, as provided by the [Storage Account Contributor built-in role](/azure/role-based-access-control/built-in-roles/storage#storage-account-contributor), for example|
++ ## Restrictions and limitations | Category | Limit |
Instead of logging hundreds of similar entries within an hour, the destination t
## Pricing model
-The cost you incur for summary rules consists of the cost of the query on the source table and the cost of ingesting the results to the destination table:
+There's no direct cost for using summary rules. The cost you incur consists of the cost of the query on the source table and the cost of ingesting the results to the destination table:
| Source table plan | Query cost | Query results ingestion cost | | | | |
For more information, see [Azure Monitor pricing](https://azure.microsoft.com/pr
## Create or update a summary rule
-Before you create a rule, experiment with the query in [Log Analytics](log-analytics-overview.md). Verify that the query doesn't reach or near the query limit. Check that the query produces the intended schema and expected results. If the query is close to the query limits, consider using a smaller `binSize` to process less data per bin. You can also modify the query to return fewer records or remove fields with higher volume.
+Before you create a rule, experiment with the query in [Log Analytics](log-analytics-overview.md). Verify that the query doesn't reach or near the query limit. Check that the query produces the intended schema and expected results. If the query is close to the query limits, consider using a smaller `binSize` to process less data per bin. You can also modify the query to return fewer records or remove fields with higher volume.
+
+> [!NOTE]
+> Summary rules are most beneficial, in terms of cost and of consuming the results, when the summarized output is significantly smaller than the source data. For example, when the results volume is 0.01% of the source volume or less.
When you update a query and remove output fields from the results set, Azure Monitor doesn't automatically remove the columns from the destination table. You need to [delete columns from your table](create-custom-table.md#add-or-delete-a-custom-column) manually.
If you don't need the summary results in the destination table, delete the rule
The destination table schema is defined when you create or update a summary rule. If the query in the summary rule includes operators that allow output schema expansion based on incoming data - for example, if the query uses the `arg_max(expression, *)` function - Azure Monitor doesn't add new columns to the destination table after you create or update the summary rule, and the output data that requires these columns will be dropped. To add the new fields to the destination table, [update the summary rule](#create-or-update-a-summary-rule) or [add a column to your table manually](create-custom-table.md#add-or-delete-a-custom-column).
-### Deleted data remains in workspace, subject to retention period
+### Data for removed columns remains in the workspace, subject to the retention period
-When you [delete columns or a custom log table](create-custom-table.md), data remains in the workspace and is subjected to the [retention period](data-retention-archive.md) defined on the table or workspace. During the retention period, if you create a table with the same name and fields, Azure Monitor recreates the table with the old data. To delete old data, [update the table retention period](/rest/api/loganalytics/tables/update) with the minimum retention supported (four days) and then delete the table.
+When you remove columns from the query, the columns and their data remain in the destination table and are subject to the [retention period](data-retention-archive.md) defined on the table or workspace. If you don't need the removed columns in the destination table, [update the schema and delete the columns](create-custom-table.md#add-or-delete-a-custom-column) accordingly. During the retention period, if you add columns with the same name, the old data that hasn't reached the retention limit shows up again.
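+
+If you want data for removed columns to age out sooner, you can shorten the table's interactive retention. A minimal Azure CLI sketch (resource group, workspace, and table names are placeholders; four days is the minimum supported table retention):
+
+```azurecli
+az monitor log-analytics workspace table update \
+  --resource-group my-resource-group \
+  --workspace-name my-workspace \
+  --name MySummaryResults_CL \
+  --retention-time 4
+```
+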
## Related content
azure-monitor Profiler Cloudservice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-cloudservice.md
Deploy your service with the new Diagnostics configuration. Application Insights
> [!div class="nextstepaction"] > [Generate load and view Profiler traces](./profiler-data.md)
azure-monitor Profiler Servicefabric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-servicefabric.md
After you enable Application Insights, redeploy your application.
> [!div class="nextstepaction"] > [Generate load and view Profiler traces](./profiler-data.md)
azure-monitor Profiler Trackrequests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-trackrequests.md
To manually track requests:
} ``` ## Next steps
azure-monitor Profiler Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-vm.md
# Enable Profiler for web apps on an Azure virtual machine In this article, you learn how to run Application Insights Profiler on your Azure virtual machine (VM) or Azure virtual machine scale set via three different methods:
azure-monitor Roles Permissions Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/roles-permissions-security.md
New-AzRoleDefinition -Role $role
## Assign a role To assign a role, see [Assign Azure roles using Azure PowerShell](../role-based-access-control/role-assignments-powershell.md).
azure-monitor Snapshot Debugger Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/snapshot-debugger/snapshot-debugger-troubleshoot.md
If that doesn't solve the problem, then refer to the following manual troublesho
Make sure you're using the correct instrumentation key in your published application. Usually, the instrumentation key is read from the *ApplicationInsights.config* file. Verify the value is the same as the instrumentation key for the Application Insights resource that you see in the portal. ## <a id="SSL"></a>Check TLS/SSL client settings (ASP.NET)
azure-monitor Snapshot Debugger Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/snapshot-debugger/snapshot-debugger-vm.md
internal class LoggerExample
> [!NOTE] > By default, the Application Insights Logger (`ApplicationInsightsLoggerProvider`) forwards exceptions to the Snapshot Debugger via `TelemetryClient.TrackException`. This behavior is controlled via the `TrackExceptionsAsExceptionTelemetry` property on the `ApplicationInsightsLoggerOptions` class. If you set `TrackExceptionsAsExceptionTelemetry` to `false` when configuring the Application Insights Logger, then the preceding example will not trigger the Snapshot Debugger. In this case, modify your code to call `TrackException` manually. ## Next steps
azure-monitor Snapshot Debugger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/snapshot-debugger/snapshot-debugger.md
This section contains the release notes for the `Microsoft.ApplicationInsights.S
For bug reports and feedback, [open an issue on GitHub](https://github.com/microsoft/ApplicationInsights-SnapshotCollector). ### [1.4.6](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.4.6) A point release to address a regression when using .NET 8 applications.
azure-monitor Vminsights Enable Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-enable-overview.md
To provide accurate and efficient troubleshooting capabilities, the Map feature
For more information about data collection and usage, see the [Microsoft Online Services Privacy Statement](https://go.microsoft.com/fwlink/?LinkId=512132). ## Next steps
azure-netapp-files Azure Netapp Files Quickstart Set Up Account Create Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-quickstart-set-up-account-create-volumes.md
Use the Azure portal, PowerShell, or the Azure CLI to [register for NetApp Resou
# [Template](#tab/template) The following code snippet shows how to create a NetApp account in an Azure Resource Manager template (ARM template), using the [Microsoft.NetApp/netAppAccounts](/azure/templates/microsoft.netapp/netappaccounts) resource. To run the code, download the [full ARM template](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.netapp/anf-nfs-volume/azuredeploy.json) from our GitHub repo.
The following code snippet shows how to create a NetApp account in an Azure Reso
# [Template](#tab/template)
-<!-- [!INCLUDE [About Azure Resource Manager](../../includes/resource-manager-quickstart-introduction.md)] -->
+<!-- [!INCLUDE [About Azure Resource Manager](~/reusable-content/ce-skilling/azure/includes/resource-manager-quickstart-introduction.md)] -->
The following code snippet shows how to create a capacity pool in an Azure Resource Manager template (ARM template), using the [Microsoft.NetApp/netAppAccounts/capacityPools](/azure/templates/microsoft.netapp/netappaccounts/capacitypools) resource. To run the code, download the [full ARM template](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.netapp/anf-nfs-volume/azuredeploy.json) from our GitHub repo.
The following code snippet shows how to create a capacity pool in an Azure Resou
# [Template](#tab/template)
-<!-- [!INCLUDE [About Azure Resource Manager](../../includes/resource-manager-quickstart-introduction.md)] -->
+<!-- [!INCLUDE [About Azure Resource Manager](~/reusable-content/ce-skilling/azure/includes/resource-manager-quickstart-introduction.md)] -->
The following code snippets show how to set up a VNet and create an Azure NetApp Files volume in an Azure Resource Manager template (ARM template). VNet setup uses the [Microsoft.Network/virtualNetworks](/azure/templates/Microsoft.Network/virtualNetworks) resource. Volume creation uses the [Microsoft.NetApp/netAppAccounts/capacityPools/volumes](/azure/templates/microsoft.netapp/netappaccounts/capacitypools/volumes) resource. To run the code, download the [full ARM template](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.netapp/anf-nfs-volume/azuredeploy.json) from our GitHub repo.
azure-portal Quick Create Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/quick-create-bicep.md
Last updated 12/11/2023
A [dashboard](azure-portal-dashboards.md) in the Azure portal is a focused and organized view of your cloud resources. This quickstart shows how to deploy a Bicep file to create a dashboard. The example dashboard shows the performance of a virtual machine (VM), along with some static information and links. ## Prerequisites
azure-portal Quick Create Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/quick-create-template.md
Last updated 12/11/2023
A [dashboard](azure-portal-dashboards.md) in the Azure portal is a focused and organized view of your cloud resources. This quickstart shows how to deploy an Azure Resource Manager template (ARM template) to create a dashboard. The example dashboard shows the performance of a virtual machine (VM), along with some static information and links. If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal, where you can edit the details (such as the VM used in the dashboard) before you deploy.
azure-portal Quickstart Portal Dashboard Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/quickstart-portal-dashboard-powershell.md
A [dashboard](azure-portal-dashboards.md) in the Azure portal is a focused and o
- If you choose to use PowerShell locally, this article requires that you install the Az PowerShell module and connect to your Azure account using the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet. For more information about installing the Az PowerShell module, see [Install Azure PowerShell](/powershell/azure/install-azure-powershell). ## Choose a specific Azure subscription
azure-portal Set Preferences https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/set-preferences.md
Information about your custom settings is stored in Azure. You can delete the fo
It's a good idea to export and review your settings before you delete them, as described in the previous section. Rebuilding [dashboards](azure-portal-dashboards.md) or redoing custom settings can be time-consuming. To delete your portal settings, select **Delete all settings and private dashboards** from the top of **My information**. You'll be prompted to confirm the deletion. When you do so, all settings customizations will return to the default settings, and all of your private dashboards will be lost.
azure-resource-manager Bicep Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-config.md
Title: Bicep config file
description: Describes the configuration file for your Bicep deployments Previously updated : 06/03/2024 Last updated : 06/27/2024 # Configure your Bicep environment
The [Bicep linter](linter.md) checks Bicep files for syntax errors and best prac
You can enable experimental features by adding the following section to your `bicepconfig.json` file.
-Here's an example of enabling features 'compileTimeImports' and 'userDefinedFunctions`.
+Here's an example of enabling features 'assertions' and 'testFramework`.
```json {
azure-resource-manager Bicep Functions Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions-deployment.md
Title: Bicep functions - deployment
description: Describes the functions to use in a Bicep file to retrieve deployment information. Previously updated : 03/20/2024 Last updated : 06/26/2024 # Deployment functions for Bicep
The preceding example returns the following object:
`environment()`
-Returns information about the Azure environment used for deployment.
+Returns information about the Azure environment used for deployment. The `environment()` function is not aware of resource configurations. It can only return a single default DNS suffix for each resource type.
Namespace: [az](bicep-functions.md#namespaces-for-functions).
azure-resource-manager Deployment Stacks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/deployment-stacks.md
To create a deployment stack at the management group scope:
# [PowerShell](#tab/azure-powershell) ```azurepowershell
-New-AzManagmentGroupDeploymentStack `
+New-AzManagementGroupDeploymentStack `
-Name "<deployment-stack-name>" ` -Location "<location>" ` -TemplateFile "<bicep-file-name>" `
To update a deployment stack at the management group scope:
# [PowerShell](#tab/azure-powershell) ```azurepowershell
-Set-AzManagmentGroupDeploymentStack `
+Set-AzManagementGroupDeploymentStack `
-Name "<deployment-stack-name>" ` -Location "<location>" ` -TemplateFile "<bicep-file-name>" `
To apply deny settings at the management group scope:
# [PowerShell](#tab/azure-powershell) ```azurepowershell
-New-AzManagmentGroupDeploymentStack `
+New-AzManagementGroupDeploymentStack `
-Name "<deployment-stack-name>" ` -Location "<location>" ` -TemplateFile "<bicep-file-name>" `
To export a deployment stack at the management group scope:
# [PowerShell](#tab/azure-powershell) ```azurepowershell
-Save-AzManagmentGroupDeploymentStack `
+Save-AzManagementGroupDeploymentStack `
-Name "<deployment-stack-name>" ` -ManagementGroupId "<management-group-id>" ```
azure-resource-manager Linter Rule No Deployments Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/linter-rule-no-deployments-resources.md
resource storageAccount 'Microsoft.Storage/storageAccounts@2023-01-01' = {
} ```
-Additionally, you can also refence ARM templates using the [module](./modules.md) statement.
+Additionally, you can also reference ARM templates using the [module](./modules.md) statement.
_main.bicep_:
azure-resource-manager Publish Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/publish-managed-identity.md
When you link the deployment of the managed application to existing resources, b
"type": "Microsoft.Common.TextBox", "label": "Network interface resource ID", "defaultValue": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/testRG/providers/Microsoft.Network/networkInterfaces/existingnetworkinterface",
- "toolTip": "Must represent the identity as an Azure Resource Manager resource identifer format ex. /subscriptions/sub1/resourcegroups/myGroup/providers/Microsoft.Network/networkInterfaces/networkinterface1",
+ "toolTip": "Must represent the identity as an Azure Resource Manager resource identifier format ex. /subscriptions/sub1/resourcegroups/myGroup/providers/Microsoft.Network/networkInterfaces/networkinterface1",
"visible": true }, {
When you link the deployment of the managed application to existing resources, b
"type": "Microsoft.Common.TextBox", "label": "User-assigned managed identity resource ID", "defaultValue": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/testRG/providers/Microsoft.ManagedIdentity/userassignedidentites/myuserassignedidentity",
- "toolTip": "Must represent the identity as an Azure Resource Manager resource identifer format ex. /subscriptions/sub1/resourcegroups/myGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/identity1",
+ "toolTip": "Must represent the identity as an Azure Resource Manager resource identifier format ex. /subscriptions/sub1/resourcegroups/myGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/identity1",
"visible": true } ]
azure-resource-manager Azure Subscription Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/azure-subscription-service-limits.md
Pricing tiers determine the capacity and limits of your search service. Tiers in
**Limits per subscription** **Limits per search service** To learn more about limits on a more granular level, such as document size, queries per second, keys, requests, and responses, see [Service limits in Azure AI Search](../../search/search-limits-quotas-capacity.md).
The following table details the features and limits of the Basic, Standard, and
## Key Vault limits ## Managed identity limits
azure-resource-manager Manage Resource Groups Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/manage-resource-groups-portal.md
Last updated 03/19/2024
Learn how to use the [Azure portal](https://portal.azure.com) with [Azure Resource Manager](overview.md) to manage your Azure resource groups. For managing Azure resources, see [Manage Azure resources by using the Azure portal](manage-resources-portal.md). ## What is a resource group
azure-resource-manager Manage Resources Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/manage-resources-portal.md
Last updated 03/19/2024
Learn how to use the [Azure portal](https://portal.azure.com) with [Azure Resource Manager](overview.md) to manage your Azure resources. For managing resource groups, see [Manage Azure resource groups by using the Azure portal](manage-resource-groups-portal.md). ## Deploy resources to a resource group
azure-resource-manager Manage Resources Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/manage-resources-rest.md
Learn how to use the REST API for [Azure Resource Manager](overview.md) to manag
## Obtain an access token To make a REST API call to Azure, you first need to obtain an access token. Include this access token in the headers of your Azure REST API calls using the "Authorization" header and setting the value to "Bearer {access-token}".
-If you need to programatically retrieve new tokens as part of your application, you can obtain an access token by [Registering your client application with Microsoft Entra ID](/rest/api/azure/#register-your-client-application-with-azure-ad).
+If you need to programmatically retrieve new tokens as part of your application, you can obtain an access token by [Registering your client application with Microsoft Entra ID](/rest/api/azure/#register-your-client-application-with-azure-ad).
If you are getting started and want to test Azure REST APIs using your individual token, you can retrieve your current access token quickly with either Azure PowerShell or Azure CLI.
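For example, a minimal Azure CLI sketch that prints a bearer token you can paste into the `Authorization` header:

```azurecli
az account get-access-token --query accessToken --output tsv
```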
azure-resource-manager Move Support Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/move-support-resources.md
Title: Move operation support by resource type description: Lists the Azure resource types that can be moved to a new resource group, subscription, or region. Previously updated : 06/13/2024 Last updated : 06/27/2024 # Move operation support for resources
Before starting your move operation, review the [checklist](./move-resource-grou
> | trafficmanagerusermetricskeys | No | No | No | > | virtualhubs | No | No | No | > | virtualnetworkgateways | No| No | No |
-> | virtualnetworks | **Yes** | **Yes** | No |
+> | virtualnetworks | **Yes** | **Yes** | **Yes** |
> | virtualnetworktaps | No | No | No | > | virtualrouters | **Yes** | **Yes** | No | > | virtualwans | No | No |
azure-resource-manager Request Limits And Throttling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/request-limits-and-throttling.md
Title: Request limits and throttling description: Describes how to use throttling with Azure Resource Manager requests when subscription limits are reached. Previously updated : 03/15/2024 Last updated : 06/27/2024
msrest.http_logger : 'x-ms-ratelimit-remaining-subscription-writes': '1199'
## Next steps
-* For a complete PowerShell example, see [Check Resource Manager Limits for a Subscription](https://github.com/Microsoft/csa-misc-utils/tree/master/psh-GetArmLimitsViaAPI).
* For more information about limits and quotas, see [Azure subscription and service limits, quotas, and constraints](../../azure-resource-manager/management/azure-subscription-service-limits.md). * To learn about handling asynchronous REST requests, see [Track asynchronous Azure operations](async-operations.md).
azure-resource-manager Resource Manager Personal Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/resource-manager-personal-data.md
Last updated 03/19/2024
To avoid exposing sensitive information, delete any personal information you may have provided in deployments, resource groups, or tags. Azure Resource Manager provides operations that let you manage personal data you may have provided in deployments, resource groups, or tags. ## Delete personal data in deployment history
azure-resource-manager Resource Name Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/resource-name-rules.md
Title: Resource naming restrictions description: Shows the rules and restrictions for naming Azure resources. Previously updated : 05/20/2024 Last updated : 06/26/2024 # Naming rules and restrictions for Azure resources
In the following tables, the term alphanumeric refers to:
> | Entity | Scope | Length | Valid Characters | > | | | | | > | locks | scope of assignment | 1-90 | Alphanumerics, periods, underscores, hyphens, and parenthesis.<br><br>Can't end in period. |
-> | policyAssignments | scope of assignment | 1-128 display name<br><br>1-64 resource name<br><br>1-24 resource name at management group scope | Display name can contain any characters.<br><br>Resource name can't use:<br>`#<>*%&:\?.+/` or control characters. <br><br>Can't end with period or space. |
-> | policyDefinitions | scope of definition | 1-128 display name<br><br>1-64 resource name | Display name can contain any characters.<br><br>Resource name can't use:<br>`#<>*%&:\?+/` or control characters. <br><br>Can't end with period or space. |
-> | policyExemptions | scope of exemption | 1-128 display name<br><br>1-64 resource name | Display name can contain any characters.<br><br>Resource name can't use:<br>`#<>*%&:\?.+/` or control characters. <br><br>Can't end with period or space. |
-> | policySetDefinitions | scope of definition | 1-128 display name<br><br>1-64 resource name | Display name can contain any characters.<br><br>Resource name can't use:<br>`#<>*%&:\?.+/` or control characters. <br><br>Can't end with period or space. |
+> | policyAssignments | scope of assignment | 1-128 display name<br><br>1-64 resource name<br><br>1-24 resource name at management group scope | Display name can contain any characters.<br><br>Resource name can't use:<br>`#<>%&:\?/` or control characters. <br><br>Can't end with period or space. |
+> | policyDefinitions | scope of definition | 1-128 display name<br><br>1-64 resource name | Display name can contain any characters.<br><br>Resource name can't use:<br>`#<>%&:\?/` or control characters. <br><br>Can't end with period or space. |
+> | policyExemptions | scope of exemption | 1-128 display name<br><br>1-64 resource name | Display name can contain any characters.<br><br>Resource name can't use:<br>`#<>%&:\?/` or control characters. <br><br>Can't end with period or space. |
+> | policySetDefinitions | scope of definition | 1-128 display name<br><br>1-64 resource name | Display name can contain any characters.<br><br>Resource name can't use:<br>`#<>%&:\?/` or control characters. <br><br>Can't end with period or space. |
> | roleAssignments | tenant | 36 | Must be a globally unique identifier (GUID). | > | roleDefinitions | tenant | 36 | Must be a globally unique identifier (GUID). |
azure-resource-manager Tag Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/tag-resources.md
Resource tags support all cost-accruing services. To ensure that cost-accruing s
> > Tag values are case-sensitive. ## Required access
azure-resource-manager Tag Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/tag-support.md
To get the same data as a file of comma-separated values, download [tag-support.
> | registries / models / versions | No | No | > | virtualclusters | Yes | Yes | > | workspaces | Yes | Yes |
-> | workspaces / batchEndpoints | Yes | No |
+> | workspaces / batchEndpoints | Yes | Yes |
> | workspaces / batchEndpoints / deployments | Yes | Yes |
-> | workspaces / batchEndpoints / deployments / jobs | No | No |
-> | workspaces / batchEndpoints / jobs | No | No |
+> | workspaces / batchEndpoints / deployments / jobs | No | Yes |
+> | workspaces / batchEndpoints / jobs | No | Yes |
> | workspaces / codes | No | No | > | workspaces / codes / versions | No | No | > | workspaces / components | No | No |
To get the same data as a file of comma-separated values, download [tag-support.
> | workspaces / schedules | No | No | > | workspaces / services | No | No |
-> [!NOTE]
-> Workspace tags don't propagate to compute clusters and compute instances. It is not supported with tracking cost at cluster/batch endpoint level.
- ## Microsoft.Maintenance > [!div class="mx-tableFixed"]
azure-resource-manager Template Functions Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-functions-deployment.md
Title: Template functions - deployment
description: Describes the functions to use in an Azure Resource Manager template (ARM template) to retrieve deployment information. Previously updated : 08/22/2023 Last updated : 06/26/2024 # Deployment functions for ARM templates
For a subscription deployment, the following example returns a deployment object
`environment()`
-Returns information about the Azure environment used for deployment.
+Returns information about the Azure environment used for deployment. The `environment()` function is not aware of resource configurations. It can only return a single default DNS suffix for each resource type.
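
For example, here's a minimal sketch of an ARM template output that reads the default storage DNS suffix; the returned value is the environment-wide default (such as `core.windows.net` in global Azure), regardless of how an individual resource is configured:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [],
  "outputs": {
    "storageSuffix": {
      "type": "string",
      "value": "[environment().suffixes.storage]"
    }
  }
}
```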
In Bicep, use the [environment](../bicep/bicep-functions-deployment.md#environment) function.
azure-resource-manager Common Deployment Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/troubleshooting/common-deployment-errors.md
If your error code isn't listed, submit a GitHub issue. On the right side of the
| MissingSubscriptionRegistration | Register your subscription with the resource provider. | [Resolve registration](error-register-resource-provider.md) | | NoRegisteredProviderFound | Check resource provider registration status. | [Resolve registration](error-register-resource-provider.md) | | NotFound | You might be attempting to deploy a dependent resource in parallel with a parent resource. Check if you need to add a dependency. | [Resolve dependencies](error-not-found.md) |
-| OperationNotAllowed | There can be several reasons for this error message.<br><br>1. The deployment is attempting an operation which is not allowed on spcecified SKU.<br><br>2. The deployment is attempting an operation that exceeds the quota for the subscription, resource group, or region. If possible, revise your deployment to stay within the quotas. Otherwise, consider requesting a change to your quotas. | [Resolve quotas](error-resource-quota.md) |
+| OperationNotAllowed | There can be several reasons for this error message.<br><br>1. The deployment is attempting an operation that isn't allowed on the specified SKU.<br><br>2. The deployment is attempting an operation that exceeds the quota for the subscription, resource group, or region. If possible, revise your deployment to stay within the quotas. Otherwise, consider requesting a change to your quotas. | [Resolve quotas](error-resource-quota.md) |
| OperationNotAllowedOnVMImageAsVMsBeingProvisioned | You might be attempting to delete an image that is currently being used to provision VMs. You cannot delete an image that is being used by any virtual machine during the deployment process. Retry the image delete operation after the deployment of the VM is complete. | | | ParentResourceNotFound | Make sure a parent resource exists before creating the child resources. | [Resolve parent resource](error-parent-resource.md) | | PasswordTooLong | You might have selected a password with too many characters, or converted your password value to a secure string before passing it as a parameter. If the template includes a **secure string** parameter, you don't need to convert the value to a secure string. Provide the password value as text. | |
azure-resource-manager Quickstart Troubleshoot Arm Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/troubleshooting/quickstart-troubleshoot-arm-deployment.md
This quickstart describes how to troubleshoot Azure Resource Manager template (ARM template) JSON deployment errors. You'll set up a template with errors and learn how to fix the errors. There are three types of errors that are related to a deployment:
azure-signalr Signalr Cli Create Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/scripts/signalr-cli-create-service.md
This sample script creates a new Azure SignalR Service resource in a new resource group with a random name. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)] ## Sample script ### Run the script
This sample script creates a new Azure SignalR Service resource in a new resourc
## Clean up resources ```azurecli az group delete --name $resourceGroup
azure-signalr Signalr Cli Create With App Service Github Oauth https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/scripts/signalr-cli-create-with-app-service-github-oauth.md
This sample script creates a new Azure SignalR Service resource, which is used to push real-time content updates to clients. This script also adds a new Web App and App Service plan to host your ASP.NET Core Web App that uses the SignalR Service. The web app is configured with app settings to connect to the new SignalR service resource, and authenticate with [GitHub authentication](https://developer.github.com/v3/guides/basics-of-authentication/). The web app is also configured to use a local git repository deployment source. ## Sample scripts ### Create the SignalR service with an App service
This sample script creates a new Azure SignalR Service resource, which is used t
## Clean up resources ```azurecli az group delete --name $resourceGroup
azure-signalr Signalr Cli Create With App Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/scripts/signalr-cli-create-with-app-service.md
This sample script creates a new Azure SignalR Service resource, which is used to push real-time content updates to clients. This script also adds a new Web App and App Service plan to host your ASP.NET Core Web App that uses the SignalR Service. The web app is configured with an App Setting named *AzureSignalRConnectionString* to connect to the new SignalR service resource. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)] ## Sample script ### Run the script
This sample script creates a new Azure SignalR Service resource, which is used t
## Clean up resources ```azurecli az group delete --name $resourceGroup
azure-signalr Signalr Concept Authenticate Oauth https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-concept-authenticate-oauth.md
In this tutorial, you learn how to:
> - Add an authentication controller to support GitHub authentication > - Deploy your ASP.NET Core web app to Azure ## Prerequisites
azure-signalr Signalr Howto Event Grid Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-howto-event-grid-integration.md
Azure Event Grid is a fully managed event routing service that provides uniform event consumption using a pub-sub model. In this guide, you use the Azure CLI to create an Azure SignalR Service, subscribe to connection events, then deploy a sample web application to receive the events. Finally, you can connect and disconnect and see the event payload in the sample application. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
azure-signalr Signalr Quickstart Azure Signalr Service Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-quickstart-azure-signalr-service-arm-template.md
This quickstart walks you through the process of creating an Azure SignalR Service using an Azure Resource Manager (ARM) template. You can deploy the Azure SignalR Service through the Azure portal, PowerShell, or CLI. If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template opens in the Azure portal once you sign in.
azure-signalr Signalr Quickstart Azure Signalr Service Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-quickstart-azure-signalr-service-bicep.md
This quickstart describes how to use Bicep to create an Azure SignalR Service using Azure CLI or PowerShell. ## Prerequisites
azure-signalr Signalr Quickstart Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-quickstart-rest-api.md
This quickstart can be run on macOS, Windows, or Linux.
* [.NET Core SDK](https://dotnet.microsoft.com/download) * A text editor or code editor of your choice. Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.md) or [let us know](https://aka.ms/asrs/qsapi).
azure-vmware Azure Vmware Solution Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/azure-vmware-solution-known-issues.md
description: This article provides details about the known issues of Azure VMwar
Previously updated : 6/7/2024 Last updated : 6/27/2024 # Known issues: Azure VMware Solution
Refer to the table to find details about resolution dates or possible workaround
| After my private cloud NSX-T Data Center upgrade to version [3.2.2](https://docs.vmware.com/en/VMware-NSX/3.2.2/rn/vmware-nsxt-data-center-322-release-notes/https://docsupdatetracker.net/index.html), the NSX-T Manager **Capacity - Maximum Capacity Threshold** alarm is raised | 2023 | The alarm is raised because there are more than four clusters in the private cloud with the medium form factor for the NSX-T Data Center Unified Appliance. The form factor needs to be scaled up to large. Microsoft should detect this issue; however, you can also open a support request. | 2023 | | When I build a VMware HCX Service Mesh with the Enterprise license, the Replication Assisted vMotion Migration option isn't available. | 2023 | The default VMware HCX Compute Profile doesn't have the Replication Assisted vMotion Migration option enabled. From the Azure VMware Solution vSphere Client, select the VMware HCX option and edit the default Compute Profile to enable Replication Assisted vMotion Migration. | 2023 | | [VMSA-2023-023](https://www.vmware.com/security/advisories/VMSA-2023-0023.html) VMware vCenter Server Out-of-Bounds Write Vulnerability (CVE-2023-34048) publicized in October 2023 | October 2023 | A risk assessment of CVE-2023-34048 was conducted and it was determined that sufficient controls are in place within Azure VMware Solution to reduce the risk of CVE-2023-34048 from a CVSS Base Score of 9.8 to an adjusted Environmental Score of [6.8](https://www.first.org/cvss/calculator/3.1#CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H/MAC:L/MPR:H/MUI:R) or lower. Adjustments from the base score were possible due to the network isolation of the Azure VMware Solution vCenter Server (ports 2012, 2014, and 2020 are not exposed via any interactive network path) and multiple levels of authentication and authorization necessary to gain interactive access to the vCenter Server network segment. Azure VMware Solution is currently rolling out [7.0U3o](https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-vcenter-server-70u3o-release-notes/https://docsupdatetracker.net/index.html) to address this issue. | March 2024 - Resolved in [ESXi 7.0U3o](https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-vcenter-server-70u3o-release-notes/https://docsupdatetracker.net/index.html) |
-| The AV64 SKU currently supports RAID-1 FTT1, RAID-5 FTT1, and RAID-1 FTT2 vSAN storage policies. For more information, see [AV64 supported RAID configuration](introduction.md#av64-supported-raid-configuration) | Nov 2023 | Use AV36, AV36P, or AV52 SKUs when RAID-6 FTT2 or RAID-1 FTT3 storage policies are needed. | N/A |
+| The AV64 SKU currently supports RAID-1 FTT1, RAID-5 FTT1, and RAID-1 FTT2 vSAN storage policies. For more information, see [AV64 supported RAID configuration](introduction.md#av64-supported-raid-configuration) | Nov 2023 | The AV64 SKU now supports 7 Fault Domains and all vSAN storage policies. For more information, see [AV64 supported Azure regions](architecture-private-clouds.md#azure-region-availability-zone-az-to-sku-mapping-table) | June 2024 |
| VMware HCX version 4.8.0 Network Extension (NE) Appliance VMs running in High Availability (HA) mode may experience intermittent Standby to Active failover. For more information, see [HCX - NE appliances in HA mode experience intermittent failover (96352)](https://kb.vmware.com/s/article/96352) | Jan 2024 | Avoid upgrading to VMware HCX 4.8.0 if you are using NE appliances in a HA configuration. | Feb 2024 - Resolved in [VMware HCX 4.8.2](https://docs.vmware.com/en/VMware-HCX/4.8.2/rn/vmware-hcx-482-release-notes/https://docsupdatetracker.net/index.html) | | [VMSA-2024-0006](https://www.vmware.com/security/advisories/VMSA-2024-0006.html) ESXi Use-after-free and Out-of-bounds write vulnerability | March 2024 | Microsoft has confirmed the applicability of the vulnerabilities and is rolling out the provided VMware updates. | March 2024 - Resolved in [vCenter Server 7.0 U3o & ESXi 7.0 U3o](architecture-private-clouds.md#vmware-software-versions) | | When I run the VMware HCX Service Mesh Diagnostic wizard, all diagnostic tests will be passed (green check mark), yet failed probes will be reported. See [HCX - Service Mesh diagnostics test returns 2 failed probes](https://knowledge.broadcom.com/external/article?legacyId=96708) | 2024 | None, this will be fixed in 4.9+. | N/A | | [VMSA-2024-0011](https://support.broadcom.com/web/ecx/support-content-notification/-/external/content/SecurityAdvisories/0/24308) Out-of-bounds read/write vulnerability (CVE-2024-22273) | June 2024 | Microsoft has confirmed the applicability of the CVE-2024-22273 vulnerability and it will be addressed in the upcoming 8.0u2b Update. | July 2024 | | [VMSA-2024-0012](https://support.broadcom.com/web/ecx/support-content-notification/-/external/content/SecurityAdvisories/0/24453) Multiple Vulnerabilities in the DCERPC Protocol and Local Privilege Escalations | June 2024 | Microsoft, working with Broadcom, adjudicated the risk of these vulnerabilities at an adjusted Environmental Score of [6.8](https://www.first.org/cvss/calculator/3.1#CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H/MAC:L/MPR:H/MUI:R) or lower. Adjustments from the base score were possible due to the network isolation of the Azure VMware Solution vCenter Server (ports 2012, 2014, and 2020 are not exposed via any interactive network path) and multiple levels of authentication and authorization necessary to gain interactive access to the vCenter Server network segment. A plan is being put in place to address these vulnerabilities at a future date TBD. | N/A |
+| Zerto DR is not currently supported with the AV64 SKU. The AV64 SKU uses ESXi host secure boot and Zerto DR has not implemented a signed VIB for the ESXi install. | 2024 | Continue using the AV36, AV36P, and AV52 SKUs for Zerto DR. | N/A |
In this article, you learned about the current known issues with the Azure VMware Solution.
azure-vmware Deploy Zerto Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/deploy-zerto-disaster-recovery.md
# Deploy Zerto disaster recovery on Azure VMware Solution
+> [!IMPORTANT]
+> **Temporary pause on new onboarding for Zerto on Azure VMware Solution**
+>
+> Due to ongoing security enhancements and development work on the Linux version for Azure VMware Solution Run Command and migration activities, we are currently not onboarding new customers for Zerto on Azure VMware Solution. These efforts include transitioning to a Linux-based run command, meeting the security requirements to operate the Zerto Linux appliance, and migrating existing customers to the latest Zerto version. This pause will be in effect until August 6, 2024.
+>
+> Please note: Existing customers will continue to receive full support as usual. For further information regarding the timeline and future onboarding availability, please reach out to your Zerto account team.
+>
+> Thank you for your understanding and cooperation.
++
+> [!IMPORTANT]
+> The AV64 node type doesn't currently support Zerto Disaster Recovery. Contact your Zerto account team for more information and an estimate of when support will be available.
++ In this article, learn how to implement disaster recovery for on-premises VMware or Azure VMware Solution-based virtual machines (VMs). The solution in this article uses [Zerto disaster recovery](https://www.zerto.com/solutions/use-cases/disaster-recovery/). Instances of Zerto are deployed at both the protected and the recovery sites. Zerto is a disaster recovery solution designed to minimize downtime of VMs should a disaster occur. Zerto's platform is built on the foundation of Continuous Data Protection (CDP) that enables minimal or close to no data loss. The platform provides the level of protection wanted for many business-critical and mission-critical enterprise applications. Zerto also automates and orchestrates failover and failback to ensure minimal downtime in a disaster. Overall, Zerto simplifies management through automation and ensures fast and highly predictable recovery times.
azure-vmware Disaster Recovery Using Vmware Site Recovery Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/disaster-recovery-using-vmware-site-recovery-manager.md
Ensure you provide the remote user the VMware VRM administrator and VMware SRM a
> [!NOTE]
-> The current version of VMware Site Recovery Manager (SRM) in Azure VMware Solution is 8.5.0.3.
-
+> The current version of VMware Site Recovery Manager (SRM) in Azure VMware Solution is 8.7.0.3.
1. From the **Disaster Recovery Solution** drop-down, select **VMware Site Recovery Manager (SRM) – vSphere Replication**. :::image type="content" source="media/VMware-srm-vsphere-replication/disaster-recovery-solution-srm-add-on.png" alt-text="Screenshot showing the Disaster recovery tab under Add-ons with VMware Site Recovery Manager (SRM) - vSphere replication selected." border="true" lightbox="media/VMware-srm-vsphere-replication/disaster-recovery-solution-srm-add-on.png":::
azure-web-pubsub Howto Develop Create Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-develop-create-instance.md
The [Azure CLI](/cli/azure) is a set of commands used to create and manage Azure
## Create a resource using Bicep template ## Review the Bicep file
azure-web-pubsub Quickstart Bicep Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/quickstart-bicep-template.md
This quickstart describes how to use Bicep to create an Azure Web PubSub service using Azure CLI or PowerShell. ## Prerequisites
azure-web-pubsub Quickstart Cli Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/quickstart-cli-create.md
ms.devlang: azurecli
The [Azure CLI](/cli/azure) is a set of commands used to create and manage Azure resources. The Azure CLI is available across Azure services and is designed to get you working quickly with Azure, with an emphasis on automation. This quickstart shows you the options to create Azure Web PubSub instance with the Azure CLI. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
azure-web-pubsub Quickstart Cli Try https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/quickstart-cli-try.md
ms.devlang: azurecli
This quickstart shows you how to connect to the Azure Web PubSub instance and publish messages to the connected clients using the [Azure CLI](/cli/azure). [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
azure-web-pubsub Quickstart Live Demo https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/quickstart-live-demo.md
Last updated 11/08/2021
This quickstart shows you how to get started easily with a [Pub/Sub live demo](https://aka.ms/awps/quicktry). [!INCLUDE [create-instance-portal](includes/create-instance-portal.md)]
azure-web-pubsub Quickstart Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/quickstart-serverless.md
In this tutorial, you learn how to:
[!INCLUDE [create-instance-portal](includes/create-instance-portal.md)]
azure-web-pubsub Tutorial Build Chat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/tutorial-build-chat.md
In this tutorial, you learn how to:
> * Configure event handler settings for Azure Web PubSub > * Handle events in the app server and build a real-time chat app [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
azure-web-pubsub Tutorial Serverless Iot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/tutorial-serverless-iot.md
In this tutorial, you learn how to:
* The [Azure CLI](/cli/azure) to manage Azure resources. ## Create an IoT hub
azure-web-pubsub Tutorial Serverless Notification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/tutorial-serverless-notification.md
In this tutorial, you learn how to:
[!INCLUDE [create-instance-portal](includes/create-instance-portal.md)]
azure-web-pubsub Tutorial Subprotocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/tutorial-subprotocol.md
In this tutorial, you learn how to:
> * Generate the full URL to establish the WebSocket connection > * Publish messages between WebSocket clients using subprotocol [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
backup Azure Kubernetes Service Backup Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-kubernetes-service-backup-overview.md
Backup in AKS has two types of hooks:
- Backup hooks - Restore hooks
-## Modify resource while restoring backups to AKS cluster
-
-You can use the *Resource Modification* feature to modify backed-up Kubernetes resources during restore by specifying *JSON* patches as `configmap` deployed in the AKS cluster.
-
-### Create and apply a resource modifier configmap during restore
-
-To create and apply resource modification, follow these steps:
-
-1. Create resource modifiers configmap.
-
- You need to create one configmap in your preferred namespace from a *YAML* file that defined resource modifiers.
-
- **Example for creating command**:
-
- ```json
- version: v1
- resourceModifierRules:
- - conditions:
- groupResource: persistentvolumeclaims
- resourceNameRegex: "^mysql.*$"
- namespaces:
- - bar
- - foo
- labelSelector:
- matchLabels:
- foo: bar
- patches:
- - operation: replace
- path: "/spec/storageClassName"
- value: "premium"
- - operation: remove
- path: "/metadata/labels/test"
-
- ```
-
- - The above *configmap* applies the *JSON* patch to all the Persistent Volume Copies in the *namespaces* bar and *foo* with name that starts with `mysql` and `match label foo: bar`. The JSON patch replaces the `storageClassName` with `premium` and removes the label `test` from the Persistent Volume Copies.
- - Here, the *Namespace* is the original namespace of the backed-up resource, and not the new namespace where the resource is going to be restored.
- - You can specify multiple JSON patches for a particular resource. The patches are applied as per the order specified in the *configmap*. A subsequent patch is applied in order. If multiple patches are specified for the same path, the last patch overrides the previous patches.
- - You can specify multiple `resourceModifierRules` in the *configmap*. The rules are applied as per the order specified in the *configmap*.
--
-2. Creating a resource modifier reference in the restore configuration
-
- When you perform a restore operation, provide the *ConfigMap name* and the *Namespace* where it's deployed as part of restore configuration. These details need to be provided under **Resource Modifier Rules**.
-
- :::image type="content" source="./media/azure-kubernetes-service-backup-overview/resource-modifier-rules.png" alt-text="Screenshot shows the location to provide resource details." lightbox="./media/azure-kubernetes-service-backup-overview/resource-modifier-rules.png":::
--
- Operations supported by **Resource Modifier**
-
- - **Add**
-
- :::image type="content" source="./media/azure-kubernetes-service-backup-overview/add-resource-modifier.png" alt-text="Screenshot shows the addition of resource modifier. ":::
-
- - **Remove**
-
- :::image type="content" source="./media/azure-kubernetes-service-backup-overview/remove-resource-modifier.png" alt-text="Screenshot shows the option to remove resource.":::
-
- - **Replace**
-
- :::image type="content" source="./media/azure-kubernetes-service-backup-overview/replace-resource-modifier.png" alt-text="Screenshot shows the replacement option for resource modifier.":::
-
- - **Move**
- - **Copy**
-
- :::image type="content" source="./media/azure-kubernetes-service-backup-overview/copy-resource-modifier.png" alt-text="Screenshot shows the option to copy resource modifier.":::
-
- - **Test**
-
- You can use the **Test** operation to check if a particular value is present in the resource. If the value is present, the patch is applied. If the value isn't present, the patch isn't applied.
-
- :::image type="content" source="./media/azure-kubernetes-service-backup-overview/test-resource-modifier-value-present.png" alt-text="Screenshot shows the option to test if the resource value modifier is present.":::
-
-### JSON patch
-
-This *configmap* applies the JSON patch to all the deployments in the namespaces by default and `nginx` with the name that starts with `nginxdep`. The JSON patch updates the replica count to *12* for all such deployments.
--
-```json
-resourceModifierRules:
-- conditions:
-groupResource: deployments.apps
-resourceNameRegex: "^nginxdep.*$"
-namespaces:
-- default-- nginx
-patches:
-- operation: replace
-path: "/spec/replicas"
-value: "12"
-
-```
--- **JSON Merge patch**: This config map will apply the JSON Merge Patch to all the deployments in the namespaces default and nginx with the name starting with nginxdep. The JSON Merge Patch will add/update the label "app" with the value "nginx1".-
-```json
--
-version: v1
-resourceModifierRules:
- - conditions:
- groupResource: deployments.apps
- resourceNameRegex: "^nginxdep.*$"
- namespaces:
- - default
- - nginx
- mergePatches:
- - patchData: |
- {
- "metadata" : {
- "labels" : {
- "app" : "nginx1"
- }
- }
- }
--
-```
--- **Strategic Merge patch**: This config map will apply the Strategic Merge Patch to all the pods in the namespace default with the name starting with nginx. The Strategic Merge Patch will update the image of container nginx to mcr.microsoft.com/cbl-mariner/base/nginx:1.22-
-```json
-
-version: v1
-resourceModifierRules:
-- conditions:
- groupResource: pods
- resourceNameRegex: "^nginx.*$"
- namespaces:
- - default
- strategicPatches:
- - patchData: |
- {
- "spec": {
- "containers": [
- {
- "name": "nginx",
- "image": "mcr.microsoft.com/cbl-mariner/base/nginx:1.22"
- }
- ]
- }
- }
-
-```
- ### Backup hooks In a backup hook, you can configure the commands to run the hook before any custom action processing (pre-hooks), or after all custom actions are finished and any additional items specified by custom actions are backed up (post-hooks).
spec:
Learn [how to use hooks during AKS backup](azure-kubernetes-service-cluster-backup.md#use-hooks-during-aks-backup).
+ > [!NOTE]
+ > - During restore, the backup extension waits for the containers to come up and then executes the exec commands defined in the restore hooks on them.
+ > - If you restore to the same namespace that was backed up, the restore hooks aren't executed, because the extension only looks for newly spawned containers. This behavior applies regardless of whether the skip or patch policy is selected.
+++
+## Modify resource while restoring backups to AKS cluster
+
+You can use the *Resource Modification* feature to modify backed-up Kubernetes resources during restore by specifying *JSON* patches as `configmap` deployed in the AKS cluster.
+
+### Create and apply a resource modifier configmap during restore
+
+To create and apply resource modification, follow these steps:
+
+1. Create resource modifiers configmap.
+
+   You need to create one configmap in your preferred namespace from a *YAML* file that defines resource modifiers.
+
+   **Example of a resource modifier definition**:
+
+   ```yaml
+ version: v1
+ resourceModifierRules:
+ - conditions:
+ groupResource: persistentvolumeclaims
+ resourceNameRegex: "^mysql.*$"
+ namespaces:
+ - bar
+ - foo
+ labelSelector:
+ matchLabels:
+ foo: bar
+ patches:
+ - operation: replace
+ path: "/spec/storageClassName"
+ value: "premium"
+ - operation: remove
+ path: "/metadata/labels/test"
+ ```
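+
+   You can then create the configmap from the saved file with `kubectl`; a minimal sketch, assuming the definition is saved as `resourcemodifier.yaml` and the configmap name `resource-modifier-config` is illustrative:
+
+   ```bash
+   # Create the configmap holding the resource modifier rules in your preferred namespace
+   kubectl create configmap resource-modifier-config \
+     --from-file=resourcemodifier.yaml \
+     --namespace <namespace>
+   ```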
+
+   - The above *configmap* applies the *JSON* patch to all the persistent volume claims in the namespaces `bar` and `foo` with names that start with `mysql` and the label `foo: bar`. The JSON patch replaces the `storageClassName` with `premium` and removes the label `test` from those persistent volume claims.
+ - Here, the *Namespace* is the original namespace of the backed-up resource, and not the new namespace where the resource is going to be restored.
+   - You can specify multiple JSON patches for a particular resource. The patches are applied in the order specified in the *configmap*. If multiple patches are specified for the same path, the last patch overrides the previous patches.
+ - You can specify multiple `resourceModifierRules` in the *configmap*. The rules are applied as per the order specified in the *configmap*.
++
+2. Create a resource modifier reference in the restore configuration
+
+ When you perform a restore operation, provide the *ConfigMap name* and the *Namespace* where it's deployed as part of restore configuration. These details need to be provided under **Resource Modifier Rules**.
+
+ :::image type="content" source="./media/azure-kubernetes-service-backup-overview/resource-modifier-rules.png" alt-text="Screenshot shows the location to provide resource details." lightbox="./media/azure-kubernetes-service-backup-overview/resource-modifier-rules.png":::
++
+ ### Operations supported by **Resource Modifier**
+
+- **Add**
+
+   You can use the **Add** operation to add a new block to the resource JSON. In the example below, the operation adds new container details to the spec of a deployment.
+
+  ```yaml
+ version: v1
+ resourceModifierRules:
+ - conditions:
+ groupResource: deployments.apps
+ resourceNameRegex: "^test-.*$"
+ namespaces:
+ - bar
+ - foo
+ patches:
+ # Dealing with complex values by escaping the yaml
+ - operation: add
+ path: "/spec/template/spec/containers/0"
+ value: "{\"name\": \"nginx\", \"image\": \"nginx:1.14.2\", \"ports\": [{\"containerPort\": 80}]}"
+ ```
+
+
+- **Remove**
+
+   You can use the **Remove** operation to remove a key from the resource JSON. In the example below, the operation removes the label whose key is `test`.
+
+  ```yaml
+ version: v1
+ resourceModifierRules:
+ - conditions:
+ groupResource: persistentvolumeclaims
+ resourceNameRegex: "^mysql.*$"
+ namespaces:
+ - bar
+ - foo
+ labelSelector:
+ matchLabels:
+ foo: bar
+ patches:
+ - operation: remove
+ path: "/metadata/labels/test"
+ ```
+
+- **Replace**
+
+   You can use the **Replace** operation to replace the value at the specified path with an alternate one. In the example below, the operation replaces the `storageClassName` in the persistent volume claim with `premium`.
+
+  ```yaml
+ version: v1
+ resourceModifierRules:
+ - conditions:
+ groupResource: persistentvolumeclaims
+ resourceNameRegex: "^mysql.*$"
+ namespaces:
+ - bar
+ - foo
+ labelSelector:
+ matchLabels:
+ foo: bar
+ patches:
+ - operation: replace
+ path: "/spec/storageClassName"
+ value: "premium"
+ ```
+
+- **Copy**
+
+   You can use the **Copy** operation to copy a value from one path in the resource to another path.
+
+  ```yaml
+ version: v1
+ resourceModifierRules:
+ - conditions:
+ groupResource: deployments.apps
+ resourceNameRegex: "^test-.*$"
+ namespaces:
+ - bar
+ - foo
+ patches:
+ - operation: copy
+ from: "/spec/template/spec/containers/0"
+ path: "/spec/template/spec/containers/1"
+ ```
+
+- **Test**
+
+   You can use the **Test** operation to check if a particular value is present in the resource. If the value is present, the patch is applied; if it isn't, the patch isn't applied. In the example below, the operation checks whether a persistent volume claim has `premium` as its `storageClassName` and, if so, replaces it with `standard`.
+
+  ```yaml
+ version: v1
+ resourceModifierRules:
+ - conditions:
+ groupResource: persistentvolumeclaims
+ resourceNameRegex: ".*"
+ namespaces:
+ - bar
+ - foo
+ patches:
+ - operation: test
+ path: "/spec/storageClassName"
+ value: "premium"
+ - operation: replace
+ path: "/spec/storageClassName"
+ value: "standard"
+ ```
+
+- **JSON Patch**
+
+   This *configmap* applies the JSON patch to all the deployments in the namespaces `default` and `nginx` with names that start with `nginxdep`. The JSON patch updates the replica count to *12* for all such deployments.
+
+
+  ```yaml
+ version: v1
+ resourceModifierRules:
+ - conditions:
+ groupResource: deployments.apps
+ resourceNameRegex: "^nginxdep.*$"
+ namespaces:
+ - default
+ - nginx
+ patches:
+ - operation: replace
+ path: "/spec/replicas"
+ value: "12"
+ ```
+
+- **JSON Merge Patch**
+
+   This configmap applies the JSON merge patch to all the deployments in the namespaces `default` and `nginx` with names starting with `nginxdep`. The patch adds or updates the label `app` with the value `nginx1`.
+
+  ```yaml
+ version: v1
+ resourceModifierRules:
+ - conditions:
+ groupResource: deployments.apps
+ resourceNameRegex: "^nginxdep.*$"
+ namespaces:
+ - default
+ - nginx
+ mergePatches:
+ - patchData: |
+ {
+ "metadata" : {
+ "labels" : {
+ "app" : "nginx1"
+ }
+ }
+ }
+ ```
+
+- **Strategic Merge Patch**
+
+   This configmap applies the strategic merge patch to all the pods in the namespace `default` with names starting with `nginx`. The patch updates the image of the `nginx` container to `mcr.microsoft.com/cbl-mariner/base/nginx:1.22`.
+
+  ```yaml
+ version: v1
+ resourceModifierRules:
+ - conditions:
+ groupResource: pods
+ resourceNameRegex: "^nginx.*$"
+ namespaces:
+ - default
+ strategicPatches:
+ - patchData: |
+ {
+ "spec": {
+ "containers": [
+ {
+ "name": "nginx",
+ "image": "mcr.microsoft.com/cbl-mariner/base/nginx:1.22"
+ }
+ ]
+ }
+ }
+ ```
+ ## Which backup storage tier does AKS backup support? Azure Backup for AKS supports two storage tiers as backup datastores:
backup Azure Kubernetes Service Backup Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-kubernetes-service-backup-troubleshoot.md
These error codes appear due to issues on the Backup Extension installed in the
**Recommended action**: Verify the health of the extension by running the command `kubectl get pods -n dataprotection.microsoft`. If the pods aren't in a running state, increase the number of nodes in the cluster by *1* or increase the compute limits. Then wait for a few minutes and run the command again, which should change the state of the pods to *running*. If the issue persists, delete and reinstall the extension.
+### BackupPluginPodRestartedDuringBackupError
+
+**Cause**: The backup extension pod (`dataprotection-microsoft-kubernetes-agent`) in your AKS cluster is unstable because of insufficient CPU or memory resources on its current node, leading to out-of-memory (OOM) kill incidents. This can happen when the backup extension pod requests less compute than it needs.
+
+**Recommended action**: Increase the compute values allocated to this pod. The pod is then automatically provisioned on a different node within your AKS cluster that has ample compute resources available.
+
+The current compute values for this pod are:
+
+- `resources.requests.cpu`: 500m
+- `resources.requests.memory`: 128Mi
+
+First, increase the memory allocation to 512Mi by updating the `resources.requests.memory` parameter. If the issue persists after the memory increase, raise the `resources.requests.cpu` parameter to 900m. To update these values:
+
+1. Navigate to the AKS cluster blade in the Azure portal.
+2. Click on "Extensions+Applications" and select the "azure-aks-backup" extension.
+3. Update the configuration settings in the portal by adding the following key-value pairs:
+   - `resources.requests.cpu`: `900m`
+   - `resources.requests.memory`: `512Mi`
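+
+You can also update these settings from the Azure CLI; the following is a minimal sketch, assuming placeholder resource group and cluster names:
+
+```azurecli
+# Update the compute requests of the azure-aks-backup extension pod
+az k8s-extension update \
+  --resource-group <resource-group> \
+  --cluster-name <cluster-name> \
+  --cluster-type managedClusters \
+  --name azure-aks-backup \
+  --configuration-settings resources.requests.cpu=900m resources.requests.memory=512Mi
+```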
+ ### BackupPluginDeleteBackupOperationFailed **Cause**: The Backup extension should be running to delete the backups.
These error codes appear due to issues based on the Backup extension installed i
**Recommended action**: The error appears if the extension identity doesn't have the right permissions to access the storage account. This can happen when the AKS backup extension is installed for the first time while configuring a protection operation, because the granted permissions take time to propagate to the extension. As a workaround, wait an hour and retry the protection configuration. Otherwise, use the Azure portal or CLI to reassign the missing permission on the storage account.
+### UserErrorSnapshotResourceGroupHasLocks
+
+**Cause**: This error code appears when a Delete or Read Lock has been applied on the Snapshot Resource Group provided as input for Backup Extension.
+
+**Recommended action**: If you're configuring a new backup instance, use a resource group without a delete or read lock. If the backup instance is already configured, remove the lock from the snapshot resource group.
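+
+For example, you can list and remove locks on the snapshot resource group with the Azure CLI; a minimal sketch with placeholder names:
+
+```azurecli
+# List locks on the snapshot resource group, then delete the offending lock
+az lock list --resource-group <snapshot-resource-group> --output table
+az lock delete --name <lock-name> --resource-group <snapshot-resource-group>
+```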
+ ## Vaulted backup based errors
-This error code can appear while you enable AKS backup to store backups in a vault standard datastore.
+These error codes can appear while you enable AKS backup to store backups in a vault standard datastore.
### DppUserErrorVaultTierPolicyNotSupported
backup Azure Kubernetes Service Cluster Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-kubernetes-service-cluster-backup.md
To configure backups for AKS cluster:
### Backup configurations
-As part of the AKS backup capability, you can back up all cluster resources or specific cluster resources. You can use the filters that are available for backup configuration to choose the resources to back up. The defined backup configurations are referenced by the values for **Backup Instance Name**. You can use the following options to choose the **Namespaces** values to back up:
+Azure Backup for AKS allows you to define the application boundary within the AKS cluster that you want to back up. You can use the filters that are available within backup configurations to choose the resources to back up and also to run custom hooks. The defined backup configuration is referenced by the value for **Backup Instance Name**. The following filters are available to define your application boundary:
-- **All (including future Namespaces)**: This backs up all current and future values for **Namespaces** when the underlying cluster resources are backed up.-- **Choose from list**: Select the specific values for **Namespaces** in the AKS cluster to back up.
+1. **Select Namespaces to backup**: You can either select **All** to back up all existing and future namespaces in the cluster, or select **Choose from list** to pick specific namespaces for backup.
- To select specific cluster resources to back up, you can use labels that are attached to the resources to include the resources in the backup. Only the resources that have the labels that you enter are backed up. You can use multiple labels.
+ :::image type="content" source="./media/azure-kubernetes-service-cluster-backup/backup-instance-name.png" alt-text="Screenshot that shows how to select namespaces to include in the backup." lightbox="./media/azure-kubernetes-service-cluster-backup/backup-instance-name.png":::
+
+2. Expand **Additional Resource Settings** to see filters that you can use to choose cluster resources to back up. You can choose to back up resources based on the following categories:
+
+ - **Labels**: You can filter AKS resources by using [labels](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/) that you assign to types of resources. Enter labels in the form of key/value pairs. Combine multiple labels by using `AND` logic.
+
+ For example, if you enter the labels `env=prod;tier!=web`, the process selects resources that have a label with the `env` key and the `prod` value, and a label with the `tier` key for which the value isn't `web`.
+
+   - **API groups**: You can also include resources by providing the AKS API group and kind. For example, you can choose to back up AKS resources like Deployments. You can access the list of Kubernetes-defined API groups [here](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.30/).
+
+   - **Other options**: You can enable or disable backup for cluster-scoped resources, persistent volumes, and secrets. By default, cluster-scoped resources and persistent volumes are enabled.
+
+ :::image type="content" source="./media/azure-kubernetes-service-cluster-backup/cluster-scope-resources.png" alt-text="Screenshot that shows the Additional Resource Settings pane." lightbox="./media/azure-kubernetes-service-cluster-backup/cluster-scope-resources.png":::
+
+ > [!NOTE]
+ > All these resource settings are combined and applied via `AND` logic.
> [!NOTE] > You should add the labels to every YAML file that is deployed and that you want to back up. This includes namespace-scoped resources like persistent volume claims, and cluster-scoped resources like persistent volumes.
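
For instance, here's a minimal sketch of a persistent volume claim that carries a label a label-based backup configuration could match; the names and values are illustrative:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pvc        # illustrative name
  labels:
    env: prod            # label matched by the backup configuration filter
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```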
- If you also want to back up cluster-scoped resources, secrets, and persistent volumes, select the items under **Other Options**.
- :::image type="content" source="./media/azure-kubernetes-service-cluster-backup/various-backup-configurations.png" alt-text="Screenshot that shows various backup configurations."::: ## Use hooks during AKS backup
backup Backup Azure Afs Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-afs-automation.md
This article explains how to:
## Set up PowerShell > [!NOTE] > Azure PowerShell currently doesn't support backup policies with hourly schedule. Please use Azure Portal to leverage this feature. [Learn more](manage-afs-backup.md#create-a-new-policy)
backup Backup Azure Restore Key Secret https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-restore-key-secret.md
This article talks about using Azure VM Backup to perform restore of encrypted Azure VMs, if your key and secret don't exist in the key vault. These steps can also be used if you want to maintain a separate copy of the key (Key Encryption Key) and secret (BitLocker Encryption Key) for the restored VM. ## Prerequisites
backup Backup Azure Troubleshoot Slow Backup Performance Issue https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-troubleshoot-slow-backup-performance-issue.md
updates to the Backup agent to fix various issues, add features, and improve per
We also strongly recommend that you review the [Azure Backup service FAQ](backup-azure-backup-faq.yml) to make sure you're not experiencing any of the common configuration issues. ## Cause: Backup job running in unoptimized mode
backup Backup Azure Troubleshoot Vm Backup Fails Snapshot Timeout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-troubleshoot-vm-backup-fails-snapshot-timeout.md
This article provides troubleshooting steps that can help you resolve Azure Backup errors related to communication with the VM agent and extension. ## Step-by-step guide to troubleshoot backup failures
backup Backup Azure Vms Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-vms-automation.md
Review the **Az.RecoveryServices** [cmdlet reference](/powershell/module/az.reco
## Set up and register To begin:
backup Backup Client Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-client-automation.md
This article shows you how to use PowerShell to set up Azure Backup on Windows S
## Install Azure PowerShell To get started, [install the latest PowerShell release](/powershell/azure/install-azure-powershell).
backup Backup Dpm Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-dpm-automation.md
Sample DPM scripts: Get-DPMSampleScript
## Setup and Registration To begin, [download the latest Azure PowerShell](/powershell/azure/install-azure-powershell).
backup Backup Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-support-matrix.md
Other support matrices are available:
- Support matrix for backup by using [System Center Data Protection Manager (DPM)/Microsoft Azure Backup Server (MABS)](backup-support-matrix-mabs-dpm.md) - Support matrix for backup by using the [Microsoft Azure Recovery Services (MARS) agent](backup-support-matrix-mars-agent.md) ## Vault support
backup Disk Backup Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/disk-backup-troubleshoot.md
Recommended Action: Create the resource group and provide the required permissio
Error Message: Could not perform the operation as Managed Disk no longer exists.
-Recommended Action: The backups will continue to fail as the source disk may be deleted or moved to a different location. Use the existing restore point to restore the disk if it's deleted by mistake. If the disk is moved to a different location, configure backup for the disk.
+Recommended Action: The backups are failing because the source disk may have been deleted or moved to a different location. Use an existing restore point to restore the disk if it was deleted by mistake. If the disk was moved to a different location, configure backup for the disk.
+
+### UserErrorSnapshotResourceGroupHasLocks
+
+Error Message: This error code appears when a Delete or Read Lock has been applied on the Snapshot Resource Group provided as input for Backup Extension.
+
+Recommended Action: If you're configuring a new backup instance, use a resource group without a delete or read lock. If the backup instance is already configured, remove the lock from the snapshot resource group.
### Error Code: UserErrorNotEnoughPermissionOnDisk Error Message: Azure Backup Service requires additional permissions on the Disk to do this operation.
-Recommended Action: Grant the Backup vault's managed identity the appropriate permissions on the disk. Refer to [the documentation](backup-managed-disks.md) to understand what permissions are required by the Backup Vault managed identity and how to provide it.
+Recommended Action: Grant the Backup vault's managed identity the appropriate permissions on the disk. Refer to [the documentation](backup-managed-disks.md) to understand what permissions need to be assigned to the Backup vault managed identity and how to provide them.
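+
+For example, you can assign the built-in Disk Backup Reader role on the disk with the Azure CLI; a minimal sketch with placeholder values (confirm the required role in the linked documentation):
+
+```azurecli
+# Grant the Backup vault's managed identity read access to the disk for backup
+az role assignment create \
+  --assignee <backup-vault-principal-id> \
+  --role "Disk Backup Reader" \
+  --scope /subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Compute/disks/<disk-name>
+```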
### Error Code: UserErrorNotEnoughPermissionOnSnapshotRG Error Message: Azure Backup Service requires additional permissions on the Snapshot Data store Resource Group to do this operation.
-Recommended Action: Grant the Backup vault's managed identity the appropriate permissions on the snapshot data store resource group. The snapshot data store resource group is the location where the disk snapshots are stored. Refer to [the documentation](backup-managed-disks.md) to understand which is the resource group, what permissions are required by the Backup Vault managed identity and how to provide it.
+Recommended Action: Grant the Backup vault's managed identity the appropriate permissions on the snapshot data store resource group. The snapshot data store resource group is the location where the disk snapshots are stored. Refer to [the documentation](backup-managed-disks.md) to understand what permissions are required by the Backup Vault managed identity over the resource group and how to provide them.
### Error Code: UserErrorDiskBackupDiskOrMSIPermissionsNotPresent Error Message: Invalid disk or Azure Backup Service requires additional permissions on the Disk to do this operation
-Recommended Action: The backups will continue to fail as the source disk may be deleted or moved to a different location. Use the existing restore point to restore the disk if it's deleted by mistake. If the disk is moved to a different location, configure backup for the disk. If the disk isn't deleted or moved, grant the Backup vault's managed identity the appropriate permissions on the disk. Refer to [the documentation](backup-managed-disks.md) to understand what permissions are required by the Backup vault's managed identity and how to provide it.
+Recommended Action: The backups are failing because the source disk may have been deleted or moved to a different location. Use an existing restore point to restore the disk if it was deleted by mistake. If the disk was moved to a different location, configure backup for the disk. If the disk isn't deleted or moved, grant the Backup vault's managed identity the appropriate permissions on the disk. Refer to [the documentation](backup-managed-disks.md) to understand what permissions are to be assigned to the Backup vault's managed identity.
### Error Code: UserErrorDiskBackupSnapshotRGOrMSIPermissionsNotPresent Error Message: Could not perform the operation as Snapshot Data store Resource Group no longer exists. Or Azure Backup Service requires additional permissions on the Snapshot Data store Resource Group to do this operation.
-Recommended Action: Create a resource group and grant the Backup vault's managed identity the appropriate permissions on the snapshot data store resource group. The snapshot data store resource group is the location where the disk snapshots are stored. Refer to [the documentation](backup-managed-disks.md) to understand what is the resource group, what permissions are required by the Backup vault's managed identity and how to provide it.
+Recommended Action: Create a resource group and grant the Backup vault's managed identity the appropriate permissions on the snapshot data store resource group. The snapshot data store resource group is the location where the disk snapshots are stored. Refer to [the documentation](backup-managed-disks.md) to understand what permissions are to be assigned to the Backup vault's managed identity over the resource group.
### Error Code: UserErrorDiskBackupAuthorizationFailed Error Message: Backup Vault managed identity is missing the necessary permissions to do this operation.
-Recommended Action: Grant the Backup vault's managed identity the appropriate permissions on the disk to be backed up and on the snapshot data store resource group where the snapshots are stored. Refer to [the documentation](backup-managed-disks.md) to understand what permissions are required by the Backup vault's managed identity and how to provide it.
+Recommended Action: Grant the Backup vault's managed identity the appropriate permissions on the disk to be backed up and on the snapshot data store resource group where the snapshots are stored. Refer to [the documentation](backup-managed-disks.md) to understand what permissions are to be assigned to the Backup vault's managed identity.
### Error Code: UserErrorSnapshotRGOrMSIPermissionsNotPresent Error Message: Could not perform the operation as Snapshot Data store Resource Group no longer exists. Or, Azure Backup Service requires additional permissions on the Snapshot Data store Resource Group to do this operation.
-Recommended Action: Create the resource group and grant the Backup vault's managed identity the appropriate permissions on the snapshot data store resource group. The snapshot data store resource group is the location where the snapshots are stored. Refer to [the documentation](backup-managed-disks.md) to understand what is the resource group, what permissions are required by the Backup vault's managed identity, and how to provide it.
+Recommended Action: Create the resource group and grant the Backup vault's managed identity the appropriate permissions on the snapshot data store resource group. The snapshot data store resource group is the location where the snapshots are stored. Refer to [the documentation](backup-managed-disks.md) to understand what permissions are to be assigned to the Backup vault's managed identity over the resource group.
### Error Code: UserErrorOperationalStoreParametersNotProvided
Recommended Action: Provide a valid resource group to restore. For more informat
Error Message: Azure Backup Service requires additional permissions on the Target Resource Group to do this operation.
-Recommended Action: Grant the Backup vault's managed identity the appropriate permissions on the target resource group. The target resource group is the selected location where the disk is to be restored. Refer to the [restore documentation](restore-managed-disks.md) to understand what permissions are required by the Backup vault's managed identity, and how to provide it.
+Recommended Action: Grant the Backup vault's managed identity the appropriate permissions on the target resource group. The target resource group is the selected location where the disk is to be restored. Refer to the [restore documentation](restore-managed-disks.md) to understand what permissions are to be assigned to the Backup vault's managed identity.
### Error Code: UserErrorSubscriptionDiskQuotaLimitReached
-Error Message: Operation has failed as the Disk quota maximum limit has been reached on the subscription.
+Error Message: Operation failed as the maximum disk quota limit is reached for the subscription.
Recommended Action: Refer to the [Azure subscription and service limits and quota documentation](../azure-resource-manager/management/azure-subscription-service-limits.md) or contact Microsoft Support for further guidance. ### Error Code: UserErrorDiskBackupRestoreRGOrMSIPermissionsNotPresent
-Error Message: Operation failed as the Target Resource Group does not exist. Or Azure Backup Service requires additional permissions on the Target Resource Group to do this operation.
+Error Message: Operation failed as the Target Resource Group doesn't exist. Or Azure Backup Service requires additional permissions on the Target Resource Group to do this operation.
-Recommended Action: Provide a valid resource group to restore, and grant the Backup vault's managed identity the appropriate permissions on the target resource group. The target resource group is the selected location where the disk is to be restored. Refer to the [restore documentation](restore-managed-disks.md) to understand what permissions are required by the Backup vault's managed identity, and how to provide it.
+Recommended Action: Provide a valid resource group to restore, and grant the Backup vault's managed identity the appropriate permissions on the target resource group. The target resource group is the selected location where the disk is to be restored. Refer to the [restore documentation](restore-managed-disks.md) to understand what permissions are required to be assigned to the Backup vault's managed identity.
### Error Code: UserErrorDESKeyVaultKeyDisabled
Recommended Action: Ensure that the key vault key used for disk encryption set i
### Error Code: UserErrorDiskSnapshotNotFound
-Error Message: The disk snapshot for this Restore point has been deleted.
+Error Message: The disk snapshot for this Restore point is not accessible.
-Recommended Action: Snapshots are stored in the snapshot data store resource group within your subscription. It's possible that the snapshot related to the selected restore point might have been deleted or moved from this resource group. Consider using another Recovery point to restore. Also, follow the recommended guidelines for choosing Snapshot resource group mentioned in the [restore documentation](restore-managed-disks.md).
+Recommended Action: Snapshots are stored in the snapshot data store resource group within your subscription. The snapshot related to the selected restore point was either deleted or moved from this resource group. Consider using another recovery point to restore. Also, follow the recommended guidelines for choosing the snapshot resource group in the [restore documentation](restore-managed-disks.md).
### Error Code: UserErrorSnapshotMetadataNotFound
-Error Message: The disk snapshot metadata for this Restore point has been deleted
+Error Message: The disk snapshot metadata for this Restore point is deleted.
Recommended Action: Consider using another recovery point to restore. For more information, see the [restore documentation](restore-managed-disks.md).
Recommended Action: Consider using another recovery point to restore. For more i
Error Message: Disk Backup is not yet available in the region of the Backup Vault under which Configure Protection is being tried.
-Recommended Action: Backup Vault must be in a supported region. For region availability see the [the support matrix](disk-backup-support-matrix.md).
+Recommended Action: Backup Vault must be in a supported region. For region availability, see the [support matrix](disk-backup-support-matrix.md).
### Error Code: UserErrorDppDatasourceAlreadyHasBackupInstance
-Error Message: The disk you are trying to configure backup is already being protected. Disk is already associated with a backup instance in a Backup vault.
+Error Message: The disk that you're trying to configure backup for is already being protected. The disk is already associated with a backup instance in a Backup vault.
-Recommended Action: This disk is already associated with a backup instance in a Backup vault. If you want to re-protect this disk, then delete the backup instance from the Backup vault where it's currently protected and re-protect the disk in any other vault.
+Recommended Action: This disk is already associated with a backup instance in a Backup vault. If you want to reprotect this disk, delete the backup instance from the Backup vault where it's currently protected, and then reprotect the disk in another vault.
### Error Code: UserErrorDppDatasourceAlreadyProtected
-Error Message: The disk you are trying to configure backup is already being protected. Disk is already associated with a backup instance in a Backup vault.
+Error Message: The disk that you're trying to configure backup for is already being protected. The disk is already associated with a backup instance in a Backup vault.
-Recommended Action: This disk is already associated with a backup instance in a Backup vault. If you want to re-protect this disk, then delete the backup instance from the Backup vault where it is currently protected and re-protect the disk in any other vault.
+Recommended Action: This disk is already associated with a backup instance in a Backup vault. If you want to reprotect this disk, delete the backup instance from the Backup vault where it's currently protected, and then reprotect the disk in another vault.
### Error Code: UserErrorMaxConcurrentOperationLimitReached
-Error Message: Unable to start the operation as maximum number of allowed concurrent backups has reached.
+Error Message: Unable to start the operation as the maximum number of allowed concurrent backups is reached.
Recommended Action: Wait until the previous running backup completes.
### Error Code: UserErrorMissingSubscriptionRegistration
-Error Message: The subscription is not registered to use namespace 'Microsoft.Compute'.
+Error Message: The subscription isn't registered to use namespace 'Microsoft.Compute'.
-Recommended Action: The required resource provider hasn't been registered for your subscription. Register both the resource providers' namespace (_Microsoft.Compute_ and _Microsoft.Storage_) using the steps in [Solution 3](../azure-resource-manager/templates/error-register-resource-provider.md#solution-3azure-portal).
+Recommended Action: The required resource provider isn't registered for your subscription. Register both resource provider namespaces (_Microsoft.Compute_ and _Microsoft.Storage_) using the steps in [Solution 3](../azure-resource-manager/templates/error-register-resource-provider.md#solution-3azure-portal).
## Next steps
-[Azure Disk Backup support matrix](disk-backup-support-matrix.md)
+[Azure Disk Backup support matrix](disk-backup-support-matrix.md)
backup Quick Backup Vm Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/quick-backup-vm-powershell.md
This quickstart enables backup on an existing Azure VM. If you need to create a
This quickstart requires the Azure PowerShell AZ module version 1.0.0 or later. Run `Get-Module -ListAvailable Az` to find the version. If you need to install or upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell).
## Sign in and register
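A minimal sketch of that sign-in and registration flow (standard Az module usage; the resource provider namespace is the one Azure Backup relies on):

```powershell
# Sign in interactively, then make sure the Recovery Services provider
# is registered with the subscription before creating a vault.
Connect-AzAccount
Register-AzResourceProvider -ProviderNamespace "Microsoft.RecoveryServices"
```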
backup Quick Backup Vm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/quick-backup-vm-template.md
[Azure Backup](backup-overview.md) backs up on-premises machines and apps, and Azure VMs. This article shows you how to back up an Azure VM with an Azure Resource Manager template (ARM template) and Azure PowerShell. This quickstart focuses on the process of deploying an ARM template to create a Recovery Services vault. For more information on developing ARM templates, see the [Azure Resource Manager documentation](../azure-resource-manager/index.yml) and the [template reference](/azure/templates/microsoft.recoveryservices/allversions). A [Recovery Services vault](backup-azure-recovery-services-vault-overview.md) is a logical container that stores backup data for protected resources, such as Azure VMs. When a backup job runs, it creates a recovery point inside the Recovery Services vault. You can then use one of these recovery points to restore data to a given point in time. Alternatively, you can back up a VM using [Azure PowerShell](./quick-backup-vm-powershell.md), the [Azure CLI](quick-backup-vm-cli.md), or in the [Azure portal](quick-backup-vm-portal.md).
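As a hedged illustration of that deployment flow, it might look like the following sketch; the template URI and the `vaultName` parameter are placeholders, not values from the article:

```powershell
# Create a resource group, then deploy an ARM template that defines a
# Recovery Services vault. -vaultName is a dynamic parameter surfaced
# from the (hypothetical) template.
New-AzResourceGroup -Name "myResourceGroup" -Location "eastus"
New-AzResourceGroupDeployment `
    -ResourceGroupName "myResourceGroup" `
    -TemplateUri "https://example.com/templates/create-rsv.json" `
    -vaultName "myRecoveryServicesVault"
```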
backup Backup Powershell Sample Backup Encrypted Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/scripts/backup-powershell-sample-backup-encrypted-vm.md
This script creates a Recovery Services vault with geo-redundant storage (GRS) for an encrypted Azure virtual machine. The default protection policy is applied to the vault. The policy generates a daily backup for the virtual machine, and retains each backup for 365 days. The script also triggers the initial recovery point for the virtual machine and retains that recovery point for 30 days. ## Sample script [!code-powershell[main](../../../powershell_scripts/backup/backup-encrypted-vm/backup-encrypted-vm.ps1 "Back up encrypted virtual machine")]
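The script itself is linked above; as a rough sketch of its key steps (all names are placeholders, and the published script may differ):

```powershell
# Create a vault, set geo-redundant storage, and protect the VM with the
# default policy (daily backup, 365-day retention per the description).
$vault = New-AzRecoveryServicesVault -Name "myVault" -ResourceGroupName "myRG" -Location "eastus"
Set-AzRecoveryServicesBackupProperty -Vault $vault -BackupStorageRedundancy GeoRedundant
Set-AzRecoveryServicesVaultContext -Vault $vault

$policy = Get-AzRecoveryServicesBackupProtectionPolicy -Name "DefaultPolicy"
Enable-AzRecoveryServicesBackupProtection -Policy $policy -Name "myEncryptedVM" -ResourceGroupName "myRG"

# Trigger the initial recovery point and retain it for 30 days.
$container = Get-AzRecoveryServicesBackupContainer -ContainerType AzureVM -FriendlyName "myEncryptedVM"
$item = Get-AzRecoveryServicesBackupItem -Container $container -WorkloadType AzureVM
Backup-AzRecoveryServicesBackupItem -Item $item -ExpiryDateTimeUTC (Get-Date).ToUniversalTime().AddDays(30)
```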
backup Tutorial Backup Azure Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/tutorial-backup-azure-vm.md
# Back up Azure VMs with PowerShell This tutorial describes how to deploy an [Azure Backup](backup-overview.md) Recovery Services vault to back up multiple Azure VMs using PowerShell.
bastion Bastion Create Host Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-create-host-powershell.md
You can use the following example values when creating this configuration, or yo
This section helps you create a virtual network, subnets, and deploy Azure Bastion using Azure PowerShell. > [!IMPORTANT]
-> [!INCLUDE [Pricing](../../includes/bastion-pricing.md)]
+> [!INCLUDE [Pricing](~/reusable-content/ce-skilling/azure/includes/bastion-pricing.md)]
> 1. Create a resource group, a virtual network, and a front end subnet to which you deploy the VMs that you'll connect to via Bastion. If you're running PowerShell locally, open your PowerShell console with elevated privileges and connect to Azure using the `Connect-AzAccount` command.
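A minimal sketch of that first step, assuming example names and address ranges (Bastion requires a subnet named AzureBastionSubnet of at least /26):

```powershell
Connect-AzAccount
New-AzResourceGroup -Name "TestRG1" -Location "eastus"

# Front-end subnet for the VMs plus the dedicated Bastion subnet.
$frontend = New-AzVirtualNetworkSubnetConfig -Name "FrontEnd" -AddressPrefix "10.1.0.0/24"
$bastion  = New-AzVirtualNetworkSubnetConfig -Name "AzureBastionSubnet" -AddressPrefix "10.1.1.0/26"
New-AzVirtualNetwork -Name "VNet1" -ResourceGroupName "TestRG1" -Location "eastus" `
    -AddressPrefix "10.1.0.0/16" -Subnet $frontend, $bastion
```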
bastion Bastion Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-faq.md
No, Bastion connectivity to Azure Virtual Desktop isn't supported.
Review any error messages and [raise a support request in the Azure portal](../azure-portal/supportability/how-to-create-azure-support-request.md) as needed. Deployment failures can result from [Azure subscription limits, quotas, and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md). Specifically, customers might encounter a limit on the number of public IP addresses allowed per subscription that causes the Azure Bastion deployment to fail.
-### <a name="dr"></a>How do I incorporate Azure Bastion in my Disaster Recovery plan?
-
-Azure Bastion is deployed within virtual networks or peered virtual networks, and is associated to an Azure region. You're responsible for deploying Azure Bastion to a Disaster Recovery (DR) site virtual network. If there's an Azure region failure, perform a failover operation for your VMs to the DR region. Then, use the Azure Bastion host that's deployed in the DR region to connect to the VMs that are now deployed there.
- ### <a name="move-virtual-network"></a>Does Bastion support moving a VNet to another resource group? No. If you move your virtual network to another resource group (even if it's in the same subscription), you'll need to first delete Bastion from virtual network, and then proceed to move the virtual network to the new resource group. Once the virtual network is in the new resource group, you can deploy Bastion to the virtual network.
-### <a name="zone-redundant"></a>Does Bastion support zone redundancies?
-Currently, by default, new Bastion deployments don't support zone redundancies. Previously deployed bastions might or might not be zone-redundant. The exceptions are Bastion deployments in Korea Central and Southeast Asia, which do support zone redundancies.
### <a name="azure-ad-guests"></a>Does Bastion support Microsoft Entra guest accounts?
bastion Create Host Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/create-host-cli.md
Verify that you have an Azure subscription. If you don't already have an Azure s
This section helps you deploy Azure Bastion using Azure CLI. > [!IMPORTANT]
-> [!INCLUDE [Pricing](../../includes/bastion-pricing.md)]
+> [!INCLUDE [Pricing](~/reusable-content/ce-skilling/azure/includes/bastion-pricing.md)]
> 1. If you don't already have a virtual network, create a resource group and a virtual network using [az group create](/cli/azure/group#az-group-create) and [az network vnet create](/cli/azure/network/vnet#az-network-vnet-create).
bastion Native Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/native-client.md
This article helps you configure your Bastion deployment to accept connections f
You can configure this feature by modifying an existing Bastion deployment, or you can deploy Bastion with the feature configuration already specified. Your capabilities on the VM when connecting via native client are dependent on what is enabled on the native client. >[!NOTE]
->[!INCLUDE [Pricing](../../includes/bastion-pricing.md)]
+>[!INCLUDE [Pricing](~/reusable-content/ce-skilling/azure/includes/bastion-pricing.md)]
## Deploy Bastion with the native client feature
bastion Private Only Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/private-only-deployment.md
You can use the following example values when creating this configuration, or yo
This section helps you deploy Bastion as private-only to your virtual network. > [!IMPORTANT]
-> [!INCLUDE [Pricing](../../includes/bastion-pricing.md)]
+> [!INCLUDE [Pricing](~/reusable-content/ce-skilling/azure/includes/bastion-pricing.md)]
1. Sign in to the [Azure portal](https://portal.azure.com) and go to your virtual network. If you don't already have one, you can [create a virtual network](../virtual-network/quick-create-portal.md). If you're creating a virtual network for this exercise, you can create the AzureBastionSubnet (from the next step) at the same time you create your virtual network.
bastion Quickstart Host Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/quickstart-host-arm-template.md
By default, this template creates a Bastion deployment with a resource group, a
## Deploy the template > [!IMPORTANT]
-> [!INCLUDE [Pricing](../../includes/bastion-pricing.md)]
+> [!INCLUDE [Pricing](~/reusable-content/ce-skilling/azure/includes/bastion-pricing.md)]
In this section, you deploy Bastion by using the Azure portal. You don't connect and sign in to your virtual machine or deploy Bastion directly from your VM.
bastion Quickstart Host Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/quickstart-host-portal.md
The steps in this article help you:
* Remove your VM's public IP address if you don't need it for anything else. > [!IMPORTANT]
-> [!INCLUDE [Pricing](../../includes/bastion-pricing.md)]
+> [!INCLUDE [Pricing](~/reusable-content/ce-skilling/azure/includes/bastion-pricing.md)]
## <a name="prereq"></a>Prerequisites
bastion Tutorial Create Host Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/tutorial-create-host-portal.md
You can use the following example values when creating this configuration, or yo
This section helps you deploy Bastion to your virtual network. After Bastion is deployed, you can connect securely to any VM in the virtual network using its private IP address. > [!IMPORTANT]
-> [!INCLUDE [Pricing](../../includes/bastion-pricing.md)]
+> [!INCLUDE [Pricing](~/reusable-content/ce-skilling/azure/includes/bastion-pricing.md)]
1. Sign in to the [Azure portal](https://portal.azure.com).
bastion Upgrade Sku https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/upgrade-sku.md
This article helps you view and upgrade your Bastion SKU. Once you upgrade, you can't revert back to a lower SKU without deleting and reconfiguring Bastion. For more information about features and SKUs, see [Configuration settings](configuration-settings.md). ## View a SKU
batch Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/accounts.md
Title: Batch accounts and Azure Storage accounts description: Learn about Azure Batch accounts and how they're used from a development standpoint. Previously updated : 04/04/2024 Last updated : 06/25/2024 # Batch accounts and Azure Storage accounts
An Azure Batch account is a uniquely identified entity within the Batch service.
## Batch accounts
-All processing and resources are associated with a Batch account. When your application makes a request against the Batch service, it authenticates the request using the Azure Batch account name and the account URL. Additionally, it can use either an access key or a Microsoft Entra token.
+All processing and resources, such as tasks, jobs, and pools, are associated with a Batch account. When your application makes a request against the Batch service, it authenticates the request using the Azure Batch account name and the account URL. Additionally, it can use either an access key or a Microsoft Entra token.
You can run multiple Batch workloads in a single Batch account. You can also distribute your workloads among Batch accounts that are in the same subscription but located in different Azure regions.
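As a hedged sketch of the shared-key path (account and resource group names are placeholders):

```powershell
# Fetch the account keys to build a Batch context, then list pools
# against the account URL using shared key authentication.
$context = Get-AzBatchAccountKey -AccountName "mybatchaccount" -ResourceGroupName "myResourceGroup"
Get-AzBatchPool -BatchContext $context
```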
batch Batch Aad Auth https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-aad-auth.md
Title: Authenticate Azure Batch services with Microsoft Entra ID description: Learn how to authenticate Azure Batch service applications with Microsoft Entra ID by using integrated authentication or a service principal. Previously updated : 04/03/2023 Last updated : 06/25/2024
Azure Batch supports authentication with [Microsoft Entra ID](/azure/active-dire
This article describes two ways to use Microsoft Entra authentication with Azure Batch: -- **Integrated authentication** authenticates a user who's interacting with an application. The application gathers a user's credentials and uses those credentials to authorize access to Batch resources.
+- **Integrated authentication** authenticates a user who's interacting with an application. The application gathers a user's credentials and uses those credentials to authenticate access to Batch resources.
- A **service principal** authenticates an unattended application. The service principal defines the policy and permissions for the application and represents the application to access Batch resources at runtime.
batch Batch Apis Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-apis-tools.md
Title: APIs and tools for developers description: Learn about the APIs and tools available for developing solutions with the Azure Batch service. Previously updated : 06/13/2024 Last updated : 06/26/2024
For example, the [Batch service API to delete a pool](/rest/api/batchservice/poo
Whereas the [Batch management API to delete a pool](/rest/api/batchmanagement/pool/delete) is targeted at the management.azure.com layer: `DELETE https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Batch/batchAccounts/{accountName}/pools/{poolName}`
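One way to see the difference in practice is to issue the management-plane call through `Invoke-AzRestMethod`; in this sketch the subscription, resource group, account, pool names, and api-version are all placeholders:

```powershell
# DELETE against the management.azure.com layer for a Batch pool.
$path = "/subscriptions/00000000-0000-0000-0000-000000000000" +
        "/resourceGroups/myResourceGroup/providers/Microsoft.Batch" +
        "/batchAccounts/mybatchaccount/pools/mypool?api-version=2024-02-01"
Invoke-AzRestMethod -Method DELETE -Path $path
```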
-## Batch service APIs
+## Batch Service APIs
Your applications and services can issue direct REST API calls or use one or more of the following client libraries to run and manage your Azure Batch workloads.
The Azure Resource Manager APIs for Batch provide programmatic access to Batch a
| API | API reference | Download | Tutorial | Code samples | | | | | | | | **Batch Management REST** |[Azure REST API - Docs](/rest/api/batchmanagement/) |- |- |[GitHub](https://github.com/Azure-Samples/batch-dotnet-manage-batch-accounts) |
-| **Batch Management .NET** |[Azure SDK for .NET - Docs](/dotnet/api/overview/azure/batch/management/management-batch(deprecated)) |[NuGet](https://www.nuget.org/packages/Microsoft.Azure.Management.Batch/) | [Tutorial](batch-management-dotnet.md) |[GitHub](https://github.com/Azure-Samples/azure-batch-samples/tree/master/CSharp) |
+| **Batch Management .NET** |[Azure SDK for .NET - Docs](/dotnet/api/overview/azure/resourcemanager.batch-readme) |[NuGet](https://www.nuget.org/packages/Azure.ResourceManager.Batch/) | [Tutorial](batch-management-dotnet.md) |[GitHub](https://aka.ms/azuresdk-net-mgmt-samples) |
| **Batch Management Python** |[Azure SDK for Python - Docs](/samples/azure-samples/azure-samples-python-management/batch/) |[PyPI](https://pypi.org/project/azure-mgmt-batch/) |- |- | | **Batch Management JavaScript** |[Azure SDK for JavaScript - Docs](/javascript/api/overview/azure/arm-batch-readme) |[npm](https://www.npmjs.com/package/@azure/arm-batch) |- |- | | **Batch Management Java** |[Azure SDK for Java - Docs](/java/api/overview/azure/batch/management) |[Maven](https://search.maven.org/search?q=a:azure-batch) |- |- |
batch Batch Sig Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-sig-images.md
Title: Use the Azure Compute Gallery to create a custom image pool description: Custom image pools are an efficient way to configure compute nodes to run your Batch workloads. Previously updated : 03/20/2024 Last updated : 06/25/2024 ms.devlang: csharp # ms.devlang: csharp, python
Using a Shared Image configured for your scenario can provide several advantages
- **an Azure Compute Gallery image**. To create a Shared Image, you need to have or create a managed image resource. The image should be created from snapshots of the VM's OS disk and optionally its attached data disks. > [!NOTE]
-> If the Shared Image is not in the same subscription as the Batch account, you must [register the Microsoft.Batch resource provider](../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider) for that subscription. The two subscriptions must be in the same Microsoft Entra tenant.
+> If the Shared Image is not in the same subscription as the Batch account, you must [register the Microsoft.Batch resource provider](../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider) for the subscription that uses the Shared Image. The two subscriptions must be in the same Microsoft Entra tenant.
> > The image can be in a different region as long as it has replicas in the same region as your Batch account.
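A minimal sketch of that registration, assuming a placeholder ID for the subscription that holds the Shared Image:

```powershell
# Switch to the image's subscription, then register Microsoft.Batch there.
Set-AzContext -SubscriptionId "00000000-0000-0000-0000-000000000000"
Register-AzResourceProvider -ProviderNamespace "Microsoft.Batch"
```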
batch Managed Identity Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/managed-identity-pools.md
Title: Configure managed identities in Batch pools description: Learn how to enable user-assigned managed identities on Batch pools and how to use managed identities within the nodes. Previously updated : 06/18/2024 Last updated : 06/25/2024 ms.devlang: csharp
batch Nodes And Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/nodes-and-pools.md
Title: Nodes and pools in Azure Batch description: Learn about compute nodes and pools and how they are used in an Azure Batch workflow from a development standpoint. Previously updated : 06/13/2024 Last updated : 06/25/2024 # Nodes and pools in Azure Batch
-In an Azure Batch workflow, a *compute node* (or *node*) is a virtual machine that processes a portion of your application's workload. A *pool* is a collection of these nodes for your application to runs on. This article explains more about nodes and pools, along with considerations when creating and using them in an Azure Batch workflow.
+In an Azure Batch workflow, a *compute node* (or *node*) is a virtual machine that processes a portion of your application's workload. A *pool* is a collection of these nodes for your application to run on. This article explains more about nodes and pools, along with considerations when creating and using them in an Azure Batch workflow.
## Nodes
The pool can be created manually, or [automatically by the Batch service](#autop
- [Operating system and version](#operating-system-and-version) - [Configurations](#configurations) - [Virtual Machine Configuration](#virtual-machine-configuration)
- - [Cloud Services Configuration](#cloud-services-configuration)
- [Node Agent SKUs](#node-agent-skus) - [Custom images for Virtual Machine pools](#custom-images-for-virtual-machine-pools) - [Container support in Virtual Machine pools](#container-support-in-virtual-machine-pools)
When you create a Batch pool, you specify the Azure virtual machine configuratio
## Configurations
-There are two types of pool configurations available in Batch.
-
-> [!IMPORTANT]
-> While you can currently create pools using either configuration, new pools should be configured using Virtual Machine Configuration and not Cloud Services Configuration. All current and new Batch features will be supported by Virtual Machine Configuration pools. Cloud Services Configuration pools do not support all features and no new capabilities are planned. You won't be able to create new 'CloudServiceConfiguration' pools or add new nodes to existing pools [after February 29, 2024](https://azure.microsoft.com/updates/azure-batch-cloudserviceconfiguration-pools-will-be-retired-on-29-february-2024/).
- ### Virtual Machine Configuration The **Virtual Machine Configuration** specifies that the pool is composed of Azure virtual machines. These VMs may be created from either Linux or Windows images.
The **Virtual Machine Configuration** specifies that the pool is composed of Azu
The [Batch node agent](https://github.com/Azure/Batch/blob/master/changelogs/nodeagent/CHANGELOG.md) is a program that runs on each node in the pool and provides the command-and-control interface between the node and the Batch service. There are different implementations of the node agent, known as SKUs, for different operating systems. When you create a pool based on the Virtual Machine Configuration, you must specify not only the size of the nodes and the source of the images used to create them, but also the **virtual machine image reference** and the Batch **node agent SKU** to be installed on the nodes. For more information about specifying these pool properties, see [Provision Linux compute nodes in Azure Batch pools](batch-linux-nodes.md). You can optionally attach one or more empty data disks to pool VMs created from Marketplace images, or include data disks in custom images used to create the VMs. When including data disks, you need to mount and format the disks from within a VM to use them.
-### Cloud Services Configuration
-
-> [!WARNING]
-> Cloud Services Configuration pools are [deprecated](https://azure.microsoft.com/updates/azure-batch-cloudserviceconfiguration-pools-will-be-retired-on-29-february-2024/). Please use Virtual Machine Configuration pools instead. For more information, see [Migrate Batch pool configuration from Cloud Services to Virtual Machine](batch-pool-cloud-service-to-virtual-machine-configuration.md).
-
-The **Cloud Services Configuration** specifies that the pool is composed of Azure Cloud Services nodes. Cloud Services provides only Windows compute nodes.
-
-Available operating systems for Cloud Services Configuration pools are listed in the [Azure Guest OS releases and SDK compatibility matrix](../cloud-services/cloud-services-guestos-update-matrix.md), and available compute node sizes are listed in [Sizes for Cloud Services](../cloud-services/cloud-services-sizes-specs.md). When you create a pool that contains Cloud Services nodes, you specify the node size and its *OS Family* (which determines which versions of .NET are installed with the OS). Cloud Services is deployed to Azure more quickly than virtual machines running Windows. If you want pools of Windows compute nodes, you may find that Cloud Services provide a performance benefit in terms of deployment time.
-
-As with worker roles within Cloud Services, you can specify an *OS Version*. We recommend that you specify `Latest (*)` for the *OS Version* so that the nodes are automatically upgraded, and there is no work required to cater to newly released versions. The primary use case for selecting a specific OS version is to ensure application compatibility, which allows backward compatibility testing to be performed before allowing the version to be updated. After validation, the *OS Version* for the pool can be updated and the new OS image can be installed. Any running tasks will be interrupted and requeued.
- ### Node Agent SKUs When you create a pool, you need to select the appropriate **nodeAgentSkuId**, depending on the OS of the base image of your VHD. You can get a mapping of available node agent SKU IDs to their OS Image references by calling the [List Supported Node Agent SKUs](/rest/api/batchservice/list-supported-node-agent-skus) operation.
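With the Az.Batch module, that mapping can be listed as in the following sketch (the account context setup is a placeholder):

```powershell
# List supported images with their node agent SKU IDs.
$context = Get-AzBatchAccountKey -AccountName "mybatchaccount" -ResourceGroupName "myResourceGroup"
Get-AzBatchSupportedImage -BatchContext $context |
    Select-Object NodeAgentSkuId, OsType, @{ n = 'Offer'; e = { $_.ImageReference.Offer } }
```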
batch Quick Create Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/quick-create-bicep.md
Get started with Azure Batch by using a Bicep file to create a Batch account, in
After completing this quickstart, you'll understand the key concepts of the Batch service and be ready to try Batch with more realistic workloads at larger scale. ## Prerequisites You must have an active Azure subscription. -- [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+- [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
## Review the Bicep file
batch Quick Create Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/quick-create-cli.md
After you complete this quickstart, you understand the [key concepts of the Batc
## Prerequisites -- [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+- [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
- Azure Cloud Shell or Azure CLI.
batch Quick Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/quick-create-portal.md
After you complete this quickstart, you understand the [key concepts of the Batc
## Prerequisites -- [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+- [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
>[!NOTE] >For some regions and subscription types, quota restrictions might cause Batch account or node creation to fail or not complete. In this situation, you can request a quota increase at no charge. For more information, see [Batch service quotas and limits](batch-quota-limit.md).
batch Quick Create Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/quick-create-template.md
Get started with Azure Batch by using an Azure Resource Manager template (ARM te
After completing this quickstart, you'll understand the key concepts of the Batch service and be ready to try Batch with more realistic workloads at larger scale. If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
If your environment meets the prerequisites and you're familiar with using ARM t
You must have an active Azure subscription. -- [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+- [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
## Review the template
batch Batch Cli Sample Add Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/scripts/batch-cli-sample-add-application.md
keywords: batch, azure cli samples, azure cli code samples, azure cli script sam
This script demonstrates how to add an application for use with an Azure Batch pool or task. To set up an application to add to your Batch account, package your executable, together with any dependencies, into a zip file. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)] ## Sample script ### Create batch account and new application
az batch application set \
## Clean up resources ```azurecli az group delete --name $resourceGroup
batch Batch Cli Sample Create Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/scripts/batch-cli-sample-create-account.md
keywords: batch, azure cli samples, azure cli code samples, azure cli script sam
This script creates an Azure Batch account in Batch service mode and shows how to query or update various properties of the account. When you create a Batch account in the default Batch service mode, its compute nodes are assigned internally by the Batch service. Allocated compute nodes are subject to a separate vCPU (core) quota and the account can be authenticated either via shared key credentials or a Microsoft Entra token. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)] ## Sample script ### Run the script
service. Allocated compute nodes are subject to a separate vCPU (core) quota and
## Clean up resources ```azurecli az group delete --name $resourceGroup
batch Batch Cli Sample Create User Subscription Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/scripts/batch-cli-sample-create-user-subscription-account.md
keywords: batch, azure cli samples, azure cli examples, azure cli code samples
This script creates an Azure Batch account in user subscription mode. An account that allocates compute nodes into your subscription must be authenticated via a Microsoft Entra token. The compute nodes allocated count toward your subscription's vCPU (core) quota. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)] ## Sample script ### Run the script
This script creates an Azure Batch account in user subscription mode. An account
## Clean up resources ```azurecli az group delete --name $resourceGroup
batch Batch Cli Sample Manage Linux Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/scripts/batch-cli-sample-manage-linux-pool.md
keywords: linux, azure cli samples, azure cli code samples, azure cli script sam
This script demonstrates some of the commands available in the Azure CLI to create and manage a pool of Linux compute nodes in Azure Batch. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)] ## Sample script ### To create a Linux pool in Azure Batch
az batch node delete \
## Clean up resources ```azurecli az group delete --name $resourceGroup
batch Batch Cli Sample Manage Windows Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/scripts/batch-cli-sample-manage-windows-pool.md
keywords: windows pool, azure cli samples, azure cli code samples, azure cli scr
This script demonstrates some of the commands available in the Azure CLI to create and manage a pool of Windows compute nodes in Azure Batch. A Windows pool can be configured in two ways, with either a Cloud Services configuration or a Virtual Machine configuration. This example shows how to create a Windows pool with the Cloud Services configuration. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)] ## Sample script ### Run the script
manage a pool of Windows compute nodes in Azure Batch. A Windows pool can be con
## Clean up resources ```azurecli az group delete --name $resourceGroup
batch Batch Cli Sample Run Job https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/scripts/batch-cli-sample-run-job.md
keywords: batch, batch job, monitor job, azure cli samples, azure cli code sampl
This script creates a Batch job and adds a series of tasks to the job. It also demonstrates how to monitor a job and its tasks. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)] ## Sample script ### Create a Batch account in Batch service mode
az batch task show \
## Clean up resources ```azurecli az group delete --name $resourceGroup
batch Tutorial Parallel Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/tutorial-parallel-dotnet.md
Use Azure Batch to run large-scale parallel and high-performance computing (HPC)
In this tutorial, you convert MP4 media files to MP3 format, in parallel, by using the [ffmpeg](https://ffmpeg.org) open-source tool. ## Prerequisites
batch Tutorial Parallel Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/tutorial-parallel-python.md
Use Azure Batch to run large-scale parallel and high-performance computing (HPC)
In this tutorial, you convert MP4 media files to MP3 format, in parallel, by using the [ffmpeg](https://ffmpeg.org/) open-source tool. ## Prerequisites
cdn Cdn Add To Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-add-to-web-app.md
To complete this tutorial:
- [Install Git](https://git-scm.com/) - [Install the Azure CLI](/cli/azure/install-azure-cli) ## Create the web app
cdn Cdn Azure Diagnostic Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-azure-diagnostic-logs.md
To use an event hub for the logs, follow these steps:
The following example shows how to enable diagnostic logs via the Azure PowerShell Cmdlets. ### Enable diagnostic logs in a storage account
cdn Cdn Caching Rules Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-caching-rules-tutorial.md
In this tutorial, you learn how to:
> - Create a global caching rule. > - Create a custom caching rule. ## Prerequisites
cdn Cdn Create Endpoint How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-create-endpoint-how-to.md
Sign in to the [Azure portal](https://portal.azure.com) with your Azure account.
4. For **Origin type**, choose one of the following origin types: - **Storage** for Azure Storage
+ - **Storage static website** for Azure Storage static websites
- **Cloud service** for Azure Cloud Services - **Web App** for Azure Web Apps - **Custom origin** for any other publicly accessible origin web server (hosted in Azure or elsewhere)
cdn Cdn Custom Ssl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-custom-ssl.md
In this tutorial, you learn how to:
## Prerequisites Before you can complete the steps in this tutorial, create a CDN profile and at least one CDN endpoint. For more information, see [Quickstart: Create an Azure CDN profile and endpoint](cdn-create-new-endpoint.md).
cdn Cdn Manage Expiration Of Blob Content https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-manage-expiration-of-blob-content.md
The preferred method for setting a blob's `Cache-Control` header is to use cachi
## Setting Cache-Control headers by using Azure PowerShell [Azure PowerShell](/powershell/azure/) is one of the quickest and most powerful ways to administer your Azure services. Use the `Get-AzStorageBlob` cmdlet to get a reference to the blob, then set the `.ICloudBlob.Properties.CacheControl` property.
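A minimal sketch of that pattern, assuming placeholder account, container, and blob names:

```powershell
# Get a blob reference, set Cache-Control to one day, and persist it.
$ctx  = New-AzStorageContext -StorageAccountName "mystorageaccount" -UseConnectedAccount
$blob = Get-AzStorageBlob -Container "media" -Blob "video.mp4" -Context $ctx
$blob.ICloudBlob.Properties.CacheControl = "public, max-age=86400"
$blob.ICloudBlob.SetProperties()
```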
cdn Cdn Manage Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-manage-powershell.md
PowerShell provides one of the most flexible methods to manage your Azure Conten
## Prerequisites To use PowerShell to manage your Azure Content Delivery Network profiles and endpoints, you must have the Azure PowerShell module installed. To learn how to install Azure PowerShell and connect to Azure using the `Connect-AzAccount` cmdlet, see [How to install and configure Azure PowerShell](/powershell/azure/).
cdn Cdn Map Content To Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-map-content-to-custom-domain.md
In this tutorial, you learn how to:
> - Add a custom domain with your content delivery network endpoint. > - Verify the custom domain. ## Prerequisites
cdn Cdn Storage Custom Domain Https https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-storage-custom-domain-https.md
Previously updated : 03/20/2024 Last updated : 06/26/2024
In the above rule, leaving Hostname, Path, Query string, and Fragment results in
![Edgio redirect rule](./media/cdn-storage-custom-domain-https/cdn-url-redirect-rule.png)
-In the above rule, *Cdn-endpoint-name* refers to the name that you configured for your content delivery network endpoint, which you can select from the dropdown list. The value for *origin-path* refers to the path within your origin storage account where your static content resides. If you're hosting all static content in a single container, replace *origin-path* with the name of that container.
+In the above rule, *Cdn-endpoint-name* refers to the name that you configured for your content delivery network endpoint. The value for *origin-path* refers to the path within your origin storage account where your static content resides. If you're hosting all static content in a single container, replace *origin-path* with the name of that container.
## Pricing and billing
cdn Create Profile Endpoint Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/create-profile-endpoint-bicep.md
Get started with Azure Content Delivery Network by using a Bicep file. The Bicep file deploys a profile and an endpoint. ## Prerequisites ## Review the Bicep file
cdn Create Profile Endpoint Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/create-profile-endpoint-template.md
Get started with Azure Content Delivery Network by using an Azure Resource Manager template (ARM template). The template deploys a profile and an endpoint. If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template opens in the Azure portal.
If your environment meets the prerequisites and you're familiar with using ARM t
## Prerequisites ## Review the template
cdn Monitoring And Access Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/monitoring-and-access-log.md
Use [Set-AzDiagnosticSetting](/powershell/module/az.monitor/set-azdiagnosticsett
Retention data is defined by the **-RetentionInDays** option in the command. ### Enable diagnostic logs in a storage account
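As a hedged sketch (both resource IDs are placeholders; `CoreAnalytics` is the CDN access-log category):

```powershell
# Send CDN endpoint logs to a storage account, retained for 90 days.
Set-AzDiagnosticSetting `
    -ResourceId "/subscriptions/<subscription-id>/resourceGroups/myRG/providers/Microsoft.Cdn/profiles/myProfile/endpoints/myEndpoint" `
    -StorageAccountId "/subscriptions/<subscription-id>/resourceGroups/myRG/providers/Microsoft.Storage/storageAccounts/mystorageacct" `
    -Enabled $true -Category CoreAnalytics -RetentionEnabled $true -RetentionInDays 90
```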
chaos-studio Chaos Studio Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-bicep.md
# Use Bicep to create an experiment in Azure Chaos Studio This article includes a sample Bicep file to get started in Azure Chaos Studio, including:
chaos-studio Chaos Studio Permissions Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-permissions-security.md
Chaos Studio has the following operations:
| Microsoft.Chaos/experiments/start/action | Start a chaos experiment. | | Microsoft.Chaos/experiments/cancel/action | Stop a chaos experiment. | | Microsoft.Chaos/experiments/executions/Read | Get the execution status for a run of a chaos experiment. |
-| Microsoft.Chaos/experiments/getExecutionDetails/action | Get the execution details (status and errors for each action) for a run of a chaos experiment. |
+| Microsoft.Chaos/experiments/executions/getExecutionDetails/action | Get the execution details (status and errors for each action) for a run of a chaos experiment. |
To assign these permissions granularly, you can [create a custom role](../role-based-access-control/custom-roles.md).
chaos-studio Chaos Studio Quickstart Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-quickstart-azure-portal.md
Get started with Azure Chaos Studio by using a virtual machine (VM) shutdown service-direct experiment to make your service more resilient to that failure in real-world scenarios. ## Prerequisites-- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
- A Linux VM running an operating system in the [Azure Chaos Studio version compatibility](chaos-studio-versions.md) list. If you don't have a VM, [follow these steps to create one](../virtual-machines/linux/quick-create-portal.md). ## Register the Chaos Studio resource provider
chaos-studio Chaos Studio Tutorial Aad Outage Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-tutorial-aad-outage-portal.md
You can use a chaos experiment to verify that your application is resilient to f
## Prerequisites -- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
- A network security group. ## Enable Chaos Studio on your network security group
chaos-studio Chaos Studio Tutorial Agent Based Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-tutorial-agent-based-cli.md
You can use these same steps to set up and run an experiment for any agent-based
## Prerequisites -- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
- A virtual machine running an operating system in the [version compatibility](chaos-studio-versions.md) list. If you don't have a VM, you can [create one](../virtual-machines/linux/quick-create-portal.md). - A network setup that permits you to [SSH into your VM](../virtual-machines/ssh-keys-portal.md). - A user-assigned managed identity. If you don't have a user-assigned managed identity, you can [create one](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md).
chaos-studio Chaos Studio Tutorial Agent Based Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-tutorial-agent-based-portal.md
You can use these same steps to set up and run an experiment for any agent-based
## Prerequisites -- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
- A Linux VM running an operating system in the [version compatibility](chaos-studio-versions.md) list. If you don't have a VM, you can [create one](../virtual-machines/linux/quick-create-portal.md). - A network setup that permits you to [SSH into your VM](../virtual-machines/ssh-keys-portal.md). - A user-assigned managed identity *that was assigned to the target VM or virtual machine scale set*. If you don't have a user-assigned managed identity, you can [create one](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md).
chaos-studio Chaos Studio Tutorial Aks Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-tutorial-aks-cli.md
Chaos Studio uses [Chaos Mesh](https://chaos-mesh.org/), a free, open-source cha
## Prerequisites -- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
- An AKS cluster with Linux node pools. If you don't have an AKS cluster, see the AKS quickstart that uses the [Azure CLI](../aks/learn/quick-kubernetes-deploy-cli.md), [Azure PowerShell](../aks/learn/quick-kubernetes-deploy-powershell.md), or the [Azure portal](../aks/learn/quick-kubernetes-deploy-portal.md). ## Limitations
chaos-studio Chaos Studio Tutorial Aks Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-tutorial-aks-portal.md
Chaos Studio uses [Chaos Mesh](https://chaos-mesh.org/), a free, open-source cha
## Prerequisites -- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
- An AKS cluster with a Linux node pool. If you don't have an AKS cluster, see the AKS quickstart that uses the [Azure CLI](../aks/learn/quick-kubernetes-deploy-cli.md), [Azure PowerShell](../aks/learn/quick-kubernetes-deploy-powershell.md), or the [Azure portal](../aks/learn/quick-kubernetes-deploy-portal.md). ## Limitations
chaos-studio Chaos Studio Tutorial Availability Zone Down Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-tutorial-availability-zone-down-portal.md
You can use a chaos experiment to verify that your application is resilient to f
## Prerequisites -- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
- A Virtual Machine Scale Sets instance. - An Autoscale Settings instance.
chaos-studio Chaos Studio Tutorial Dynamic Target Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-tutorial-dynamic-target-cli.md
You can use these same steps to set up and run an experiment for any fault that
## Prerequisites -- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
- An Azure Virtual Machine Scale Sets instance. ## Open Azure Cloud Shell
chaos-studio Chaos Studio Tutorial Dynamic Target Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-tutorial-dynamic-target-portal.md
You can use these same steps to set up and run an experiment for any fault that
## Prerequisites -- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
- An Azure Virtual Machine Scale Sets instance. ## Enable Chaos Studio on your virtual machine scale sets
chaos-studio Chaos Studio Tutorial Service Direct Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-tutorial-service-direct-cli.md
You can use these same steps to set up and run an experiment for any service-dir
## Prerequisites -- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
- An Azure Cosmos DB account. If you don't have an Azure Cosmos DB account, you can [create one](../cosmos-db/sql/create-cosmosdb-resources-portal.md). - At least one read and one write region setup for your Azure Cosmos DB account.
chaos-studio Chaos Studio Tutorial Service Direct Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-tutorial-service-direct-portal.md
You can use these same steps to set up and run an experiment for any service-dir
## Prerequisites -- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
- An Azure Cosmos DB account. If you don't have an Azure Cosmos DB account, follow these steps to [create one](../cosmos-db/sql/create-cosmosdb-resources-portal.md). - At least one read and one write region setup for your Azure Cosmos DB account.
cloud-services Cloud Services Allocation Failures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-allocation-failures.md
When you deploy instances to a Cloud Service or add new web or worker role instances, Microsoft Azure allocates compute resources. You may occasionally receive errors when performing these operations even before you reach the Azure subscription limits. This article explains the causes of some of the common allocation failures and suggests possible remediation. The information may also be useful when you plan the deployment of your services.
### Background – How allocation works
cloud-services Cloud Services Troubleshoot Common Issues Which Cause Roles Recycle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-troubleshoot-common-issues-which-cause-roles-recycle.md
This article discusses some of the common causes of deployment problems and provides troubleshooting tips to help you resolve these problems. An indication that a problem exists with an application is when the role instance fails to start, or it cycles between the initializing, busy, and stopping states. ## Missing runtime dependencies If a role in your application relies on any assembly that is not part of the .NET Framework or the Azure managed library, you must explicitly include that assembly in the application package. Keep in mind that other Microsoft frameworks are not available on Azure by default. If your role relies on such a framework, you must add those assemblies to the application package.
cloud-services Cloud Services Troubleshoot Default Temp Folder Size Too Small Web Worker Role https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-troubleshoot-default-temp-folder-size-too-small-web-worker-role.md
The default temporary directory of a cloud service worker or web role has a maximum size of 100 MB, which may become full at some point. This article describes how to avoid running out of space for the temporary directory. ## Why do I run out of space? The standard Windows environment variables TEMP and TMP are available to code that is running in your application. Both TEMP and TMP point to a single directory that has a maximum size of 100 MB. Any data that is stored in this directory is not persisted across the lifecycle of the cloud service; if the role instances in a cloud service are recycled, the directory is cleaned.
cloud-services Cloud Services Troubleshoot Deployment Problems https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-troubleshoot-deployment-problems.md
You can find the **Properties** pane as follows:
> > ## Problem: I cannot access my website, but my deployment is started and all role instances are ready The website URL link shown in the portal does not include the port. The default port for websites is 80. If your application is configured to run in a different port, you must add the correct port number to the URL when accessing the website.
cloud-services Cloud Services Troubleshoot Roles That Fail Start https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-troubleshoot-roles-that-fail-start.md
Here are some common problems and solutions related to Azure Cloud Services roles that fail to start. ## Missing DLLs or dependencies Unresponsive roles and roles that are cycling between **Initializing**, **Busy**, and **Stopping** states can be caused by missing DLLs or assemblies.
communication-services Teams User Calling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/teams-user-calling.md
The following list presents the set of features that are currently available in
| | Transfer a call to a user | ✔️ | ✔️ | ✔️ | ✔️ | | | Be transferred to a user or call | ✔️ | ✔️ | ✔️ | ✔️ | | | Transfer a call to a call | ✔️ | ✔️ | ✔️ | ✔️ |
-| | Transfer a call to Voicemail | ❌ | ❌ | ❌ | ❌ |
+| | Transfer a call to Voicemail | ✔️ | ✔️ | ✔️ | ✔️ |
| | Be transferred to voicemail | ✔️ | ✔️ | ✔️ | ✔️ | | | Merge ongoing calls | ❌ | ❌ | ❌ | ❌ | | | Does start a call and add user operations honor shared line configuration | ✔️ | ✔️ | ✔️ | ✔️ |
communication-services Transfer Calls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/calling-sdk/transfer-calls.md
# Transfer calls
-During an active call, you may want to transfer the call to another person or number. Let's learn how.
+During an active call, you may want to transfer the call to another person, a number, or voicemail. Let's learn how.
## Prerequisites
confidential-computing Quick Create Confidential Vm Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/quick-create-confidential-vm-azure-cli.md
For this step you need to be a Global Admin or you need to have the User Access
``` 2. Create an Azure Key Vault using the [az keyvault create](/cli/azure/keyvault) command. For the pricing tier, select Premium (includes support for HSM backed keys). Make sure that you have an owner role in this key vault. ```azurecli-interactive
- az keyvault create -n keyVaultName -g myResourceGroup --enabled-for-disk-encryption true --sku premium --enable-purge-protection true
+ az keyvault create -n keyVaultName -g myResourceGroup --enabled-for-disk-encryption true --sku premium --enable-purge-protection true --enable-rbac-authorization false
``` 3. Give `Confidential VM Orchestrator` permissions to `get` and `release` the key vault. ```Powershell
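# Editor's sketch of step 3. The application ID below is an assumption
# (the commonly documented ID for the Confidential VM Orchestrator
# principal); confirm it against the quickstart before use.
$cvmAgent = Get-AzADServicePrincipal -ApplicationId "bf7b6499-ff71-4aa2-97a4-f372087be7f0"
Set-AzKeyVaultAccessPolicy -VaultName "keyVaultName" -ObjectId $cvmAgent.Id -PermissionsToKeys get,release
```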
confidential-computing Quick Create Confidential Vm Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/quick-create-confidential-vm-portal.md
To create a confidential VM in the Azure portal using an Azure Marketplace image
q. Go to the disk encryption set resource in the Azure portal.
- r. Select the pink banner to grant permissions to Azure Key Vault.
+ r. If you see a blue info banner, follow the instructions it provides to grant access. If you see a pink banner, select it to grant the necessary permissions to Azure Key Vault.
> [!IMPORTANT] > You must perform this step to successfully create the confidential VM.
confidential-ledger Quickstart Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/quickstart-cli.md
Azure confidential ledger is a cloud service that provides a high integrity stor
For more information on Azure confidential ledger and examples of what can be stored in a confidential ledger, see [About Microsoft Azure confidential ledger](overview.md). [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
confidential-ledger Quickstart Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/quickstart-portal.md
Azure confidential ledger is a cloud service that provides a high integrity store for sensitive data logs and records that require data to be kept intact. For more information on Azure confidential ledger and examples of what can be stored in a confidential ledger, see [About Microsoft Azure confidential ledger](overview.md). In this quickstart, you create a confidential ledger with the [Azure portal](https://portal.azure.com).
confidential-ledger Quickstart Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/quickstart-powershell.md
Azure confidential ledger is a cloud service that provides a high integrity stor
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. In this quickstart, you create a confidential ledger with [Azure PowerShell](/powershell/azure/). If you choose to install and use PowerShell locally, this tutorial requires Azure PowerShell module version 1.0.0 or later. Type `$PSVersionTable.PSVersion` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Login-AzAccount` to create a connection with Azure.
confidential-ledger Quickstart Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/quickstart-python.md
Get started with the Microsoft Azure confidential ledger client library for Pyth
Microsoft Azure confidential ledger is a new and highly secure service for managing sensitive data records. Based on a permissioned blockchain model, Azure confidential ledger offers unique data integrity advantages, such as immutability (making the ledger append-only) and tamper proofing (to ensure all records are kept intact). [API reference documentation](https://azuresdkdocs.blob.core.windows.net/$web/python/azure-confidentialledger/latest/azure.confidentialledger.html) | [Library source code](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/confidentialledger) | [Package (Python Package Index) Management Library](https://pypi.org/project/azure-mgmt-confidentialledger/)| [Package (Python Package Index) Client Library](https://pypi.org/project/azure-confidentialledger/)
confidential-ledger Quickstart Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/quickstart-template.md
Last updated 01/30/2024
[Microsoft Azure confidential ledger](overview.md) is a new and highly secure service for managing sensitive data records. This quickstart describes how to use an Azure Resource Manager template (ARM template) to create a new ledger. If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
connectors Connectors Create Api Azure Event Hubs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-azure-event-hubs.md
Last updated 01/04/2024
# Connect to an event hub from workflows in Azure Logic Apps The Azure Event Hubs connector helps you connect your logic app workflows to event hubs in Azure. You can then have your workflows monitor and manage events that are sent to an event hub. For example, your workflow can check, send, and receive events from your event hub. This article provides a get started guide to using the Azure Event Hubs connector by showing how to connect to an event hub and add an Event Hubs trigger or action to your workflow.
connectors Connectors Create Api Container Instances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-container-instances.md
Last updated 01/04/2024
# Deploy and manage Azure Container Instances by using Azure Logic Apps With Azure Logic Apps and the Azure Container Instance connector, you can set up automated tasks and workflows that deploy and manage [container groups](../container-instances/container-instances-container-groups.md). The Container Instance connector supports the following actions:
connectors Connectors Create Api Db2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-db2.md
Last updated 01/04/2024
# Access and manage IBM DB2 resources by using Azure Logic Apps With [Azure Logic Apps](../logic-apps/logic-apps-overview.md) and the [IBM DB2 connector](/connectors/db2/), you can create automated
connectors Connectors Create Api Informix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-informix.md
Last updated 01/04/2024
# Manage IBM Informix database resources by using Azure Logic Apps With [Azure Logic Apps](../logic-apps/logic-apps-overview.md) and the [Informix connector](/connectors/informix/), you can create automated tasks and workflows that manage resources in an IBM Informix database. This connector includes a Microsoft client that communicates with remote Informix server computers across a TCP/IP network, including cloud-based databases such as IBM Informix for Windows running in Azure virtualization and on-premises databases when you use the [on-premises data gateway](../logic-apps/logic-apps-gateway-connection.md). You can connect to these Informix platforms and versions if they are configured to support Distributed Relational Database Architecture (DRDA) client connections:
connectors Connectors Create Api Smtp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-smtp.md
Last updated 01/04/2024
# Connect to your SMTP account from Azure Logic Apps With Azure Logic Apps and the Simple Mail Transfer Protocol (SMTP) connector, you can create automated tasks and workflows that send email from your SMTP account.
connectors Connectors Integrate Security Operations Create Api Microsoft Graph Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-integrate-security-operations-create-api-microsoft-graph-security.md
Last updated 01/04/2024
# Improve threat protection by integrating security operations with Microsoft Graph Security & Azure Logic Apps With [Azure Logic Apps](../logic-apps/logic-apps-overview.md) and the [Microsoft Graph Security](/graph/security-concept-overview) connector, you can improve how your app detects, protects, and responds to threats by creating automated workflows for integrating Microsoft security products, services, and partners. For example, you can create [Microsoft Defender for Cloud playbooks](../security-center/workflow-automation.yml) that monitor and manage Microsoft Graph Security entities, such as alerts. Here are some scenarios that are supported by the Microsoft Graph Security connector:
connectors Connectors Native Delay https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-native-delay.md
Last updated 01/04/2024
# Delay running the next action in Azure Logic Apps To have your logic app wait an amount of time before running the next action, you can add the built-in **Delay** action before an action in your logic app's workflow. Or, you can add the built-in **Delay until** action to wait until a specific date and time before running the next action. For more information about the built-in Schedule actions and triggers, see [Schedule and run recurring automated, tasks, and workflows with Azure Logic Apps](../logic-apps/concepts-schedule-automated-recurring-tasks-workflows.md).
connectors Connectors Native Sliding Window https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-native-sliding-window.md
Last updated 01/04/2024
# Schedule and run tasks for contiguous data by using the Sliding Window trigger in Azure Logic Apps To regularly run tasks, processes, or jobs that must handle data in contiguous chunks, you can start your logic app workflow with the **Sliding Window** trigger. You can set a date and time as well as a time zone for starting the workflow and a recurrence for repeating that workflow. If recurrences are missed for any reason, for example, due to disruptions or disabled workflows, this trigger processes those missed recurrences. For example, when synchronizing data between your database and backup storage, use the Sliding Window trigger so that the data gets synchronized without incurring gaps. For more information about the built-in Schedule triggers and actions, see [Schedule and run recurring automated, tasks, and workflows with Azure Logic Apps](../logic-apps/concepts-schedule-automated-recurring-tasks-workflows.md).
container-apps Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/managed-identity.md
User-assigned identities are ideal for workloads that:
## Limitations -- Managed identities in scale rules isn't supported. You need to include connection strings or keys in the `secretRef` of the scaling rule.-- [Init containers](containers.md#init-containers) can't access managed identities.
+[Init containers](containers.md#init-containers) can't access managed identities in [consumption-only environments](environment.md#types) and [dedicated workload profile environments](environment.md#types).
## Configure managed identities
To get a token for a resource, make an HTTP `GET` request to the endpoint, inclu
+## Use managed identity for scale rules
+
+Starting in API version `2024-02-02-preview`, you can use managed identities in your scale rules to authenticate with Azure services that support managed identities. To use a managed identity in your scale rule, use the `identity` property instead of the `auth` property. Acceptable values for the `identity` property are either the Azure resource ID of a user-assigned identity or `system` to use a system-assigned identity.
+
+The following example shows how to use a managed identity with an Azure Queue Storage scale rule. The scale rule uses the `accountName` property to identify the storage account, while the `identity` property specifies which managed identity to use. You don't need to use the `auth` property.
+
+```json
+"scale": {
+ "minReplicas": 1,
+ "maxReplicas": 10,
+ "rules": [{
+ "name": "myQueueRule",
+ "azureQueue": {
+ "accountName": "mystorageaccount",
+ "queueName": "myqueue",
+ "queueLength": 2,
+ "identity": "<IDENTITY1_RESOURCE_ID>"
+ }
+ }]
+}
+```
+
+## Control managed identity availability
+
+Azure Container Apps allows you to specify [init containers](containers.md#init-containers) and main containers. By default, both main and init containers in a consumption workload profile environment can use managed identity to access other Azure services. In consumption-only environments and dedicated workload profile environments, only main containers can use managed identity. Managed identity access tokens are available for every managed identity configured on the container app. However, in some situations, only the init container or the main container requires access tokens for a managed identity. Other times, you may use a managed identity only to access your Azure Container Registry to pull the container image, while your application itself doesn't need access to the registry.
+
+Starting in API version `2024-02-02-preview`, you can control which managed identities are available to your container app during the init and main phases to follow the security principle of least privilege. The following options are available:
+
+- `Init`: available only to init containers. Use this when you want to perform some initialization work that requires a managed identity but no longer need the managed identity in the main container. This option is currently supported only in [workload profile consumption environments](environment.md#types).
+- `Main`: available only to main containers. Use this if your init container does not need managed identity.
+- `All`: available to all containers. This is the default setting.
+- `None`: not available to any containers. Use this when you have a managed identity that is only used for ACR image pull, scale rules, or Key Vault secrets and does not need to be available to the code running in your containers.
+
+The following example shows how to configure a container app on a workload profile consumption environment that:
+
+- Restricts the container app's system-assigned identity to main containers only.
+- Restricts a specific user-assigned identity to init containers only.
+- Uses a specific user-assigned identity for Azure Container Registry image pull without allowing the code in the containers to use that managed identity to access the registry. In this example, the containers themselves don't need to access the registry.
+
+This approach limits the resources that can be accessed if a malicious actor were to gain unauthorized access to the containers.
+
+```json
+{
+ "location": "eastus2",
+ "identity":{
+ "type": "SystemAssigned, UserAssigned",
+ "userAssignedIdentities": {
+ "<IDENTITY1_RESOURCE_ID>":{},
+ "<ACR_IMAGEPULL_IDENTITY_RESOURCE_ID>":{}
+ }
+ },
+ "properties": {
+ "workloadProfileName":"Consumption",
+ "environmentId": "<CONTAINER_APPS_ENVIRONMENT_ID>",
+ "configuration": {
+ "registries": [
+ {
+ "server": "myregistry.azurecr.io",
+          "identity": "<ACR_IMAGEPULL_IDENTITY_RESOURCE_ID>"
+ }],
+ "identitySettings":[
+ {
+          "identity": "<ACR_IMAGEPULL_IDENTITY_RESOURCE_ID>",
+ "lifecycle": "none"
+ },
+ {
+ "identity": "<IDENTITY1_RESOURCE_ID>",
+ "lifecycle": "init"
+ },
+ {
+ "identity": "system",
+ "lifecycle": "main"
+ }]
+ },
+ "template": {
+ "containers":[
+ {
+ "image":"myregistry.azurecr.io/main:1.0",
+ "name":"app-main"
+ }
+ ],
+ "initContainers":[
+ {
+ "image":"myregistry.azurecr.io/init:1.0",
+ "name":"app-init",
+          "name":"app-init"
+ ]
+ }
+ }
+}
+```
+ ## View managed identities You can show the system-assigned and user-assigned managed identities using the following Azure CLI command. The output shows the managed identity type, tenant IDs and principal IDs of all managed identities assigned to your container app.
container-apps Sessions Custom Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/sessions-custom-container.md
Previously updated : 05/06/2024 Last updated : 06/26/2024
In addition to the built-in code interpreter that Azure Container Apps dynamic s
## Uses for custom container sessions
-Custom containers allow you to build solutions tailored to your needs. They enable you to execute code or applications in environments that are fast and ephemeral and offer secure, sandboxed spaces with Hyper-V. Additionally, they can be configured with optional network isolation. Some examples include:
+Custom containers allow you to build solutions tailored to your needs. They enable you to execute code or run applications in environments that are fast and ephemeral and offer secure, sandboxed spaces with Hyper-V. Additionally, they can be configured with optional network isolation. Some examples include:
* **Code interpreters**: When you need to execute untrusted code in secure sandboxes by a language not supported in the built-in interpreter, or you need full control over the code interpreter environment.
-* **Isolated execution**: When you need to run applications in hostile, multitenant scenarios where each tenant or user has their own sandboxed environment. These environments are isolated from each other and from the host application. Some examples include applications that run user-provided code, code that grants end user access to a cloud-based shell, and development environments.
+* **Isolated execution**: When you need to run applications in hostile, multitenant scenarios where each tenant or user has their own sandboxed environment. These environments are isolated from each other and from the host application. Some examples include applications that run user-provided code, code that grants end user access to a cloud-based shell, AI agents, and development environments.
## Using custom container sessions
When your application requests a session, an instance is instantly allocated fro
To create a custom container session pool, you need to provide a container image and pool configuration settings.
-You communicate with each session using HTTP requests. The custom container must expose an HTTP server on a port that you specify to respond to these requests.
+You invoke or communicate with each session using HTTP requests. The custom container must expose an HTTP server on a port that you specify to respond to these requests.
# [Azure CLI](#tab/azure-cli)
Your application interacts with a session using the session pool's management AP
A pool management endpoint for custom container sessions follows this format: `https://<SESSION_POOL>.<ENVIRONMENT_ID>.<REGION>.azurecontainerapps.io`. To retrieve the session pool's management endpoint, use the `az containerapp sessionpool show` command:- ```bash az containerapp sessionpool show \ --name <SESSION_POOL_NAME> \
az containerapp sessionpool show \
All requests to the pool management endpoint must include an `Authorization` header with a bearer token. To learn how to authenticate with the pool management API, see [Authentication](sessions.md#authentication).
-Every request to the API requires query string parameter of `identifier` with value of the session ID. The session ID is a unique identifier for the session that allows you to interact with specific sessions. To learn more about session identifiers, see [Session identifiers](sessions.md#session-identifiers).
+Each API request must also include the query string parameter `identifier` with the session ID. This unique session ID enables your application to interact with specific sessions. To learn more about session identifiers, see [Session identifiers](sessions.md#session-identifiers).
+
+> [!IMPORTANT]
+> The session identifier is sensitive information that must be created and managed through a secure process. To protect its value, your application must ensure that each user or tenant can access only their own sessions.
+> Failure to secure access to sessions can result in misuse or unauthorized access to the data stored in your users' sessions. For more information, see [Session identifiers](sessions.md#session-identifiers).
+
+#### Forwarding requests to the session's container
+
+Anything in the path following the base pool management endpoint is forwarded to the session's container.
+
+For example, if you make a call to `<POOL_MANAGEMENT_ENDPOINT>/api/uploadfile`, the request is routed to the session's container at `0.0.0.0:<TARGET_PORT>/api/uploadfile`.
+
+#### Continuous session interaction
+
+You can continue making requests to the same session. If there are no requests to the session for longer than the cooldown period, the session is automatically deleted.
+
+#### Sample request
The following example shows a request to a custom container session by a user ID. Before you send the request, replace the placeholders between the `<>` brackets with values specific to your request. ```http
-POST https://<SESSION_POOL_NAME>.<ENVIRONMENT_ID>.<REGION>.azurecontainerapps.io/api/execute-command?identifier=<USER_ID>
+POST https://<SESSION_POOL_NAME>.<ENVIRONMENT_ID>.<REGION>.azurecontainerapps.io/<API_PATH_EXPOSED_BY_CONTAINER>?identifier=<USER_ID>
Authorization: Bearer <TOKEN>- { "command": "echo 'Hello, world!'" }
Authorization: Bearer <TOKEN>
This request is forwarded to the custom container session with the identifier for the user's ID. If the session isn't already running, Azure Container Apps allocates a session from the pool before forwarding the request.
-In the example, the session's container receives the request at `http://0.0.0.0:<INGRESS_PORT>/api/execute-command`.
+In the example, the session's container receives the request at `http://0.0.0.0:<INGRESS_PORT>/<API_PATH_EXPOSED_BY_CONTAINER>`.
## Next steps
container-apps Tutorial Java Quarkus Connect Managed Identity Postgresql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/tutorial-java-quarkus-connect-managed-identity-postgresql-database.md
What you will learn:
> * Create a PostgreSQL database in Azure. > * Connect to a PostgreSQL Database with managed identity using Service Connector. ## 1. Prerequisites
container-instances Container Instances Egress Ip Address https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-egress-ip-address.md
In this article, you use the Azure CLI to create the resources for this scenario
You then validate ingress and egress from example container groups through the firewall. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)] > [!NOTE] > To download the complete script, go to [full script](https://github.com/Azure-Samples/azure-cli-samples/blob/master/container-instances/egress-ip-address.sh).
container-instances Container Instances Environment Variables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-environment-variables.md
For example, if you run the Microsoft [aci-wordcount][aci-wordcount] container i
If you need to pass secrets as environment variables, Azure Container Instances supports [secure values](#secure-values) for both Windows and Linux containers. ## Azure CLI example
container-instances Container Instances Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-log-analytics.md
To send container group log and event data to Azure Monitor logs, specify an exi
The following sections describe how to create a logging-enabled container group and how to query logs. You can also [update a container group](container-instances-update.md) with a workspace ID and workspace key to enable logging. ## Prerequisites
container-instances Container Instances Multi Container Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-multi-container-group.md
A Resource Manager template can be readily adapted for scenarios when you need t
> [!NOTE] > Multi-container groups are currently restricted to Linux containers. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
container-instances Container Instances Multi Container Yaml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-multi-container-yaml.md
In this tutorial, you follow steps to run a simple two-container sidecar configu
> [!NOTE] > Multi-container groups are currently restricted to Linux containers. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
container-instances Container Instances Nat Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-nat-gateway.md
You then validate egress from example container groups through the NAT gateway.
> [!NOTE] > The ACI service recommends integrating with a NAT gateway for containerized workloads that have static egress but not static ingress requirements. For ACI architecture that supports both static ingress and egress, please see the following tutorial: [Use Azure Firewall for ingress and egress](container-instances-egress-ip-address.md). [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)] > [!NOTE] > To download the complete script, go to [full script](https://github.com/Azure-Samples/azure-cli-samples/blob/master/container-instances/nat-gateway.sh).
container-instances Container Instances Quickstart Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-quickstart-bicep.md
Use Azure Container Instances to run serverless Docker containers in Azure with simplicity and speed. Deploy an application to a container instance on-demand when you don't need a full container orchestration platform like Azure Kubernetes Service. In this quickstart, you use a Bicep file to deploy an isolated Docker container and make its web application available with a public IP address. ## Prerequisites
container-instances Container Instances Quickstart Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-quickstart-powershell.md
In this quickstart, you use Azure PowerShell to deploy an isolated Windows conta
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin. If you choose to install and use the PowerShell locally, this tutorial requires the Azure PowerShell module. Run `Get-Module -ListAvailable Az` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
container-instances Container Instances Quickstart Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-quickstart-template.md
Use Azure Container Instances to run serverless Docker containers in Azure with simplicity and speed. Deploy an application to a container instance on-demand when you don't need a full container orchestration platform like Azure Kubernetes Service. In this quickstart, you use an Azure Resource Manager template (ARM template) to deploy an isolated Docker container and make its web application available with a public IP address. If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
container-instances Container Instances Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-quickstart.md
In this quickstart, you use the Azure CLI to deploy an isolated Docker container
![View an app deployed to Azure Container Instances in browser][aci-app-browser] [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
container-instances Container Instances Volume Azure Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-volume-azure-files.md
By default, Azure Container Instances are stateless. If the container is restart
## Limitations
+* Azure Storage doesn't support SMB mounting of a file share using managed identity.
* You can only mount Azure Files shares to Linux containers. Review more about the differences in feature support for Linux and Windows container groups in the [overview](container-instances-overview.md#linux-and-windows-containers). * Azure file share volume mount requires the Linux container run as *root* . * Azure File share volume mounts are limited to CIFS support.
container-registry Container Registry Artifact Cache https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-artifact-cache.md
Artifact cache currently supports the following upstream registries:
| Microsoft Artifact Registry | Supports unauthenticated pulls only. | Azure CLI, Azure portal | | AWS Elastic Container Registry (ECR) Public Gallery | Supports unauthenticated pulls only. | Azure CLI, Azure portal | | GitHub Container Registry | Supports both authenticated and unauthenticated pulls. | Azure CLI, Azure portal |
-| Nvidia | Supports both authenticated and unauthenticated pulls. | Azure CLI |
| Quay | Supports both authenticated and unauthenticated pulls. | Azure CLI, Azure portal | | registry.k8s.io | Supports both authenticated and unauthenticated pulls. | Azure CLI | | Google Container Registry | Supports both authenticated and unauthenticated pulls. | Azure CLI |
container-registry Container Registry Event Grid Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-event-grid-quickstart.md
After you complete the steps in this article, events sent from your container re
![Web browser rendering the sample web application with three received events][sample-app-01] [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
container-registry Container Registry Get Started Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-get-started-bicep.md
This quickstart shows how to create an Azure Container Registry instance by using a Bicep file. ## Prerequisites
container-registry Container Registry Get Started Geo Replication Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-get-started-geo-replication-template.md
This quickstart shows how to create an Azure Container Registry instance by using an Azure Resource Manager template (ARM template). The template sets up a [geo-replicated](container-registry-geo-replication.md) registry, which automatically synchronizes registry content across more than one Azure region. Geo-replication enables network-close access to images from regional deployments, while providing a single management experience. It's a feature of the [Premium](container-registry-skus.md) registry service tier. A registry with replications doesn't support ARM/Bicep template Complete mode deployments.
container-registry Container Registry Get Started Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-get-started-powershell.md
Azure Container Registry is a private registry service for building, storing, an
## Prerequisites This quickstart requires Azure PowerShell module. Run `Get-Module -ListAvailable Az` to determine your installed version. If you need to install or upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell).
container-registry Container Registry Image Lock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-image-lock.md
az acr repository update \
--delete-enabled true --write-enabled true ```
-To restore the default behavior of the *myrepo* repository and all images so that they can be deleted and updated, run the following command:
+To restore the default behavior of the *myrepo* repository, enabling individual images to be deleted and updated, run the following command:
```azurecli az acr repository update \
az acr repository update \
--delete-enabled true --write-enabled true ```
+However, if there is a lock on the manifest, you need to run an additional command to unlock the manifest.
+
+```azurecli
+az acr repository update \
+ --name myregistry --image $repo@$digest \
+ --delete-enabled true --write-enabled true
+```
+ ## Next steps In this article, you learned about using the [az acr repository update][az-acr-repository-update] command to prevent deletion or updating of image versions in a repository. To set additional attributes, see the [az acr repository update][az-acr-repository-update] command reference.
container-registry Container Registry Quickstart Task Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-quickstart-task-cli.md
In this quickstart, you use [Azure Container Registry Tasks][container-registry-
After this quickstart, explore more advanced features of ACR Tasks using the [tutorials](container-registry-tutorial-quick-task.md). ACR Tasks can automate image builds based on code commits or base image updates, or test multiple containers, in parallel, among other scenarios. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
container-registry Troubleshoot Artifact Cache https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/troubleshoot-artifact-cache.md
Artifact cache currently supports the following upstream registries:
| Microsoft Artifact Registry | Supports unauthenticated pulls only. | Azure CLI, Azure portal | | AWS Elastic Container Registry (ECR) Public Gallery | Supports unauthenticated pulls only. | Azure CLI, Azure portal | | GitHub Container Registry | Supports both authenticated and unauthenticated pulls. | Azure CLI, Azure portal |
-| Nvidia | Supports both authenticated and unauthenticated pulls. | Azure CLI |
| Quay | Supports both authenticated and unauthenticated pulls. | Azure CLI, Azure portal | | registry.k8s.io | Supports both authenticated and unauthenticated pulls. | Azure CLI | | Google Container Registry | Supports both authenticated and unauthenticated pulls. | Azure CLI |
cosmos-db Ai Agents https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/ai-agents.md
+
+ Title: AI agents
+description: AI agent key concepts and implementation of AI agent memory system.
+Last updated : 06/26/2024
+# AI agents
+
+AI agents are designed to perform specific tasks, answer questions, and automate processes for users. These agents vary widely in complexity, ranging from simple chatbots, to copilots, to advanced AI assistants in the form of digital or robotic systems that can execute complex workflows autonomously. This article provides conceptual overviews of AI agents and detailed implementation samples.
+
+## What are AI agents?
+
+Unlike standalone large language models (LLMs) or rule-based software/hardware systems, AI agents possess the following common features:
+
+- [Planning](#reasoning-and-planning). AI agents can plan and sequence actions to achieve specific goals. The integration of LLMs has revolutionized their planning capabilities.
+- [Tool usage](#frameworks). Advanced AI agents can utilize various tools, such as code execution, search, and computation capabilities, to perform tasks effectively. Tool usage is often done through function calling, as sketched after this list.
+- [Perception](#frameworks). AI agents can perceive and process information from their environment, including visual, auditory, and other sensory data, making them more interactive and context aware.
+- [Memory](#agent-memory-system). AI agents possess the ability to remember past interactions (tool usage and perception) and behaviors (tool usage and planning). They store these experiences and even perform self-reflection to inform future actions. This memory component allows for continuity and improvement in agent performance over time.
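+
+To make the tool-usage idea concrete, the following minimal sketch shows the function-calling pattern: the agent exposes a tool schema to the LLM and routes the model's structured call back to a Python function. All names here are illustrative, and the scripted call stands in for a real model response.
+
+```python
+import json
+
+def get_weather(city: str) -> str:
+    """A stand-in tool; a real agent might call a weather API here."""
+    return f"Sunny and 22 C in {city}"
+
+# Tool registry with a JSON-schema-style parameter description,
+# similar to what would be passed to an LLM that supports function calling.
+TOOLS = {
+    "get_weather": {
+        "function": get_weather,
+        "parameters": {"city": {"type": "string"}},
+    }
+}
+
+def dispatch(tool_call_json: str) -> str:
+    """Route a model-emitted tool call to the matching Python function."""
+    call = json.loads(tool_call_json)
+    tool = TOOLS[call["name"]]["function"]
+    return tool(**call["arguments"])
+
+# Simulate the LLM emitting a structured tool call.
+print(dispatch('{"name": "get_weather", "arguments": {"city": "Seattle"}}'))
+```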
+
+> [!NOTE]
+> The usage of the term "memory" in the context of AI agents should not be confused with the concept of computer memory (like volatile, non-volatile, and persistent memory).
+
+### Copilots
+
+Copilots are a type of AI agent designed to work alongside users rather than operate independently. Unlike fully automated agents, copilots provide suggestions and recommendations to assist users in completing tasks. For instance, when a user is writing an email, a copilot might suggest phrases, sentences, or paragraphs. The user might also ask the copilot to find relevant information in other emails or files to support the suggestion (see [retrieval-augmented generation](vector-database.md#retrieval-augmented-generation)). The user can accept, reject, or edit the suggested passages.
+
+### Autonomous agents
+
+Autonomous agents can operate more independently. When you set up autonomous agents to assist with email composition, you could enable them to perform the following tasks:
+
+- Consult existing emails, chats, files, and other internal and public information that are related to the subject matter
+- Perform qualitative or quantitative analysis on the collected information, and draw conclusions that are relevant to the email
+- Write the complete email based on the conclusions and incorporate supporting evidence
+- Attach relevant files to the email
+- Review the email to ensure that all the incorporated information is factually accurate, and that the assertions are valid
+- Select the appropriate recipients for "To," "Cc," and/or "Bcc" and look up their email addresses
+- Schedule an appropriate time to send the email
+- Perform follow-ups if responses are expected but not received
+
+You may configure the agents to perform each of the above steps with or without human approval.
+
+### Multi-agent systems
+
+Currently, the prevailing strategy for achieving performant autonomous agents is through multi-agent systems. In multi-agent systems, multiple autonomous agents, whether in digital or robotic form, interact or work together to achieve individual or collective goals. Agents in the system can operate independently and possess their own knowledge or information. Each agent may also have the capability to perceive its environment, make decisions, and execute actions based on its objectives.
+
+Key characteristics of multi-agent systems:
+
+- Autonomous: Each agent functions independently, making its own decisions without direct human intervention or control by other agents.
+- Interactive: Agents communicate and collaborate with each other to share information, negotiate, and coordinate their actions. This interaction can occur through various protocols and communication channels.
+- Goal-oriented: Agents in a multi-agent system are designed to achieve specific goals, which can be aligned with individual objectives or a common objective shared among the agents.
+- Distributed: Multi-agent systems operate in a distributed manner, with no single point of control. This distribution enhances the system's robustness, scalability, and resource efficiency.
+
+A multi-agent system provides the following advantages over a copilot or a single instance of LLM inference:
+
+- Dynamic reasoning: Compared to chain-of-thought or tree-of-thought prompting, multi-agent systems allow for dynamic navigation through various reasoning paths.
+- Sophisticated abilities: Multi-agent systems can handle complex or large-scale problems by conducting thorough decision-making processes and distributing tasks among multiple agents.
+- Enhanced memory: Multi-agent systems with memory can overcome the context-window limits of large language models, enabling better understanding and information retention.
+
+## Implement AI agents
+
+### Reasoning and planning
+
+Complex reasoning and planning are the hallmark of advanced autonomous agents. Popular autonomous agent frameworks incorporate one or more of the following methodologies for reasoning and planning:
+
+[Self-ask](https://arxiv.org/abs/2210.03350)
+Improves on chain-of-thought prompting by having the model explicitly ask itself (and answer) follow-up questions before answering the initial question.
+
+[Reason and Act (ReAct)](https://arxiv.org/abs/2210.03629)
+Uses LLMs to generate both reasoning traces and task-specific actions in an interleaved manner. Reasoning traces help the model induce, track, and update action plans and handle exceptions, while actions let it interface with external sources, such as knowledge bases or environments, to gather additional information. A minimal sketch of this loop appears after this list.
+
+[Plan and Solve](https://arxiv.org/abs/2305.04091)
+Devises a plan to divide the entire task into smaller subtasks, and then carries out the subtasks according to the plan. This approach mitigates the calculation errors, missing-step errors, and semantic misunderstanding errors that are often present in zero-shot chain-of-thought (CoT) prompting.
+
+[Reflection/Self-critique](https://arxiv.org/abs/2303.11366)
+Reflexion agents verbally reflect on task feedback signals, then maintain their own reflective text in an episodic memory buffer to induce better decision-making in subsequent trials.
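+
+To illustrate, here is a minimal, self-contained sketch of a ReAct-style loop. The scripted `llm` stub stands in for a real model, and all names are illustrative rather than taken from any framework.
+
+```python
+def llm(prompt: str) -> str:
+    # A real agent would call a model here; this stub scripts two turns.
+    if "Observation" not in prompt:
+        return "Thought: I need the population.\nAction: lookup[Tokyo]"
+    return "Thought: I have enough information.\nFinal Answer: about 14 million"
+
+def lookup(entity: str) -> str:
+    # Stand-in for a knowledge-base or search tool.
+    return {"Tokyo": "Tokyo's population is about 14 million."}[entity]
+
+prompt = "Question: What is the population of Tokyo?"
+for _ in range(5):  # cap the number of interleaved reason/act turns
+    step = llm(prompt)
+    print(step)
+    if "Final Answer:" in step:
+        break
+    # Parse the action, run the tool, and feed the observation back.
+    action_arg = step.split("Action: lookup[")[1].rstrip("]")
+    prompt += f"\n{step}\nObservation: {lookup(action_arg)}"
+```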
+
+### Frameworks
+
+Various frameworks and tools can facilitate the development and deployment of AI agents.
+
+For tool usage and perception that do not require sophisticated planning and memory, some popular LLM orchestrator frameworks are LangChain, LlamaIndex, Prompt Flow, and Semantic Kernel.
+
+For advanced and autonomous planning and execution workflows, [AutoGen](https://microsoft.github.io/autogen/) propelled the multi-agent wave that began in late 2022. OpenAI's [Assistants API](https://platform.openai.com/docs/assistants/overview) allows users to create agents natively within the GPT ecosystem. [LangChain Agents](https://python.langchain.com/v0.1/docs/modules/agents/) and [LlamaIndex Agents](https://docs.llamaindex.ai/en/stable/use_cases/agents/) also emerged around the same time.
+
+> [!TIP]
+> See the implementation sample section at the end of this article for a tutorial on building a simple multi-agent system using one of the popular frameworks and a unified agent memory system.
+
+### Agent memory system
+
+The prevalent practice for experimenting with AI-enhanced applications in 2022 through 2024 has been using standalone database management systems for various data workflows or types. For example, an in-memory database for caching, a relational database for operational data (including tracing/activity logs and LLM conversation history), and a [pure vector database](vector-database.md#integrated-vector-database-vs-pure-vector-database) for embedding management.
+
+However, this practice of using a complex web of standalone databases can hurt an AI agent's performance. Integrating all these disparate databases into a cohesive, interoperable, and resilient memory system for AI agents is a significant challenge in and of itself. Moreover, many of the frequently used database services are not optimal for the speed and scalability that AI agent systems need. These databases' individual weaknesses are exacerbated in multi-agent systems:
+
+**In-memory databases** are excellent for speed but may struggle with the large-scale data persistence that AI agents require.
+
+**Relational databases** are not ideal for the varied modalities and fluid schemas of data handled by agents. Moreover, relational databases require manual effort and even downtime to manage provisioning, partitioning, and sharding.
+
+**Pure vector databases** tend to be less effective for transactional operations, real-time updates, and distributed workloads. Today's popular pure vector databases typically offer:
+- no guarantees on reads and writes
+- limited ingestion throughput
+- low availability (below 99.9%, or an annualized outage of almost 9 hours or more)
+- only one consistency level (eventual)
+- a resource-intensive in-memory vector index
+- limited options for multitenancy
+- limited security
+
+The next section dives deeper into what makes a robust AI agent memory system.
+
+## Memory can make or break AI agents
+
+Just as efficient database management systems are critical to software applications' performance, it is critical to provide LLM-powered agents with relevant and useful information to guide their inference. Robust memory systems enable organizing and storing the different kinds of information that agents can retrieve at inference time.
+
+Currently, LLM-powered applications often use [retrieval-augmented generation](vector-database.md#retrieval-augmented-generation) that uses basic semantic search or vector search to retrieve passages or documents. [Vector search](vector-database.md#vector-search) can be useful for finding general information, but it may not capture the specific context, structure, or relationships that are relevant for a particular task or domain.
+
+For example, if the task is to write code, vector search may not be able to retrieve the syntax tree, file system layout, code summaries, or API signatures that are important for generating coherent and correct code. Similarly, if the task is to work with tabular data, vector search may not be able to retrieve the schema, the foreign keys, the stored procedures, or the reports that are useful for querying or analyzing the data.
+
+Weaving together [a web of standalone in-memory, relational, and vector databases](#agent-memory-system) is not an optimal solution for the varied data types, either. This approach may work for prototypical agent systems; however, it adds complexity and performance bottlenecks that can hamper the performance of advanced autonomous agents.
+
+Therefore, a robust memory system should have the following characteristics:
+
+#### Multi-modal (Part I)
+
+AI agent memory systems should provide different collections that store metadata, relationships, entities, summaries, or other types of information that can be useful for different tasks and domains. These collections can be based on the structure and format of the data, such as documents, tables, or code, or they can be based on the content and meaning of the data, such as concepts, associations, or procedural steps.
+
+#### Operational
+
+Memory systems should provide different memory banks that store information that is relevant for the interaction with the user and the environment. Such information may include chat history, user preferences, sensory data, decisions made, facts learned, or other operational data that are updated with high frequency and at high volumes. These memory banks can help the agents remember short-term and long-term information, avoid repeating or contradicting themselves, and maintain task coherence. These requirements must hold true even if the agents perform a multitude of unrelated tasks in succession. In advanced cases, agents may also wargame numerous branch plans that diverge or converge at different points.
+
+#### Sharable but also separable
+
+At the macro level, memory systems should enable multiple AI agents to collaborate on a problem or process different aspects of the problem by providing shared memory that is accessible to all the agents. Shared memory can facilitate the exchange of information and the coordination of actions among the agents. At the same time, the memory system must allow agents to preserve their own persona and characteristics, such as their unique collections of prompts and memories.
+
+#### Multi-modal (Part II)
+
+Not only are memory systems critical to AI agents; they are also important for the humans who develop, maintain, and use these agents. For example, humans may need to supervise agents' planning and execution workflows in near real time. While supervising, humans may interject with guidance or make in-line edits of agents' dialogues or monologues. Humans may also need to audit the reasoning and actions of agents to verify the validity of the final output. Human-agent interactions are likely in natural or programming languages, while agents "think," "learn," and "remember" through embeddings. This difference in data modality poses another requirement on memory systems' consistency across data modalities.
+
+## Infrastructure for a robust memory system
+
+The above characteristics require AI agent memory systems to be highly scalable and swift. Painstakingly weaving together [a plethora of disparate in-memory, relational, and vector databases](#agent-memory-system) may work for early-stage AI-enabled applications; however, this approach adds complexity and performance bottlenecks that can hamper the performance of advanced autonomous agents.
+
+In place of all the standalone databases, Azure Cosmos DB can serve as a unified solution for AI agent memory systems. Its robustness successfully [enabled OpenAI's ChatGPT service](https://www.youtube.com/watch?v=6IIUtEFKJec&t) to scale dynamically with high reliability and low maintenance. Powered by an atom-record-sequence engine, it is the world's first globally distributed [NoSQL](distributed-nosql.md), [relational](distributed-relational.md), and [vector database](vector-database.md) service that offers a serverless mode. AI agents built on top of Azure Cosmos DB enjoy speed, scale, and simplicity.
+
+#### Speed
+
+Azure Cosmos DB provides single-digit millisecond latency, making it highly suitable for processes requiring rapid data access and management, including caching (traditional and semantic), transactions, and operational workloads. This low latency is crucial for AI agents that need to perform complex reasoning, make real-time decisions, and provide immediate responses. Moreover, its [use of state-of-the-art DiskANN algorithm](nosql/vector-search.md#enroll-in-the-vector-search-preview-feature) provides accurate and fast vector search with 95% less memory consumption.
+
+#### Scale
+
+Engineered for global distribution and horizontal scalability, and offering support for multi-region I/O and multitenancy, this service ensures that memory systems can expand seamlessly and keep up with rapidly growing agents and associated data. Its SLA-backed 99.999% availability guarantee (less than 5 minutes of downtime per year, compared with 9 hours or more for pure vector database services) provides a solid foundation for mission-critical workloads. At the same time, its various service models like [Reserved Capacity](reserved-capacity.md) or Serverless drastically lower financial costs.
+
+#### Simplicity
+
+This service simplifies data management and architecture by integrating multiple database functionalities into a single, cohesive platform.
+
+Its integrated vector database capabilities can store, index, and query embeddings alongside the corresponding data in natural or programming languages, enabling greater data consistency, scale, and performance.
+
+Its flexibility easily supports the varied modalities and fluid schemas of the metadata, relationships, entities, summaries, chat history, user preferences, sensory data, decisions, facts learned, or other operational data involved in agent workflows. The database automatically indexes all data without requiring schema or index management, allowing AI agents to perform complex queries quickly and efficiently.
+
+Lastly, its fully managed service eliminates the overhead of database administration, including tasks such as scaling, patching, and backups. Thus, developers can focus on building and optimizing AI agents without worrying about the underlying data infrastructure.
+
+#### Advanced features
+
+Azure Cosmos DB incorporates advanced features such as change feed, which allows tracking and responding to changes in data in real-time. This capability is useful for AI agents that need to react to new information promptly.
+
+Additionally, the built-in support for multi-master writes enables high availability and resilience, ensuring continuous operation of AI agents even in the face of regional failures.
+
+The five available [consistency levels](consistency-levels.md) (from strong to eventual) can also cater to various distributed workloads depending on the scenario requirements.
+
+> [!TIP]
+> You may choose from two Azure Cosmos DB APIs to build your AI agent memory system: Azure Cosmos DB for NoSQL, and vCore-based Azure Cosmos DB for MongoDB. The former provides 99.999% availability and [three vector search algorithms](nosql/vector-search.md): IVF, HNSW, and the state-of-the-art DiskANN. The latter provides 99.995% availability and [two vector search algorithms](mongodb/vcore/vector-search.md): IVF and HNSW.
+
+> [!div class="nextstepaction"]
+> [Use the Azure Cosmos DB lifetime free tier](free-tier.md)
+
+## Implementation sample
+
+This section explores the implementation of an autonomous agent to process traveler inquiries and bookings in a cruise line travel application.
+
+Chatbots have been a long-standing concept, but AI agents are advancing beyond basic human conversation to carry out tasks based on natural language that traditionally required coded logic. This AI travel agent uses the LangChain Agent framework for agent planning, tool usage, and perception. Its [unified memory system](#memory-can-make-or-break-ai-agents) uses the [vector database](vector-database.md) and document store capabilities of Azure Cosmos DB to address traveler inquiries and facilitate trip bookings, ensuring [speed, scale, and simplicity](#infrastructure-for-a-robust-memory-system). It operates within a Python FastAPI backend and supports user interactions through a React JS user interface.
+
+### Prerequisites
+
+- If you don't have an Azure subscription, you may [try Azure Cosmos DB free](try-free.md) for 30 days without creating an Azure account; no credit card is required, and no commitment follows when the trial period ends.
+- Set up an account for the OpenAI API or Azure OpenAI Service.
+- Create a vCore cluster in Azure Cosmos DB for MongoDB by following this [QuickStart](mongodb/vcore/quickstart-portal.md).
+- An IDE for development, such as VS Code.
+- Python 3.11.4 installed on your development environment.
+
+### Download the project
+
+All of the code and sample datasets are available on [GitHub](https://github.com/jonathanscholtes/Travel-AI-Agent-React-FastAPI-and-Cosmos-DB-Vector-Store). In this repository, you can find the following folders:
+
+- **loader**: This folder contains Python code for loading sample documents and vector embeddings in Azure Cosmos DB.
+- **api**: This folder contains the Python FastAPI project that hosts the travel AI agent.
+- **web**: This folder contains the web interface built with React JS.
+
+### Load travel documents into Azure Cosmos DB
+
+The GitHub repository contains a Python project located in the **loader** directory intended for loading the sample travel documents into Azure Cosmos DB. This section sets up the project to load the documents.
+
+### Set up the environment for loader
+
+Set up your Python virtual environment in the **loader** directory by running the following:
+```bash
+ python -m venv venv
+```
+
+Activate your environment and install dependencies in the **loader** directory:
+```bash
+ venv\Scripts\activate
+ python -m pip install -r requirements.txt
+```
+
+Create a file named **.env** in the **loader** directory to store the following environment variables.
+```bash
+ OPENAI_API_KEY="**Your Open AI Key**"
+ MONGO_CONNECTION_STRING="mongodb+srv:**your connection string from Azure Cosmos DB**"
+```
+
+### Load documents and vectors
+
+The Python file **main.py** serves as the central entry point for loading data into Azure Cosmos DB. This code processes the sample travel data from the GitHub repository, including information about ships and destinations. Additionally, it generates travel itinerary packages for each ship and destination, allowing travelers to book them using the AI agent. The `CosmosDBLoader` class is responsible for creating collections, vector embeddings, and indexes in the Azure Cosmos DB instance; a hedged sketch of its vector-loading step follows the listing below.
+
+*main.py*
+```python
+from cosmosdbloader import CosmosDBLoader
+from itinerarybuilder import ItineraryBuilder
+import json
+
+cosmosdb_loader = CosmosDBLoader(DB_Name='travel')
+
+#read in ship data
+with open('documents/ships.json') as file:
+ ship_json = json.load(file)
+
+#read in destination data
+with open('documents/destinations.json') as file:
+ destinations_json = json.load(file)
+
+builder = ItineraryBuilder(ship_json['ships'],destinations_json['destinations'])
+
+# Create five itinerary packages
+itinerary = builder.build(5)
+
+# Save itinerary packages to Cosmos DB
+cosmosdb_loader.load_data(itinerary,'itinerary')
+
+# Save destinations to Cosmos DB
+cosmosdb_loader.load_data(destinations_json['destinations'],'destinations')
+
+# Save ships to Cosmos DB, create vector store
+collection = cosmosdb_loader.load_vectors(ship_json['ships'],'ships')
+
+# Add text search index to ship name
+collection.create_index([('name', 'text')])
+```
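+
+As a rough illustration of the vector-loading step, the following hedged sketch shows what a loader like `CosmosDBLoader.load_vectors` might do, assuming `pymongo`, `langchain_openai`, a `description` field on each document, and the `cosmosSearch` index syntax of vCore-based Azure Cosmos DB for MongoDB. The repository's actual implementation may differ.
+
+```python
+import os
+from pymongo import MongoClient
+from langchain_openai import OpenAIEmbeddings
+
+client = MongoClient(os.environ["MONGO_CONNECTION_STRING"])
+collection = client["travel"]["ships"]
+embeddings = OpenAIEmbeddings()  # reads OPENAI_API_KEY from the environment
+
+def load_vectors(docs: list[dict]) -> None:
+    # Embed each document's description and store the vector alongside the data.
+    for doc in docs:
+        doc["contentVector"] = embeddings.embed_query(doc["description"])
+    collection.insert_many(docs)
+    # Create an IVF vector index using the vCore cosmosSearch syntax.
+    client["travel"].command({
+        "createIndexes": "ships",
+        "indexes": [{
+            "name": "VectorSearchIndex",
+            "key": {"contentVector": "cosmosSearch"},
+            "cosmosSearchOptions": {
+                "kind": "vector-ivf",
+                "numLists": 1,
+                "similarity": "COS",
+                "dimensions": 1536,
+            },
+        }],
+    })
+```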
+
+Load the documents and vectors and create the indexes by executing the following command from the **loader** directory:
+```bash
+ python main.py
+```
+
+Output:
+
+```output
+--build itinerary--
+--load itinerary--
+--load destinations--
+--load vectors ships--
+```
+
+### Build travel AI agent with Python FastAPI
+
+The AI travel agent is hosted in a backend API using Python FastAPI, facilitating integration with the frontend user interface. The API project processes agent requests by [grounding](https://techcommunity.microsoft.com/t5/fasttrack-for-azure/grounding-llms/ba-p/3843857) the LLM prompts against the data layer, specifically the vectors and documents in Azure Cosmos DB. Furthermore, the agent makes use of various tools, particularly the Python functions provided at the API service layer. This article focuses on the code necessary for the AI agent within the API project.
+
+The API project in the GitHub repository is structured as follows:
+
+- **Model**: Data modeling components using Pydantic models. (A brief sketch follows this list.)
+- **Web**: Web layer components responsible for routing requests and managing communication.
+- **Service**: Service layer components responsible for primary business logic and interaction with the data layer; the LangChain agent and agent tools.
+- **Data**: Data layer components responsible for interacting with Azure Cosmos DB for MongoDB document storage and vector search.
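+
+As a brief sketch of the model layer, the following hypothetical Pydantic model mirrors the ```PromptResponse(text=..., ResponseSeconds=...)``` usage shown later in **TravelAgent.py**; the exact definition in the repository may differ.
+
+```python
+# model/prompt.py - hypothetical sketch based on its usage in service/TravelAgent.py.
+from pydantic import BaseModel
+
+class PromptResponse(BaseModel):
+    text: str               # agent output returned to the web client
+    ResponseSeconds: float  # elapsed time for the agent invocation
+```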
+
+### Set up the environment for the API
+
+Python 3.11.4 was used to develop and test the API.
+
+Set up your Python virtual environment in the **api** directory:
+```bash
+ python -m venv venv
+```
+
+Activate your environment and install dependencies using the requirements file in the **api** directory:
+```bash
+ venv\Scripts\activate
+ python -m pip install -r requirements.txt
+```
+
+Create a file named **.env** in the **api** directory to store your environment variables:
+```env
+ OPENAI_API_KEY="**Your Open AI Key**"
+ MONGO_CONNECTION_STRING="mongodb+srv:**your connection string from Azure Cosmos DB**"
+```
+
+With the environment configured and variables set, start the FastAPI server by running the following command from the **api** directory:
+```bash
+ python app.py
+```
+
+The FastAPI server launches on the loopback address (127.0.0.1) on port 8000 by default. You can access the Swagger documentation at http://127.0.0.1:8000/docs
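+
+If you prefer to exercise the API outside the Swagger page, a short Python client can drive the two endpoints used in the following sections. This is an illustrative snippet and assumes the ```requests``` package is installed.
+
+```python
+# Illustrative client for the local FastAPI service.
+import requests
+
+API = "http://127.0.0.1:8000"
+
+# Obtain a session ID, then chat with the agent using it.
+session_id = requests.get(f"{API}/session/").json()["session_id"]
+reply = requests.post(
+    f"{API}/agent/agent_chat",
+    json={"input": "I want to take a relaxing vacation.", "session_id": session_id},
+).json()
+print(reply["text"])
+```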
+
+### Use a session for the AI agent memory
+
+The travel agent needs to be able to reference previously provided information within the ongoing conversation. This ability is commonly known as "memory" in the context of LLMs, and shouldn't be confused with the concept of computer memory (like volatile, non-volatile, and persistent memory).
+
+To achieve this, we use the chat message history, which is stored securely in our Azure Cosmos DB instance. Each chat session's history is stored with a session ID so that only messages from the current conversation session are accessible. This need is why our API exposes a 'Get Session' method: a placeholder for managing web sessions that illustrates the use of chat message history.
+
+Select **Try it out** for the ```/session/``` endpoint. The response contains a generated session ID:
+
+```json
+{
+ "session_id": "0505a645526f4d68a3603ef01efaab19"
+}
+```
+
+For the AI Agent, we only need to simulate a session. Thus, the stubbed-out method merely returns a generated session ID for tracking message history. In a practical implementation, this session would be stored in Azure Cosmos DB and potentially in React JS localStorage.
+
+*web/session.py*
+```python
+import uuid
+from fastapi import APIRouter
+router = APIRouter()  # mounted under /session in app.py (assumed)
+@router.get("/")
+def get_session():
+    return {'session_id': str(uuid.uuid4().hex)}
+```
+
+### Start a conversation with the AI travel agent
+
+Use the session ID from the previous step to start a new dialogue with the AI agent and validate its functionality. Submit the following phrase: "I want to take a relaxing vacation."
+
+Select **Try it out** for ```/agent/agent_chat```, using the following example parameter:
+
+```json
+{
+ "input": "I want to take a relaxing vacation.",
+ "session_id": "0505a645526f4d68a3603ef01efaab19"
+}
+```
+
+The initial execution results in a recommendation for the Tranquil Breeze Cruise and the Fantasy Seas Adventure Cruise, because the vector search ranks them as the most 'relaxing' cruises available. These documents have the highest score for ```similarity_search_with_score```, which is called in the data layer of our API, ```data.mongodb.travel.similarity_search()```.
+
+The similarity search scores are displayed as output from the API for debugging purposes.
+
+Output when calling ```data.mongodb.travel.similarity_search()```:
+
+```output
+0.8394561085977978
+0.8086545112328692
+2
+```
+
+> [!TIP]
+> If documents aren't returned from the vector search, modify the ```similarity_search_with_score``` limit or the score filter value in ```data.mongodb.travel.similarity_search()``` as needed (for example, ```[doc for doc, score in docs if score >= .78]```).
+
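+The data layer's ```similarity_search()``` isn't shown in full in this article. As a hypothetical reconstruction based on the calls quoted above, and assuming the ```MongoDBAtlasVectorSearch``` class from the ```langchain-mongodb``` package, it might look roughly like the following sketch; the repository's actual implementation may differ.
+
+```python
+# data/mongodb/travel.py - illustrative sketch only, not the repository's exact code.
+from os import environ
+from langchain_mongodb import MongoDBAtlasVectorSearch
+from langchain_openai import OpenAIEmbeddings
+from model.travel import Ship
+
+vector_store = MongoDBAtlasVectorSearch.from_connection_string(
+    environ.get("MONGO_CONNECTION_STRING"),
+    "travel.ships",                  # database.collection namespace
+    OpenAIEmbeddings(),
+    index_name="VectorSearchIndex",  # assumed index name
+)
+
+def similarity_search(query: str) -> list[Ship]:
+    docs = vector_store.similarity_search_with_score(query)
+    for _, score in docs:
+        print(score)  # the scores shown in the API output above
+    # Keep only strong matches; assumes ship fields live in document metadata.
+    return [Ship(**doc.metadata) for doc, score in docs if score >= .78]
+```
+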
+Calling the 'agent_chat' for the first time creates a new collection named 'history' in Azure Cosmos DB to store the conversation by session. This call enables the agent to access the stored chat message history as needed. Subsequent executions of 'agent_chat' with the same parameters produce varying results as it draws from memory.
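+
+To verify what the agent stored, you can inspect the collection directly. The following snippet is illustrative; the field names follow the defaults of ```MongoDBChatMessageHistory```, which is an assumption about the stored document shape.
+
+```python
+# Illustrative: inspect the conversation memory written by the agent.
+from os import environ
+from pymongo import MongoClient
+
+history = MongoClient(environ.get("MONGO_CONNECTION_STRING"))["travel"]["history"]
+for message in history.find({"SessionId": "0505a645526f4d68a3603ef01efaab19"}):
+    print(message["History"])
+```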
+
+### Walkthrough of AI agent
+
+When the AI agent is integrated into the API, the web layer components are responsible for initiating all requests. The service layer follows, and finally the data components. In our specific case, we use MongoDB data search, which connects to Azure Cosmos DB. The layers facilitate the exchange of model components, with the AI agent and AI agent tool code residing in the service layer. This approach enables the seamless interchange of data sources and extends the capabilities of the AI agent with additional, more intricate functionalities or 'tools'.
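+
+For illustration, the web-layer route that forwards chat requests to the service layer might look like the following hypothetical sketch. The route path and request shape match the ```/agent/agent_chat``` calls shown earlier; the repository's actual file may differ.
+
+```python
+# web/agent.py - hypothetical sketch of the web-layer chat route.
+from fastapi import APIRouter
+from pydantic import BaseModel
+from model.prompt import PromptResponse
+from service import TravelAgent
+
+router = APIRouter()  # assumed to be mounted under /agent in app.py
+
+class AgentRequest(BaseModel):
+    input: str
+    session_id: str
+
+@router.post("/agent_chat")
+def agent_chat(request: AgentRequest) -> PromptResponse:
+    # Delegate to the service layer, which holds the LangChain agent.
+    return TravelAgent.agent_chat(request.input, request.session_id)
+```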
+
+#### Service layer
+
+The service layer holds the core business logic. In this scenario, it contains the LangChain agent code, integrating user prompts with Azure Cosmos DB data, conversation memory, and agent functions for our AI agent.
+
+The service layer employs a singleton pattern module for handling agent-related initializations in the **init.py** file.
+
+*service/init.py*
+```python
+from dotenv import load_dotenv
+from os import environ
+from langchain.globals import set_llm_cache
+from langchain_openai import ChatOpenAI
+from langchain_mongodb.chat_message_histories import MongoDBChatMessageHistory
+from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
+from langchain_core.runnables.history import RunnableWithMessageHistory
+from langchain.agents import AgentExecutor, create_openai_tools_agent
+from service import TravelAgentTools as agent_tools
+
+load_dotenv(override=True)
+
+chat: ChatOpenAI | None = None
+agent_with_chat_history: RunnableWithMessageHistory | None = None
+
+def LLM_init():
+ global chat,agent_with_chat_history
+ chat = ChatOpenAI(model_name="gpt-3.5-turbo-16k",temperature=0)
+ tools = [agent_tools.vacation_lookup, agent_tools.itinerary_lookup, agent_tools.book_cruise ]
+
+ prompt = ChatPromptTemplate.from_messages(
+ [
+ (
+ "system",
+ "You are a helpful and friendly travel assistant for a cruise company. Answer travel questions to the best of your ability providing only relevant information. In order to book a cruise you will need to capture the person's name.",
+ ),
+ MessagesPlaceholder(variable_name="chat_history"),
+ ("user", "Answer should be embedded in html tags. {input}"),
+ MessagesPlaceholder(variable_name="agent_scratchpad"),
+ ]
+ )
+
+ #Answer should be embedded in html tags. Only answer questions related to cruise travel, If you can not answer respond with \"I am here to assist with your travel questions.\".
+
+ agent = create_openai_tools_agent(chat, tools, prompt)
+ agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
+
+ agent_with_chat_history = RunnableWithMessageHistory(
+ agent_executor,
+ lambda session_id: MongoDBChatMessageHistory( database_name="travel",
+ collection_name="history",
+ connection_string=environ.get("MONGO_CONNECTION_STRING"),
+ session_id=session_id),
+ input_messages_key="input",
+ history_messages_key="chat_history",
+)
+
+LLM_init()
+```
+
+The **init.py** file begins by loading environment variables from a **.env** file using the ```load_dotenv(override=True)``` method. Then, a global variable named ```agent_with_chat_history``` is declared for the agent, intended for use by our **TravelAgent.py**. The ```LLM_init()``` method is invoked during module initialization to configure our AI agent for conversation via the API web layer. The OpenAI chat object is instantiated using the GPT-3.5 model, with specific parameters such as model name and temperature. The chat object, tools list, and prompt template are combined to generate an ```AgentExecutor```, which operates as our AI travel agent. Lastly, the agent with history, ```agent_with_chat_history```, is established using ```RunnableWithMessageHistory``` with chat history (```MongoDBChatMessageHistory```), enabling it to maintain a complete conversation history via Azure Cosmos DB.
+
+#### Prompt
+
+The LLM prompt initially began with the simple statement "You are a helpful and friendly travel assistant for a cruise company." However, through testing, it was determined that more consistent results could be obtained by including the instruction "Answer travel questions to the best of your ability, providing only relevant information. To book a cruise, capturing the person's name is essential." The results are presented in HTML format to enhance the visual appeal within the web interface.
+
+#### Agent tools
+[Tools](#what-are-ai-agents) are interfaces that an agent can use to interact with the world, often done through function calling.
+
+When creating an agent, you must furnish it with a set of tools that it can use. The ```@tool``` decorator offers the most straightforward approach to defining a custom tool. By default, the decorator uses the function name as the tool name, although this can be replaced by providing a string as the first argument. The decorator also uses the function's docstring as the tool's description, so a docstring must be provided.
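+
+For example, a custom tool name can be supplied like this (a generic illustration, not code from the repository):
+
+```python
+from langchain_core.tools import tool
+
+@tool("ship-amenity-lookup")  # overrides the default (function) name
+def lookup(ship_name: str) -> str:
+    """find amenities for a cruise ship by name"""
+    return f"Amenities for {ship_name}: pool, spa, theater"
+```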
+
+*service/TravelAgentTools.py*
+```python
+from langchain_core.tools import tool
+from data.mongodb import travel
+from model.travel import Ship
+
+@tool
+def vacation_lookup(input: str) -> str:
+ """find information on vacations and trips"""
+ ships: list[Ship] = travel.similarity_search(input)
+ content = ""
+
+ for ship in ships:
+ content += f" Cruise ship {ship.name} description: {ship.description} with amenities {'/n-'.join(ship.amenities)} "
+
+ return content
+
+@tool
+def itinerary_lookup(ship_name:str) -> str:
+ """find ship itinerary, cruise packages and destinations by ship name"""
+ it = travel.itnerary_search(ship_name)
+ results = ""
+
+ for i in it:
+ results += f" Cruise Package {i.Name} room prices: {'/n-'.join(i.Rooms)} schedule: {'/n-'.join(i.Schedule)}"
+
+ return results
+
+@tool
+def book_cruise(package_name: str, passenger_name: str, room: str) -> str:
+ """book cruise using package name and passenger name and room """
+ print(f"Package: {package_name} passenger: {passenger_name} room: {room}")
+
+ # LLM defaults empty name to John Doe
+ if passenger_name == "John Doe":
+ return "In order to book a cruise I need to know your name."
+ else:
+ if room == '':
+ return "which room would you like to book"
+ return "Cruise has been booked, ref number is 343242"
+```
+
+In the **TravelAgentTools.py** file, three tools are defined. The first tool, ```vacation_lookup```, conducts a vector search against Azure Cosmos DB, using ```similarity_search``` to retrieve relevant travel-related material. The second tool, ```itinerary_lookup```, retrieves cruise package details and schedules for a specified cruise ship. Lastly, ```book_cruise``` books a cruise package for a passenger. Specific instructions ("In order to book a cruise I need to know your name.") might be necessary to ensure that the passenger's name and room number are captured for the booking, even though such instructions are already included in the LLM prompt.
+
+#### AI agent
+
+The fundamental concept underlying agents is to utilize a language model for selecting a sequence of actions to execute.
+
+*service/TravelAgent.py*
+```python
+from .init import agent_with_chat_history
+from model.prompt import PromptResponse
+import time
+from dotenv import load_dotenv
+
+load_dotenv(override=True)
+
+def agent_chat(input: str, session_id: str) -> PromptResponse:
+
+ start_time = time.time()
+
+ results=agent_with_chat_history.invoke(
+ {"input": input},
+ config={"configurable": {"session_id": session_id}},
+ )
+
+ return PromptResponse(text=results["output"],ResponseSeconds=(time.time() - start_time))
+```
+
+The **TravelAgent.py** file is straightforward, as ```agent_with_chat_history```, and its dependencies (tools, prompt, and LLM) are initialized and configured in the **init.py** file. In this file, the agent is called using the input received from the user, along with the session ID for conversation memory. Afterwards, ```PromptResponse``` (model/prompt) is returned with the agent's output and response time.
+
+### Integrate AI agent with React JS user interface
+
+With the successful loading of the data and accessibility of our AI Agent through our API, we can now complete the solution by establishing a web user interface using React JS for our travel website. By harnessing the capabilities of React JS, we can illustrate the seamless integration of our AI agent into a travel site, enhancing the user experience with a conversational travel assistant for inquiries and bookings.
+
+#### Set up the environment for React JS
+
+Install Node.js and the dependencies before testing out the React interface.
+
+Run the following command from the **web** directory to perform a clean install of project dependencies. This step might take some time.
+```bash
+ npm ci
+```
+
+Next, create a file named **.env** within the **web** directory to store environment variables. Then, add the following entry to the newly created **.env** file:
+
+```env
+REACT_APP_API_HOST=http://127.0.0.1:8000
+```
+
+Now, run the following command from the **web** directory to start the React web user interface:
+```bash
+ npm start
+```
+
+Running the previous command launches the React JS web application.
+
+#### Walkthrough of React JS Web interface
+
+The web project of the GitHub repository is a straightforward application to facilitate user interaction with our AI agent. The primary components required to converse with the agent are ```TravelAgent.js``` and ```ChatLayout.js```. The **Main.js** file serves as the central module or user landing page.
+
+#### Main
+
+The Main component serves as the central manager of the application, acting as the designated entry point for routing. Within the render function, it produces the JSX code that defines the main page layout. The layout includes placeholder elements such as logos and links, a section housing the travel agent component (described in the next section), and a footer with a sample disclaimer about the application's nature.
+
+*main.js*
+```javascript
+import React, { Component } from 'react'
+import { Stack, Link, Paper } from '@mui/material'
+import TravelAgent from './TripPlanning/TravelAgent'
+
+import './Main.css'
+
+class Main extends Component {
+ constructor() {
+ super()
+
+ }
+
+ render() {
+ return (
+ <div className="Main">
+ <div className="Main-Header">
+ <Stack direction="row" spacing={5}>
+ <img src="/mainlogo.png" alt="Logo" height={'120px'} />
+ <Link
+ href="#"
+ sx={{ color: 'white', fontWeight: 'bold', fontSize: 18 }}
+ underline="hover"
+ >
+ Ships
+ </Link>
+ <Link
+ href="#"
+ sx={{ color: 'white', fontWeight: 'bold', fontSize: 18 }}
+ underline="hover"
+ >
+ Destinations
+ </Link>
+ </Stack>
+ </div>
+ <div className="Main-Body">
+ <div className="Main-Content">
+ <Paper elevation={3} sx={{p:1}} >
+ <Stack
+ direction="row"
+ justifyContent="space-evenly"
+ alignItems="center"
+ spacing={2}
+ >
+
+ <Link href="#">
+ <img
+ src={require('./images/destinations.png')} width={'400px'} />
+ </Link>
+ <TravelAgent ></TravelAgent>
+ <Link href="#">
+ <img
+ src={require('./images/ships.png')} width={'400px'} />
+ </Link>
+
+ </Stack>
+ </Paper>
+ </div>
+ </div>
+ <div className="Main-Footer">
+ <b>Disclaimer: Sample Application</b>
+ <br />
+ Please note that this sample application is provided for demonstration
+ purposes only and should not be used in production environments
+ without proper validation and testing.
+ </div>
+ </div>
+ )
+ }
+}
+
+export default Main
+```
+
+#### Travel agent
+
+The Travel Agent component has a straightforward purpose: capturing user inputs and displaying responses. It plays a key role in managing the integration with the backend AI Agent, primarily by capturing sessions and forwarding user prompts to our FastAPI service. The resulting responses are stored in an array for display, facilitated by the Chat Layout component.
+
+*TripPlanning/TravelAgent.js*
+```javascript
+import React, { useState, useEffect } from 'react'
+import { Button, Box, Link, Stack, TextField } from '@mui/material'
+import SendIcon from '@mui/icons-material/Send'
+import { Dialog, DialogContent } from '@mui/material'
+import ChatLayout from './ChatLayout'
+import './TravelAgent.css'
+
+export default function TravelAgent() {
+ const [open, setOpen] = React.useState(false)
+ const [session, setSession] = useState('')
+ const [chatPrompt, setChatPrompt] = useState(
+ 'I want to take a relaxing vacation.',
+ )
+ const [message, setMessage] = useState([
+ {
+ message: 'Hello, how can I assist you today?',
+ direction: 'left',
+ bg: '#E7FAEC',
+ },
+ ])
+
+ const handlePrompt = (prompt) => {
+ setChatPrompt('')
+ setMessage((message) => [
+ ...message,
+ { message: prompt, direction: 'right', bg: '#E7F4FA' },
+ ])
+ console.log(session)
+ fetch(process.env.REACT_APP_API_HOST + '/agent/agent_chat', {
+ method: 'POST',
+ headers: {
+ 'Content-Type': 'application/json',
+ },
+ body: JSON.stringify({ input: prompt, session_id: session }),
+ })
+ .then((response) => response.json())
+ .then((res) => {
+ setMessage((message) => [
+ ...message,
+ { message: res.text, direction: 'left', bg: '#E7FAEC' },
+ ])
+ })
+ }
+
+ const handleSession = () => {
+ fetch(process.env.REACT_APP_API_HOST + '/session/')
+ .then((response) => response.json())
+ .then((res) => {
+ setSession(res.session_id)
+ })
+ }
+
+ const handleClickOpen = () => {
+ setOpen(true)
+ }
+
+ const handleClose = (value) => {
+ setOpen(false)
+ }
+
+ useEffect(() => {
+ if (session === '') handleSession()
+ }, [])
+
+ return (
+ <Box>
+ <Dialog onClose={handleClose} open={open} maxWidth="md" fullWidth="true">
+ <DialogContent>
+ <Stack>
+ <Box sx={{ height: '500px' }}>
+ <div className="AgentArea">
+ <ChatLayout messages={message} />
+ </div>
+ </Box>
+ <Stack direction="row" spacing={0}>
+ <TextField
+ sx={{ width: '80%' }}
+ variant="outlined"
+ label="Message"
+ helperText="Chat with AI Travel Agent"
+ defaultValue="I want to take a relaxing vacation."
+ value={chatPrompt}
+ onChange={(event) => setChatPrompt(event.target.value)}
+ ></TextField>
+ <Button
+ variant="contained"
+ endIcon={<SendIcon />}
+ sx={{ mb: 3, ml: 3, mt: 1 }}
+ onClick={(event) => handlePrompt(chatPrompt)}
+ >
+ Submit
+ </Button>
+ </Stack>
+ </Stack>
+ </DialogContent>
+ </Dialog>
+ <Link href="#" onClick={() => handleClickOpen()}>
+ <img src={require('.././images/planvoyage.png')} width={'400px'} />
+ </Link>
+ </Box>
+ )
+}
+```
+
+Click on "Effortlessly plan your voyage" to launch the travel assistant.
+
+#### Chat layout
+
+The Chat Layout component, as indicated by its name, oversees the arrangement of the chat. It systematically processes the chat messages and implements the designated formatting specified in the message JSON object.
+
+*TripPlanning/ChatLayout.py*
+```javascript
+import React from 'react'
+import { Box, Stack } from '@mui/material'
+import parse from 'html-react-parser'
+import './ChatLayout.css'
+
+export default function ChatLayout(messages) {
+ return (
+ <Stack direction="column" spacing="1">
+ {messages.messages.map((obj, i) => (
+ <div className="bubbleContainer" key={i}>
+ <Box
+ key={i}
+ className="bubble"
+ sx={{ float: obj.direction, fontSize: '10pt', background: obj.bg }}
+ >
+ <div>{parse(obj.message)}</div>
+ </Box>
+ </div>
+ ))}
+ </Stack>
+ )
+}
+```
+
+User prompts are on the right side and colored blue, while the travel AI agent responses are on the left side and colored green. The HTML-formatted responses are rendered in the conversation.
+
+When your AI agent is ready to go into production, you can use semantic caching to improve query performance by 80% and reduce LLM inference and API call costs. To implement semantic caching, see this blog post on [semantic caching](https://stochasticcoder.com/2024/03/22/improve-llm-performance-using-semantic-cache-with-cosmos-db/).
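+
+As a hedged illustration (not code from the repository), semantic caching could be wired into ```service/init.py``` through the ```set_llm_cache``` import it already declares, for example with the ```MongoDBAtlasSemanticCache``` class from the ```langchain-mongodb``` package; the parameter values below are assumptions.
+
+```python
+# Hypothetical addition to service/init.py; names and values are assumptions.
+from os import environ
+from langchain.globals import set_llm_cache
+from langchain_mongodb.cache import MongoDBAtlasSemanticCache
+from langchain_openai import OpenAIEmbeddings
+
+set_llm_cache(MongoDBAtlasSemanticCache(
+    connection_string=environ.get("MONGO_CONNECTION_STRING"),
+    embedding=OpenAIEmbeddings(),
+    database_name="travel",
+    collection_name="semantic_cache",  # assumed cache collection
+    index_name="VectorSearchIndex",    # assumed vector index on the cache
+))
+```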
+
+> [!NOTE]
+> If you would like to contribute to this article, feel free to click on the pencil button on the top right corner of the article. If you have any specific questions or comments on this article, you may reach out to cosmosdbgenai@microsoft.com
+
+### Next steps
+
+[30-day Free Trial without Azure subscription](https://azure.microsoft.com/try/cosmosdb/)
+
+[90-day Free Trial and up to $6,000 in throughput credits with Azure AI Advantage](ai-advantage.md)
+
+> [!div class="nextstepaction"]
+> [Use the Azure Cosmos DB lifetime free tier](free-tier.md)
cosmos-db Manage Data Dotnet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/manage-data-dotnet-core.md
Azure Cosmos DB is Microsoft's globally distributed multi-model database service
## Prerequisites In addition, you need: * Latest [!INCLUDE [cosmos-db-visual-studio](../includes/cosmos-db-visual-studio.md)]
cosmos-db Manage Data Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/manage-data-dotnet.md
Azure Cosmos DB is Microsoft's globally distributed multi-model database service
## Prerequisites In addition, you need: * Latest [!INCLUDE [cosmos-db-visual-studio](../includes/cosmos-db-visual-studio.md)]
cosmos-db Manage Data Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/manage-data-nodejs.md
In this quickstart, you create an Azure Cosmos DB for Apache Cassandra account,
## Prerequisites In addition, you need:
cosmos-db Postgres Migrate Cosmos Db Kafka https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/postgres-migrate-cosmos-db-kafka.md
Data in PostgreSQL table will be pushed to Apache Kafka using the [Debezium Post
### Set up PostgreSQL database if you haven't already. This could be an existing on-premises database or you could [download and install one](https://www.postgresql.org/download/) on your local machine. It's also possible to use a [Docker container](https://hub.docker.com/_/postgres). To start a container:
cosmos-db Consistency Levels https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/consistency-levels.md
The following graphic illustrates the strong consistency with musical notes. Aft
:::image type="content" source="media/consistency-levels/strong-consistency.gif" alt-text="Animation of strong consistency level using musical notes that are always synced.":::
+#### Dynamic quorum
+
+Under normal circumstances, for an account with strong consistency, a write is considered committed when all regions acknowledge that the record has been replicated to them. However, for accounts with 3 or more regions (including the write region), the system can "downshift" the quorum of regions to a global majority when some regions are unresponsive or responding slowly. At that point, unresponsive regions are taken out of the quorum set of regions to preserve strong consistency. They're only added back once they're consistent with the other regions and performing as expected. The number of regions that can be taken out of the quorum set depends on the total number of regions. For example, in a 3 or 4 region account, the majority is 2 or 3 regions respectively, so only 1 region can be removed in either case. For a 5 region account, the majority is 3, so up to 2 unresponsive regions can be removed. This capability is known as "dynamic quorum" and can improve both write availability and replication latency for accounts with 3 or more regions.
+
+> [!NOTE]
+> When regions are removed from the quorum set as part of dynamic quorum, those regions are no longer able to serve reads until re-added into the quorum.
+ ### Bounded staleness consistency For single-region write accounts with two or more regions, data is replicated from the primary region to all secondary (read-only) regions. For multi-region write accounts with two or more regions, data is replicated from the region it was originally written in to all other writable regions. In both scenarios, while not common, there may occasionally be a replication lag from one region to another.
cosmos-db Quickstart Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gremlin/quickstart-console.md
In this quickstart, you use the Gremlin console to connect to a newly created Az
- Don't have Docker installed? Try this quickstart in [GitHub Codespaces](https://codespaces.new/github/codespaces-blank?quickstart=1). - [Azure Command-Line Interface (CLI)](/cli/azure/) ## Create an API for Gremlin account and relevant resources
cosmos-db Quickstart Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gremlin/quickstart-dotnet.md
In this quickstart, you use the `Gremlin.Net` library to connect to a newly crea
- Don't have .NET installed? Try this quickstart in [GitHub Codespaces](https://codespaces.new/github/codespaces-blank?quickstart=1). - [Azure Command-Line Interface (CLI)](/cli/azure/) ## Setting up
cosmos-db Quickstart Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gremlin/quickstart-nodejs.md
In this quickstart, you use the `gremlin` library to connect to a newly created
- Don't have Node.js installed? Try this quickstart in [GitHub Codespaces](https://codespaces.new/github/codespaces-blank?quickstart=1).codespaces.new/github/codespaces-blank?quickstart=1) - [Azure Command-Line Interface (CLI)](/cli/azure/) ## Setting up
cosmos-db Quickstart Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gremlin/quickstart-python.md
In this quickstart, you use the `gremlinpython` library to connect to a newly cr
- Don't have Python installed? Try this quickstart in [GitHub Codespaces](https://codespaces.new/github/codespaces-blank?quickstart=1). - [Azure Command-Line Interface (CLI)](/cli/azure/) ## Setting up
cosmos-db How To Configure Vnet Service Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-configure-vnet-service-endpoint.md
To ensure that you have access to Azure Cosmos DB metrics from the portal, you n
## <a id="configure-using-powershell"></a>Configure a service endpoint by using Azure PowerShell Use the following steps to configure a service endpoint to an Azure Cosmos DB account by using Azure PowerShell:
cosmos-db How To Setup Cross Tenant Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-setup-cross-tenant-customer-managed-keys.md
Data stored in your Azure Cosmos DB account is automatically and seamlessly encr
This article walks through how to configure encryption with customer-managed keys at the time that you create an Azure Cosmos DB account. In this example cross-tenant scenario, the Azure Cosmos DB account resides in a tenant managed by an Independent Software Vendor (ISV) referred to as the service provider. The key used for encryption of the Azure Cosmos DB account resides in a key vault in a different tenant that is managed by the customer. ## Create a new Azure Cosmos DB account encrypted with a key from a different tenant
cosmos-db Connect Using Mongoose https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/connect-using-mongoose.md
Azure Cosmos DB is Microsoft's globally distributed multi-model database service
## Prerequisites [!INCLUDE [cosmos-db-emulator-docdb-api](../includes/cosmos-db-emulator-docdb-api.md)]
cosmos-db Tutorial Develop Nodejs Part 4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/tutorial-develop-nodejs-part-4.md
Before starting this part of the tutorial, ensure you've completed the steps in
In this tutorial section, you can either use the Azure Cloud Shell (in your internet browser) or [the Azure CLI](/cli/azure/install-azure-cli) installed locally. [!INCLUDE [Log in to Azure](../includes/login-to-azure.md)] > [!TIP] > This tutorial walks you through the steps to build the application step-by-step. If you want to download the finished project, you can get the completed application from the [angular-cosmosdb repo](https://github.com/Azure-Samples/angular-cosmosdb) on GitHub.
cosmos-db How To Create Wildcard Indexes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/how-to-create-wildcard-indexes.md
+
+ Title: Wildcard indexes in Azure Cosmos DB for MongoDB vCore
+
+description: Sample to create wildcard indexes in Azure Cosmos DB for MongoDB vCore.
+Last updated : 6/25/2024
+# Create wildcard indexes in Azure Cosmos DB for MongoDB vCore
+
+While most workloads have a predictable set of fields used in query filters and predicates, ad hoc query patterns may use filters on any field in the JSON document structure.
+
+Wildcard indexing can be helpful in the following scenarios:
+- Queries that filter on any field in the document, where indexing all fields through a single command is easier than indexing each field individually.
+- Queries that filter on most fields in the document, where indexing all but a few fields through a single command is easier than indexing most fields individually.
+
+This sample describes a simple workaround to minimize the effort needed to create individual indexes until wildcard indexing is generally available in Azure Cosmos DB for MongoDB vCore.
+
+## Solution
+
+Consider the JSON document below:
+```json
+{
+  "firstName": "Steve",
+  "lastName": "Smith",
+  "companyName": "Microsoft",
+  "division": "Azure",
+  "subDivision": "Data & AI",
+  "timeInOrgInYears": 7,
+  "roles": [
+    {
+      "teamName": "Windows",
+      "teamSubName": "Operating Systems",
+      "timeInTeamInYears": 3
+    },
+    {
+      "teamName": "Devices",
+      "teamSubName": "Surface",
+      "timeInTeamInYears": 2
+    },
+    {
+      "teamName": "Devices",
+      "teamSubName": "Surface",
+      "timeInTeamInYears": 2
+    }
+  ]
+}
+```
+
+The following indexes are created under the covers when wildcard indexing is used:
+- db.collection.createIndex({"firstName": 1})
+- db.collection.createIndex({"lastName": 1})
+- db.collection.createIndex({"companyName": 1})
+- db.collection.createIndex({"division": 1})
+- db.collection.createIndex({"subDivision": 1})
+- db.collection.createIndex({"timeInOrgInYears": 1})
+- db.collection.createIndex({"roles.teamName": 1})
+- db.collection.createIndex({"roles.teamSubName": 1})
+- db.collection.createIndex({"roles.timeInTeamInYears": 1})
+
+While this sample document only requires nine fields to be explicitly indexed, indexing fields individually becomes tedious and error-prone for larger documents with hundreds or thousands of fields.
+
+The jar file detailed in the rest of this document makes indexing fields in larger documents simpler. The jar takes a sample JSON document as input, parses the document and executes createIndex commands for each field without the need for user intervention.
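+
+The repository's indexer is written in Java; as a language-neutral illustration of the core idea, the following Python sketch walks a sample document, collects dotted field paths, and issues one ```createIndex``` per leaf field. Connection details and file names are placeholders.
+
+```python
+# Illustrative only: create a single-field index for every leaf field path
+# found in a sample document, as the jar does in Java.
+import json
+from pymongo import MongoClient
+
+def field_paths(value, prefix=""):
+    if isinstance(value, dict):
+        for key, child in value.items():
+            yield from field_paths(child, f"{prefix}{key}.")
+    elif isinstance(value, list):
+        for item in value:
+            yield from field_paths(item, prefix)
+    else:
+        yield prefix.rstrip(".")
+
+with open("sampleEmployee.json") as f:
+    sample = json.load(f)
+
+collection = MongoClient("<connection-string>")["cosmicworks"]["employee"]
+for path in sorted(set(field_paths(sample))):
+    collection.create_index([(path, 1)])  # index builds run server-side
+```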
+
+## Prerequisites
+
+### Java 21
+Install Java 21 on the machine that runs the indexer. For example, on Ubuntu or Debian, use the following commands:
+
+```bash
+# Install OpenJDK 21
+sudo apt update
+sudo apt install openjdk-21-jdk
+```
+
+## Sample jar to create individual indexes for all fields
+
+Clone the repository containing the Java sample to iterate through each field in the JSON document's structure and issue createIndex operations for each field in the document.
+
+```bash
+git clone https://github.com/Azure-Samples/cosmosdb-mongodb-vcore-wildcard-indexing.git
+```
+
+The cloned repository does not need to be built if there are no changes to be made to the solution. The built runnable jar named azure-cosmosdb-mongo-data-indexer-1.0-SNAPSHOT.jar is already included in the runnableJar/ folder. The jar can be executed by specifying the following required parameters:
+- Azure Cosmos DB for MongoDB vCore cluster connection string with the username and password used when the cluster was provisioned
+- The Azure Cosmos DB for MongoDB vCore database
+- The collection to be indexed
+- The location of the JSON file with the document structure for the collection. This document is parsed by the jar file to extract every field and issue individual createIndex operations.
+
+```bash
+java -jar azure-cosmosdb-mongo-data-indexer-1.0-SNAPSHOT.jar "mongodb+srv://<user>:<password>@abinav-test-benchmarking.global.mongocluster.cosmos.azure.com/?tls=true&authMechanism=SCRAM-SHA-256&retrywrites=false&maxIdleTimeMS=120000" cosmicworks employee sampleEmployee.json
+```
+
+## Track the status of a createIndex operation
+The jar file is designed to not wait on a response from each createIndex operation. The indexes are created asynchronously on the server and the progress of the index build operation on the cluster can be tracked.
+
+Consider this sample to track indexing progress on the 'cosmicworks' database.
+```javascript
+use cosmicworks;
+db.currentOp()
+```
+
+When a createIndex operation is in progress, the response looks like:
+```json
+{
+ "inprog": [
+ {
+ "shard": "defaultShard",
+ "active": true,
+ "type": "op",
+ "opid": "30000451493:1719209762286363",
+ "op_prefix": 30000451493,
+ "currentOpTime": "2024-06-24T06:16:02.000Z",
+ "secs_running": 0,
+ "command": { "aggregate": "" },
+ "op": "command",
+ "waitingForLock": false
+ },
+ {
+ "shard": "defaultShard",
+ "active": true,
+ "type": "op",
+ "opid": "30000451876:1719209638351743",
+ "op_prefix": 30000451876,
+ "currentOpTime": "2024-06-24T06:13:58.000Z",
+ "secs_running": 124,
+ "command": { "createIndexes": "" },
+ "op": "workerCommand",
+ "waitingForLock": false,
+ "progress": {},
+ "msg": ""
+ }
+ ],
+ "ok": 1
+}
+```
+
+## Related content
+
+Check out the full sample on [GitHub](https://github.com/Azure-Samples/cosmosdb-mongodb-vcore-wildcard-indexing).
+
+Check out [indexing best practices](how-to-create-indexes.md), which details best practices for indexing on Azure Cosmos DB for MongoDB vCore.
cosmos-db Monitor Resource Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/monitor-resource-logs.md
Title: Monitor data by using Azure Diagnostic settings
+ Title: Monitor data using diagnostic settings
description: Learn how to use Azure diagnostic settings to monitor the performance and availability of data stored in Azure Cosmos DB---++ Previously updated : 04/26/2023 Last updated : 06/27/2024
+#Customer Intent: As an operations user, I want to monitor metrics using Azure Monitor, so that I can use a Log Analytics workspace to perform complex analysis.
-# Monitor Azure Cosmos DB data by using diagnostic settings in Azure
+# Monitor Azure Cosmos DB data using Azure Monitor Log Analytics diagnostic settings
[!INCLUDE[NoSQL, MongoDB, Cassandra, Gremlin, Table](includes/appliesto-nosql-mongodb-cassandra-gremlin-table.md)]
-Diagnostic settings in Azure are used to collect resource logs. Resources emit Azure resource Logs and provide rich, frequent data about the operation of that resource. These logs are captured per request and they're also referred to as "data plane logs." Some examples of the data plane operations include delete, insert, and readFeed. The content of these logs varies by resource type.
+Diagnostic settings in Azure are used to collect resource logs. Resources emit Azure resource logs and provide rich, frequent data about the operation of that resource. These logs are captured per request and are referred to as "data plane logs." Some examples of data plane operations include delete, insert, and readFeed. The content of these logs varies by resource type.
Platform metrics and the Activity logs are collected automatically, whereas you must create a diagnostic setting to collect resource logs or forward them outside of Azure Monitor. You can turn on diagnostic setting for Azure Cosmos DB accounts and send resource logs to the following sources: -- Log Analytics workspaces
+- Azure Monitor Log Analytics workspaces
- Data sent to Log Analytics can be written into **Azure Diagnostics (legacy)** or **Resource-specific (preview)** tables - Event hub - Storage Account
Platform metrics and the Activity logs are collected automatically, whereas you
- If you have an Azure subscription, [create a new account](nosql/how-to-create-account.md?tabs=azure-portal). - If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. - Alternatively, you can [try Azure Cosmos DB free](try-free.md) before you commit.
+- An existing Azure Monitor Log Analytics workspace.
## Create diagnostic settings
Here, we walk through the process of creating diagnostic settings for your accou
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Navigate to your Azure Cosmos DB account. Open the **Diagnostic settings** pane under the **Monitoring section** and then select the **Add diagnostic setting** option.
+1. Navigate to your existing Azure Cosmos DB account.
+1. In the **Monitoring** section of the resource menu, select **Diagnostic settings**. Then, select the **Add diagnostic setting** option.
- :::image type="content" source="media/monitor/diagnostics-settings-selection.png" lightbox="media/monitor/diagnostics-settings-selection.png" alt-text="Sreenshot of the diagnostics selection page.":::
+ :::image type="content" source="media/monitor-resource-logs/add-diagnostic-setting.png" lightbox="media/monitor-resource-logs/add-diagnostic-setting.png" alt-text="Screenshot of the list of diagnostic settings with options to create new ones or edit existing ones.":::
> [!IMPORTANT] > You might see a prompt to "enable full-text query \[...\] for more detailed logging" if the **full-text query** feature is not enabled in your account. You can safely ignore this warning if you do not wish to enable this feature. For more information, see [enable full-text query](monitor-resource-logs.md#enable-full-text-query-for-logging-query-text).
-1. In the **Diagnostic settings** pane, fill the form with your preferred categories. Included here's a list of log categories.
+1. In the **Diagnostic settings** pane, name the setting **example-setting** and then select the **QueryRuntimeStatistics** category. Send the logs to a **Log Analytics Workspace** by selecting your existing workspace. Finally, select **Resource specific** as the destination option.
- | Category | API | Definition | Key Properties |
- | | | | |
- | **DataPlaneRequests** | Recommended for API for NoSQL | Logs back-end requests as data plane operations, which are requests executed to create, update, delete or retrieve data within the account. | `Requestcharge`, `statusCode`, `clientIPaddress`, `partitionID`, `resourceTokenPermissionId` `resourceTokenPermissionMode` |
- | **MongoRequests** | API for MongoDB | Logs user-initiated requests from the front end to serve requests to Azure Cosmos DB for MongoDB. When you enable this category, make sure to disable DataPlaneRequests. | `Requestcharge`, `opCode`, `retryCount`, `piiCommandText` |
- | **CassandraRequests** | API for Apache Cassandra | Logs user-initiated requests from the front end to serve requests to Azure Cosmos DB for Cassandra. | `operationName`, `requestCharge`, `piiCommandText` |
- | **GremlinRequests** | API for Apache Gremlin | Logs user-initiated requests from the front end to serve requests to Azure Cosmos DB for Gremlin. | `operationName`, `requestCharge`, `piiCommandText`, `retriedDueToRateLimiting` |
- | **QueryRuntimeStatistics** | API for NoSQL | This table details query operations executed against an API for NoSQL account. By default, the query text and its parameters are obfuscated to avoid logging persona l data with full text query logging available by request. | `databasename`, `partitionkeyrangeid`, `querytext` |
- | **PartitionKeyStatistics** | All APIs | Logs the statistics of logical partition keys by representing the estimated storage size (KB) of the partition keys. This table is useful when troubleshooting storage skews. This PartitionKeyStatistics log is only emitted if the following conditions are true: 1. At least 1% of the documents in the physical partition have same logical partition key. 2. Out of all the keys in the physical partition, the PartitionKeyStatistics log captures the top three keys with largest storage size. </li></ul> If the previous conditions aren't met, the partition key statistics data isn't available. It's okay if the above conditions aren't met for your account, which typically indicates you have no logical partition storage skew. **Note**: The estimated size of the partition keys is calculated using a sampling approach that assumes the documents in the physical partition are roughly the same size. If the document sizes aren't uniform in the physical partition, the estimated partition key size might not be accurate. | `subscriptionId`, `regionName`, `partitionKey`, `sizeKB` |
- | **PartitionKeyRUConsumption** | API for NoSQL or API for Apache Gremlin | Logs the aggregated per-second RU/s consumption of partition keys. This table is useful for troubleshooting hot partitions. Currently, Azure Cosmos DB reports partition keys for API for NoSQL accounts only and for point read/write, query, and stored procedure operations. | `subscriptionId`, `regionName`, `partitionKey`, `requestCharge`, `partitionKeyRangeId` |
- | **ControlPlaneRequests** | All APIs | Logs details on control plane operations, which include, creating an account, adding or removing a region, updating account replication settings etc. | `operationName`, `httpstatusCode`, `httpMethod`, `region` |
- | **TableApiRequests** | API for Table | Logs user-initiated requests from the front end to serve requests to Azure Cosmos DB for Table.| `operationName`, `requestCharge`, `piiCommandText` |
-
-1. Once you select your **Categories details**, then send your Logs to your preferred destination. If you're sending Logs to a **Log Analytics Workspace**, make sure to select **Resource specific** as the Destination table.
-
- :::image type="content" source="media/monitor/diagnostics-resource-specific.png" alt-text="Screenshot of the option to enable resource-specific diagnostics.":::
+ :::image type="content" source="media/monitor-resource-logs/configure-diagnostic-setting.png" alt-text="Screenshot of the various options to configure a diagnostic setting.":::
### [Azure CLI](#tab/azure-cli)
-Use the [az monitor diagnostic-settings create](/cli/azure/monitor/diagnostic-settings#az-monitor-diagnostic-settings-create) command to create a diagnostic setting with the Azure CLI. See the documentation for this command for descriptions of its parameters.
-
-> [!NOTE]
-> If you are using API for NoSQL, we recommend setting the **export-to-resource-specific** property to **true**.
-
-1. Create shell variables for `subscriptionId`, `diagnosticSettingName`, `workspaceName` and `resourceGroupName`.
+Use the [`az monitor diagnostic-settings create`](/cli/azure/monitor/diagnostic-settings#az-monitor-diagnostic-settings-create) command to create a diagnostic setting with the Azure CLI. See the documentation for this command for descriptions of its parameters.
- ```azurecli
- # Variable for subscription id
- subscriptionId="<subscription-id>"
-
- # Variable for resource group name
- resourceGroupName="<resource-group-name>"
-
- # Variable for workspace name
- workspaceName="<workspace-name>"
-
- # Variable for diagnostic setting name
- diagnosticSettingName="<diagnostic-setting-name>"
- ```
+1. Ensure you're logged in to the Azure CLI. For more information, see [sign in with Azure CLI](/cli/azure/authenticate-azure-cli).
1. Use `az monitor diagnostic-settings create` to create the setting.
- ```azurecli
+ ```azurecli-interactive
az monitor diagnostic-settings create \
- --resource "/subscriptions/$subscriptionId/resourceGroups/$resourceGroupName/providers/Microsoft.DocumentDb/databaseAccounts/" \
- --name $diagnosticSettingName \
- --export-to-resource-specific true \
- --logs '[{"category": "QueryRuntimeStatistics","categoryGroup": null,"enabled": true,"retentionPolicy": {"enabled": false,"days": 0}}]' \
- --workspace "/subscriptions/$subscriptionId/resourcegroups/$resourceGroupName/providers/microsoft.operationalinsights/workspaces/$workspaceName"
+ --resource $(az cosmosdb show \
+ --resource-group "<resource-group-name>" \
+ --name "<account-name>" \
+ --query "id" \
+ --output "tsv" \
+ ) \
+ --workspace $(az monitor log-analytics workspace show \
+ --resource-group "<resource-group-name>" \
+ --name "<account-name>" \
+ --query "id" \
+ --output "tsv" \
+ ) \
+ --name "example-setting" \
+ --export-to-resource-specific true \
+ --logs '[
+ {
+ "category": "QueryRuntimeStatistics",
+ "enabled": true
+ }
+ ]'
+ ```
+
+ > [!IMPORTANT]
+ > This sample uses the `--export-to-resource-specific` argument to enable resource-specific tables.
+
+1. Review the results of creating your new setting using `az monitor diagnostic-settings show`.
+
+ ```azurecli-interactive
+ az monitor diagnostic-settings show \
+ --name "example-setting" \
+ --resource $(az cosmosdb show \
+ --resource-group "<resource-group-name>" \
+ --name "<account-name>" \
+ --query "id" \
+ --output "tsv" \
+ )
``` ### [REST API](#tab/rest-api) Use the [Azure Monitor REST API](/rest/api/monitor/diagnosticsettings/createorupdate) for creating a diagnostic setting via the interactive console.
-> [!NOTE]
-> We recommend setting the **logAnalyticsDestinationType** property to **Dedicated** for enabling resource specific tables.
+1. Ensure you're logged in to the Azure CLI. For more information, see [sign in with Azure CLI](/cli/azure/authenticate-azure-cli).
-1. Create an HTTP `PUT` request.
+1. Create the diagnostic setting for your Azure Cosmos DB resource using an HTTP `PUT` request and [`az rest`](/cli/azure/reference-index#az-rest).
- ```HTTP
- PUT
- https://management.azure.com/{resource-id}/providers/microsoft.insights/diagnosticSettings/service?api-version={api-version}
+ ```azurecli-interactive
+ diagnosticSettingName="example-setting"
+
+ resourceId=$(az cosmosdb show \
+ --resource-group "<resource-group-name>" \
+ --name "<account-name>" \
+ --query "id" \
+ --output "tsv" \
+ )
+
+ workspaceId=$(az monitor log-analytics workspace show \
+ --resource-group "<resource-group-name>" \
+ --name "<account-name>" \
+ --query "id" \
+ --output "tsv" \
+ )
+
+ az rest \
+ --method "PUT" \
+ --url "$resourceId/providers/Microsoft.Insights/diagnosticSettings/$diagnosticSettingName" \
+ --url-parameters "api-version=2021-05-01-preview" \
+ --body '{
+ "properties": {
+ "workspaceId": "'"$workspaceId"'",
+ "logs": [
+ {
+ "category": "QueryRuntimeStatistics",
+ "enabled": true
+ }
+ ],
+ "logAnalyticsDestinationType": "Dedicated"
+ }
+ }'
```
-1. Use these headers with the request.
+ > [!IMPORTANT]
+ > This sample sets the `logAnalyticsDestinationType` property to `Dedicated` to enable resource-specific tables.
- | Parameters/Headers | Value/Description |
- | | |
- | **name** | The name of your diagnostic setting. |
- | **resourceUri** | Microsoft Insights subresource URI for Azure Cosmos DB account. |
- | **api-version** | `2017-05-01-preview` |
- | **Content-Type** | `application/json` |
+1. Use `az rest` again with an HTTP `GET` verb to get the properties of the diagnostic setting.
- > [!NOTE]
- > The URI for the Microsoft Insights subresource is in this format: `subscriptions/{SUBSCRIPTION_ID}/resourceGroups/{RESOURCE_GROUP}/providers/Microsoft.DocumentDb/databaseAccounts/{ACCOUNT_NAME}/providers/microsoft.insights/diagnosticSettings/{DIAGNOSTIC_SETTING_NAME}`. For more information about Azure Cosmos DB resource URIs, see [resource URI syntax for Azure Cosmos DB REST API](/rest/api/cosmos-db/cosmosdb-resource-uri-syntax-for-rest).
+ ```azurecli-interactive
+ diagnosticSettingName="example-setting"
-1. Set the body of the request to this JSON payload.
+ resourceId=$(az cosmosdb show \
+ --resource-group "<resource-group-name>" \
+ --name "<account-name>" \
+ --query "id" \
+ --output "tsv" \
+ )
+
+ az rest \
+ --method "GET" \
+ --url "$resourceId/providers/Microsoft.Insights/diagnosticSettings/$diagnosticSettingName" \
+ --url-parameters "api-version=2021-05-01-preview"
+ ```
- ```json
- {
- "id": "/subscriptions/{SUBSCRIPTION_ID}/resourceGroups/{RESOURCE_GROUP}/providers/Microsoft.DocumentDb/databaseAccounts/{ACCOUNT_NAME}/providers/microsoft.insights/diagnosticSettings/{DIAGNOSTIC_SETTING_NAME}",
- "type": "Microsoft.Insights/diagnosticSettings",
- "name": "name",
- "location": null,
- "kind": null,
- "tags": null,
- "properties": {
- "storageAccountId": null,
- "serviceBusRuleId": null,
- "workspaceId": "/subscriptions/{SUBSCRIPTION_ID}/resourcegroups/{RESOURCE_GROUP}/providers/microsoft.operationalinsights/workspaces/{WORKSPACE_NAME}",
- "eventHubAuthorizationRuleId": null,
- "eventHubName": null,
- "logs": [
- {
- "category": "DataPlaneRequests",
- "categoryGroup": null,
- "enabled": true,
- "retentionPolicy": {
- "enabled": false,
- "days": 0
- }
- },
- {
- "category": "QueryRuntimeStatistics",
- "categoryGroup": null,
- "enabled": true,
- "retentionPolicy": {
- "enabled": false,
- "days": 0
- }
- },
- {
- "category": "PartitionKeyStatistics",
- "categoryGroup": null,
- "enabled": true,
- "retentionPolicy": {
- "enabled": false,
- "days": 0
- }
- },
- {
- "category": "PartitionKeyRUConsumption",
- "categoryGroup": null,
- "enabled": true,
- "retentionPolicy": {
- "enabled": false,
- "days": 0
- }
- },
- {
- "category": "ControlPlaneRequests",
- "categoryGroup": null,
- "enabled": true,
- "retentionPolicy": {
- "enabled": false,
- "days": 0
- }
- }
- ],
- "logAnalyticsDestinationType": "Dedicated"
- },
- "identity": null
+### [Bicep](#tab/bicep)
+
+Use a [Bicep template](../azure-resource-manager/bicep/overview.md) to create the diagnostic setting.
+
+1. Ensure you're logged in to the Azure CLI. For more information, see [sign in with Azure CLI](/cli/azure/authenticate-azure-cli).
+
+1. Create a new file named `diagnosticSetting.bicep`.
+
+1. Enter the following Bicep template content that deploys the diagnostic setting for your Azure Cosmos DB resource.
+
+ ```bicep
+ @description('The name of the diagnostic setting to create.')
+ param diagnosticSettingName string = 'example-setting'
+
+ @description('The name of the Azure Cosmos DB account to monitor.')
+ param azureCosmosDbAccountName string
+
+ @description('The name of the Azure Monitor Log Analytics workspace to use.')
+ param logAnalyticsWorkspaceName string
+
+ resource azureCosmosDbAccount 'Microsoft.DocumentDB/databaseAccounts@2021-06-15' existing = {
+ name: azureCosmosDbAccountName
}
+
+ resource logAnalyticsWorkspace 'Microsoft.OperationalInsights/workspaces@2023-09-01' existing = {
+ name: logAnalyticsWorkspaceName
+ }
+
+ resource diagnosticSetting 'Microsoft.Insights/diagnosticSettings@2021-05-01-preview' = {
+ name: diagnosticSettingName
+ scope: azureCosmosDbAccount
+ properties: {
+ workspaceId: logAnalyticsWorkspace.id
+ logAnalyticsDestinationType: 'Dedicated'
+ logs: [
+ {
+ category: 'QueryRuntimeStatistics'
+ enabled: true
+ }
+ ]
+ }
+ }
+ ```
+
+ > [!IMPORTANT]
+ > This sample sets the `logAnalyticsDestinationType` property to `Dedicated` to enable resource-specific tables.
+
+1. Deploy the template using [`az deployment group create`](/cli/azure/deployment/group#az-deployment-group-create).
+
+ ```azurecli-interactive
+ az deployment group create \
+ --resource-group "<resource-group-name>" \
+ --template-file diagnosticSetting.bicep \
+ --parameters \
+ azureCosmosDbAccountName="<azure-cosmos-db-account-name>" \
+ logAnalyticsWorkspaceName="<log-analytics-workspace-name>"
```
+ > [!TIP]
+ > Use the [`az bicep build`](/cli/azure/bicep#az-bicep-build) command to convert the Bicep template to an Azure Resource Manager template.
+ ### [ARM Template](#tab/azure-resource-manager-template)
-Here, use an [Azure Resource Manager (ARM) template](../azure-resource-manager/templates/index.yml) to create a diagnostic setting.
+Use an [Azure Resource Manager template](../azure-resource-manager/templates/overview.md) to create the diagnostic setting.
-> [!NOTE]
-> Set the **logAnalyticsDestinationType** property to **Dedicated** to enable resource-specific tables.
+1. Ensure you're logged in to the Azure CLI. For more information, see [sign in with Azure CLI](/cli/azure/authenticate-azure-cli).
-1. Create the following JSON template file to deploy diagnostic settings for your Azure Cosmos DB resource.
+1. Create a new file named `azuredeploy.json`.
+
+1. Enter the following Azure Resource Manager template content that deploys the diagnostic setting for your Azure Cosmos DB resource.
```json {
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "$schema": "<https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#>",
"contentVersion": "1.0.0.0", "parameters": {
- "settingName": {
- "type": "string",
- "metadata": {
- "description": "The name of the diagnostic setting."
- }
- },
- "dbName": {
- "type": "string",
- "metadata": {
- "description": "The name of the database."
- }
- },
- "workspaceId": {
+ "diagnosticSettingName": {
"type": "string",
+ "defaultValue": "example-setting",
"metadata": {
- "description": "The resource Id of the workspace."
+ "description": "The name of the diagnostic setting to create."
} },
- "storageAccountId": {
+ "azureCosmosDbAccountName": {
"type": "string", "metadata": {
- "description": "The resource Id of the storage account."
+ "description": "The name of the Azure Cosmos DB account to monitor."
} },
- "eventHubAuthorizationRuleId": {
+ "logAnalyticsWorkspaceName": {
"type": "string", "metadata": {
- "description": "The resource Id of the event hub authorization rule."
- }
- },
- "eventHubName": {
- "type": "string",
- "metadata": {
- "description": "The name of the event hub."
+ "description": "The name of the Azure Monitor Log Analytics workspace to use."
} } },
Here, use an [Azure Resource Manager (ARM) template](../azure-resource-manager/t
{ "type": "Microsoft.Insights/diagnosticSettings", "apiVersion": "2021-05-01-preview",
- "scope": "[format('Microsoft.DocumentDB/databaseAccounts/{0}', parameters('dbName'))]",
- "name": "[parameters('settingName')]",
+ "scope": "[format('Microsoft.DocumentDB/databaseAccounts/{0}', parameters('azureCosmosDbAccountName'))]",
+ "name": "[parameters('diagnosticSettingName')]",
"properties": {
- "workspaceId": "[parameters('workspaceId')]",
- "storageAccountId": "[parameters('storageAccountId')]",
- "eventHubAuthorizationRuleId": "[parameters('eventHubAuthorizationRuleId')]",
- "eventHubName": "[parameters('eventHubName')]",
- "logAnalyticsDestinationType": "[parameters('logAnalyticsDestinationType')]",
+ "workspaceId": "[resourceId('Microsoft.OperationalInsights/workspaces', parameters('logAnalyticsWorkspaceName'))]",
+ "logAnalyticsDestinationType": "Dedicated",
"logs": [
- {
- "category": "DataPlaneRequests",
- "categoryGroup": null,
- "enabled": true,
- "retentionPolicy": {
- "days": 0,
- "enabled": false
- }
- },
- {
- "category": "MongoRequests",
- "categoryGroup": null,
- "enabled": false,
- "retentionPolicy": {
- "days": 0,
- "enabled": false
- }
- },
{ "category": "QueryRuntimeStatistics",
- "categoryGroup": null,
- "enabled": true,
- "retentionPolicy": {
- "days": 0,
- "enabled": false
- }
- },
- {
- "category": "PartitionKeyStatistics",
- "categoryGroup": null,
- "enabled": true,
- "retentionPolicy": {
- "days": 0,
- "enabled": false
- }
- },
- {
- "category": "PartitionKeyRUConsumption",
- "categoryGroup": null,
- "enabled": true,
- "retentionPolicy": {
- "days": 0,
- "enabled": false
- }
- },
- {
- "category": "ControlPlaneRequests",
- "categoryGroup": null,
- "enabled": true,
- "retentionPolicy": {
- "days": 0,
- "enabled": false
- }
- },
- {
- "category": "CassandraRequests",
- "categoryGroup": null,
- "enabled": false,
- "retentionPolicy": {
- "days": 0,
- "enabled": false
- }
- },
- {
- "category": "GremlinRequests",
- "categoryGroup": null,
- "enabled": false,
- "retentionPolicy": {
- "days": 0,
- "enabled": false
- }
- },
- {
- "category": "TableApiRequests",
- "categoryGroup": null,
- "enabled": false,
- "retentionPolicy": {
- "days": 0,
- "enabled": false
- }
- }
- ],
- "metrics": [
- {
- "timeGrain": null,
- "enabled": false,
- "retentionPolicy": {
- "days": 0,
- "enabled": false
- },
- "category": "Requests"
+ "enabled": true
}
]
}
Here, use an [Azure Resource Manager (ARM) template](../azure-resource-manager/t
}
```
-1. Create the following JSON parameter file with settings appropriate for your Azure Cosmos DB resource.
-
- ```json
- {
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "settingName": {
- "value": "{DIAGNOSTIC_SETTING_NAME}"
- },
- "dbName": {
- "value": "{ACCOUNT_NAME}"
- },
- "workspaceId": {
- "value": "/subscriptions/{SUBSCRIPTION_ID}/resourcegroups/{RESOURCE_GROUP}/providers/microsoft.operationalinsights/workspaces/{WORKSPACE_NAME}"
- },
- "storageAccountId": {
- "value": "/subscriptions/{SUBSCRIPTION_ID}/resourceGroups/{RESOURCE_GROUP}/providers/Microsoft.Storage/storageAccounts/{STORAGE_ACCOUNT_NAME}"
- },
- "eventHubAuthorizationRuleId": {
- "value": "/subscriptions/{SUBSCRIPTION_ID}/resourcegroups{RESOURCE_GROUP}/providers/Microsoft.EventHub/namespaces/{EVENTHUB_NAMESPACE}/authorizationrules/{EVENTHUB_POLICY_NAME}"
- },
- "eventHubName": {
- "value": "{EVENTHUB_NAME}"
- },
- "logAnalyticsDestinationType": {
- "value": "Dedicated"
- }
- }
- }
- ```
+ > [!IMPORTANT]
+ > This sample sets the `logAnalyticsDestinationType` property to `Dedicated` to enable resource-specific tables.
1. Deploy the template using [`az deployment group create`](/cli/azure/deployment/group#az-deployment-group-create).
- ```azurecli
+ ```azurecli-interactive
az deployment group create \
- --resource-group <resource-group-name> \
- --template-file <path-to-template>.json \
- --parameters @<parameters-file-name>.json
+ --resource-group "<resource-group-name>" \
+ --template-file azuredeploy.json \
+ --parameters \
+ azureCosmosDbAccountName="<azure-cosmos-db-account-name>" \
+ logAnalyticsWorkspaceName="<log-analytics-workspace-name>"
```
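   To verify the deployment, you can read the setting back with [`az monitor diagnostic-settings show`](/cli/azure/monitor/diagnostic-settings#az-monitor-diagnostic-settings-show). A minimal sketch, assuming the setting kept the template's default name `example-setting`:

   ```azurecli
   # Sketch: "example-setting" is the template's default diagnostic setting name.
   az monitor diagnostic-settings show \
       --name "example-setting" \
       --resource $(az cosmosdb show \
           --resource-group "<resource-group-name>" \
           --name "<azure-cosmos-db-account-name>" \
           --query "id" \
           --output "tsv")
   ```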
+ > [!TIP]
+ > Use the [`az bicep decompile`](/cli/azure/bicep#az-bicep-decompile) command to convert the Azure Resource Manager template to a Bicep template.
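   For example, pointing the command at the template file from the previous steps emits an equivalent `.bicep` file next to it:

   ```azurecli
   # Decompiles azuredeploy.json into azuredeploy.bicep in the same directory.
   az bicep decompile --file azuredeploy.json
   ```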
+ ## Enable full-text query for logging query text
-> [!NOTE]
-> Enabling this feature may result in additional logging costs, for pricing details visit [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/). It is recommended to disable this feature after troubleshooting.
+Azure Cosmos DB provides advanced logging for detailed troubleshooting. By enabling full-text query, you're able to view the deobfuscated query for all requests within your Azure Cosmos DB account. You also give permission for Azure Cosmos DB to access and surface this data in your logs.
-Azure Cosmos DB provides advanced logging for detailed troubleshooting. By enabling full-text query, you're able to view the deobfuscated query for all requests within your Azure Cosmos DB account. You also give permission for Azure Cosmos DB to access and surface this data in your logs.
+> [!WARNING]
+> Enabling this feature might result in additional logging costs. For pricing details, see [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/). We recommend disabling this feature after troubleshooting.
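When you're done troubleshooting, the feature can presumably be turned back off through the same management endpoint used to enable it later in this section. A hedged sketch; the payload shape is an assumption mirroring the enable call:

```azurecli
# Assumption: the diagnosticLogSettings payload also accepts "False".
az rest \
    --method "PATCH" \
    --url $(az cosmosdb show \
        --resource-group "<resource-group-name>" \
        --name "<account-name>" \
        --query "id" \
        --output "tsv") \
    --url-parameters "api-version=2021-05-01-preview" \
    --body '{"properties": {"diagnosticLogSettings": {"enableFullTextQuery": "False"}}}'
```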
### [Azure portal](#tab/azure-portal)
-1. To enable this feature, navigate to the `Features` page in your Azure Cosmos DB account.
-
- :::image type="content" source="media/monitor/full-text-query-features.png" lightbox="media/monitor/full-text-query-features.png" alt-text="Screenshot of the navigation process to the Features page.":::
-
-2. Select `Enable`. This setting is applied within a few minutes. All newly ingested logs have the full-text or PIICommand text for each request.
-
- :::image type="content" source="media/monitor/select-enable-full-text.png" alt-text="Screenshot of the full-text feature being enabled.":::
-
-### [Azure CLI / REST API / ARM template](#tab/azure-cli+rest-api+azure-resource-manager-template)
-
-1. Ensure you're logged in to the Azure CLI. For more information, see [sign in with Azure CLI](/cli/azure/authenticate-azure-cli). Optionally, ensure that you've configured the active subscription for your CLI. For more information, see [change the active Azure CLI subscription](/cli/azure/manage-azure-subscriptions-azure-cli#change-the-active-subscription).
-
-1. Create shell variables for `accountName` and `resourceGroupName`.
-
- ```azurecli
- # Variable for resource group name
- resourceGroupName="<resource-group-name>"
-
- # Variable for account name
- accountName="<account-name>"
- ```
-
-1. Get the unique identifier for your existing account using [`az show`](/cli/azure/cosmosdb#az-cosmosdb-show).
-
- ```azurecli
- az cosmosdb show \
- --resource-group $resourceGroupName \
- --name $accountName \
- --query id
- ```
-
- Store the unique identifier in a shell variable named `$uri`.
+1. On the existing Azure Cosmos DB account page, select the **Features** option within the **Settings** section of the resource menu. Then, select the **Diagnostics full-text query** feature.
- ```azurecli
- uri=$(
- az cosmosdb show \
- --resource-group $resourceGroupName \
- --name $accountName \
- --query id \
- --output tsv
- )
- ```
+ :::image type="content" source="media/monitor-resource-logs/enable-account-features.png" lightbox="media/monitor-resource-logs/enable-account-features.png" alt-text="Screenshot of the available features for an Azure Cosmos DB account.":::
-1. Query the resource using the REST API and [`az rest`](/cli/azure/reference-index#az-rest) with an HTTP `GET` verb to check if full-text query is already enabled.
+2. In the dialog, select `Enable`. This setting is applied within a few minutes. All newly ingested logs now have the full-text or PIICommand text for each request.
- ```azurecli
- az rest \
- --method GET \
- --uri "https://management.azure.com/$uri/?api-version=2021-05-01-preview" \
- --query "{accountName:name,fullTextQuery:{state:properties.diagnosticLogSettings.enableFullTextQuery}}"
- ```
+ :::image type="content" source="media/monitor-resource-logs/enable-diagnostics-full-text-query.png" alt-text="Screenshot of the diagnostics full-text query feature being enabled for an Azure Cosmos DB account.":::
- If full-text query isn't enabled, the output would be similar to this example.
+### [Azure CLI / REST API / Bicep / ARM Template](#tab/azure-cli+rest-api+bicep+azure-resource-manager-template)
- ```json
- {
- "accountName": "<account-name>",
- "fullTextQuery": {
- "state": "None"
- }
- }
- ```
+Use the Azure CLI to enable full-text query for your Azure Cosmos DB account.
-1. If full-text query isn't already enabled, enable it using `az rest` again with an HTTP `PATCH` verb and a JSON payload.
+1. Enable full-text query using `az rest` with an HTTP `PATCH` verb and a JSON payload.
- ```azurecli
+ ```azurecli-interactive
az rest \
- --method PATCH \
- --uri "https://management.azure.com/$uri/?api-version=2021-05-01-preview" \
- --body '{"properties": {"diagnosticLogSettings": {"enableFullTextQuery": "True"}}}'
+ --method "PATCH" \
+ --url $(az cosmosdb show \
+ --resource-group "<resource-group-name>" \
+ --name "<account-name>" \
+ --query "id" \
+ --output "tsv" \
+ ) \
+ --url-parameters "api-version=2021-05-01-preview" \
+ --body '{
+ "properties": {
+ "diagnosticLogSettings": {
+ "enableFullTextQuery": "True"
+ }
+ }
+ }'
```
- > [!NOTE]
- > If you are using Azure CLI within a PowerShell prompt, you will need to escape the double-quotes using a backslash (`\`) character.
-
-1. Wait a few minutes for the operation to complete. Check the status of full-text query by using `az rest` again.
+1. Wait a few minutes for the operation to complete. Check the status of full-text query by using `az rest` again with an HTTP `GET` verb.
- ```azurecli
+ ```azurecli-interactive
az rest \
- --method GET \
- --uri "https://management.azure.com/$uri/?api-version=2021-05-01-preview" \
- --query "{accountName:name,fullTextQuery:{state:properties.diagnosticLogSettings.enableFullTextQuery}}"
+ --method "GET" \
+ --url $(az cosmosdb show \
+ --resource-group "<resource-group-name>" \
+ --name "<account-name>" \
+ --query "id" \
+ --output "tsv" \
+ ) \
+ --url-parameters "api-version=2021-05-01-preview" \
+ --query "{accountName:name,fullTextQueryEnabled:properties.diagnosticLogSettings.enableFullTextQuery}"
```
The output should be similar to this example.
Azure Cosmos DB provides advanced logging for detailed troubleshooting. By enabl
```json
{
"accountName": "<account-name>",
- "fullTextQuery": {
- "state": "True"
- }
+ "fullTextQueryEnabled": "True"
}
```
-## Query data
-
-To learn how to query using these newly enabled features, see:
--- [API for NoSQL](nosql/diagnostic-queries.md)-- [API for MongoDB](mongodb/diagnostic-queries.md)-- [API for Apache Cassandra](cassandr)-- [API for Apache Gremlin](gremlin/diagnostic-queries.md)-
-## Next steps
+## Related content
-> [!div class="nextstepaction"]
-> [Monitoring Azure Cosmos DB data reference](monitor-reference.md#resource-logs)
+- [Diagnostic queries in API for NoSQL](nosql/diagnostic-queries.md)
+- [Diagnostic queries in API for MongoDB](mongodb/diagnostic-queries.md)
+- [Diagnostic queries in API for Apache Cassandra](cassandra/diagnostic-queries.md)
+- [Diagnostic queries in API for Apache Gremlin](gremlin/diagnostic-queries.md)
cosmos-db How To Manage Consistency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-manage-consistency.md
This article explains how to manage consistency levels in Azure Cosmos DB. You l
As you change your account-level consistency, ensure you redeploy your applications and make any necessary code modifications to apply these changes. ## Configure the default consistency level
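For instance, the account default can be changed with the Azure CLI. A minimal sketch; `Session` is only an example level:

```azurecli
# Sets the account-level default consistency; Session is an example value.
az cosmosdb update \
    --resource-group "<resource-group-name>" \
    --name "<account-name>" \
    --default-consistency-level "Session"
```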
cosmos-db Manage With Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/manage-with-powershell.md
The following guide describes how to use PowerShell to script and automate manag
For cross-platform management of Azure Cosmos DB, you can use the `Az` and `Az.CosmosDB` cmdlets with [cross-platform PowerShell](/powershell/scripting/install/installing-powershell), as well as the [Azure CLI](manage-with-cli.md), the [REST API][rp-rest-api], or the [Azure portal](how-to-create-account.md). ## Getting Started
cosmos-db Quickstart Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-portal.md
This quickstart demonstrates how to use the Azure portal to create an Azure Cosm
An Azure subscription or free Azure Cosmos DB trial account. -- [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
+- [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
- [!INCLUDE [cosmos-db-emulator-docdb-api](../includes/cosmos-db-emulator-docdb-api.md)]
cosmos-db Quickstart Template Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-template-bicep.md
Azure Cosmos DB is Microsoft's fast NoSQL database with open APIs for any scale. You can use Azure Cosmos DB to quickly create and query key/value databases, document databases, and graph databases. Without a credit card or an Azure subscription, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb). This quickstart focuses on the process of deploying a Bicep file to create an Azure Cosmos DB database and a container within that database. You can later store data in this container. ## Prerequisites An Azure subscription or free Azure Cosmos DB trial account. -- [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
+- [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
## Review the Bicep file
cosmos-db Quickstart Template Json https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-template-json.md
Azure Cosmos DB is Microsoft's fast NoSQL database with open APIs for any scale. You can use Azure Cosmos DB to quickly create and query key/value databases, document databases, and graph databases. Without a credit card or an Azure subscription, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb). This quickstart focuses on the process of deploying an Azure Resource Manager template (ARM template) to create an Azure Cosmos DB database and a container within that database. You can later store data in this container. If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
If your environment meets the prerequisites and you're familiar with using ARM t
An Azure subscription or free Azure Cosmos DB trial account -- [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
+- [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
- [!INCLUDE [cosmos-db-emulator-docdb-api](../includes/cosmos-db-emulator-docdb-api.md)]
cosmos-db Quickstart Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-terraform.md
Azure Cosmos DB is Microsoft's fast NoSQL database with open APIs for any scal
An Azure subscription or free Azure Cosmos DB trial account -- [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
+- [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
Terraform should be installed on your local computer. Installation instructions can be found [here](https://learn.hashicorp.com/tutorials/terraform/install-cli).
cosmos-db Samples Java Spring Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/samples-java-spring-data.md
> > [!IMPORTANT]
->[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
+>[!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
> >- You can [activate Visual Studio subscriber benefits](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio): Your Visual Studio subscription gives you credits every month that you can use for paid Azure services. >
cosmos-db Samples Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/samples-java.md
> > [!IMPORTANT]
->[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
+>[!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
> >- You can [activate Visual Studio subscriber benefits](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio): Your Visual Studio subscription gives you credits every month that you can use for paid Azure services. >
cosmos-db Samples Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/samples-nodejs.md
Sample solutions that perform CRUD operations and other common operations on Azu
## Prerequisites - You can [activate Visual Studio subscriber benefits](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio): Your Visual Studio subscription gives you credits every month that you can use for paid Azure services.
cosmos-db Quickstart Create Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/quickstart-create-bicep.md
Last updated 09/07/2023
Azure Cosmos DB for PostgreSQL is a managed service that allows you to run horizontally scalable PostgreSQL databases in the cloud. In this article, you learn how to use Bicep to provision and manage an Azure Cosmos DB for PostgreSQL cluster. ## Prerequisites
cosmos-db Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/cassandra/autoscale.md
The script in this article creates an Azure Cosmos DB for Apache Cassandra accou
## Prerequisites -- [!INCLUDE [quickstarts-free-trial-note](../../../../../includes/quickstarts-free-trial-note.md)]
+- [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
- This script requires Azure CLI version 2.12.1 or later.
cosmos-db Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/cassandra/create.md
Last updated 02/21/2022
The script in this article demonstrates creating an Azure Cosmos DB account, keyspace, and table for API for Cassandra. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
The script in this article demonstrates creating an Azure Cosmos DB account, key
## Sample script ### Run the script
The script in this article demonstrates creating an Azure Cosmos DB account, key
## Clean up resources ```azurecli az group delete --name $resourceGroup
cosmos-db Lock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/cassandra/lock.md
The script in this article demonstrates preventing resources from being deleted
> > Resource locks do not work for changes made by users connecting using any Cassandra SDK, CQL Shell, or the Azure Portal unless the Azure Cosmos DB account is first locked with the `disableKeyBasedMetadataWriteAccess` property enabled. To learn more about how to enable this property see, [Preventing changes from SDKs](../../../role-based-access-control.md#prevent-sdk-changes). [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
The script in this article demonstrates preventing resources from being deleted
## Sample script ### Run the script
The script in this article demonstrates preventing resources from being deleted
## Clean up resources ```azurecli az group delete --name $resourceGroup
cosmos-db Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/cassandra/serverless.md
Last updated 02/21/2022
The script in this article demonstrates creating a serverless Azure Cosmos DB account, keyspace, and table for API for Cassandra. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
The script in this article demonstrates creating a serverless Azure Cosmos DB ac
## Sample script ### Run the script
The script in this article demonstrates creating a serverless Azure Cosmos DB ac
## Clean up resources ```azurecli az group delete --name $resourceGroup
cosmos-db Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/cassandra/throughput.md
Last updated 02/21/2022
The script in this article creates a Cassandra keyspace with shared throughput and a Cassandra table with dedicated throughput, then updates the throughput for both the keyspace and table. The script then migrates from standard to autoscale throughput and then reads the value of the autoscale throughput after it has been migrated. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
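For example, the keyspace migration step might use [`az cosmosdb cassandra keyspace throughput migrate`](/cli/azure/cosmosdb/cassandra/keyspace/throughput#az-cosmosdb-cassandra-keyspace-throughput-migrate). A sketch with placeholder names:

```azurecli
# Migrates a keyspace from standard (manual) throughput to autoscale.
az cosmosdb cassandra keyspace throughput migrate \
    --resource-group "<resource-group-name>" \
    --account-name "<account-name>" \
    --name "<keyspace-name>" \
    --throughput-type "autoscale"
```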
The script in this article creates a Cassandra keyspace with shared throughput a
## Sample script ### Run the script
The script in this article creates a Cassandra keyspace with shared throughput a
## Clean up resources ```azurecli az group delete --name $resourceGroup
cosmos-db Free Tier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/common/free-tier.md
The script in this article demonstrates how to locate an Azure Cosmos DB free-ti
Each Azure subscription can have up to one Azure Cosmos DB free-tier account. If you're trying to create a free-tier account, the option may be disabled in the Azure portal, or you get an error when attempting to create a free-tier account. If either of these issues occurs, use this script to locate the name of the existing free-tier account and the resource group it belongs to. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
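A possible sketch of that lookup, assuming the CLI output surfaces the account's `enableFreeTier` property (as current versions do):

```azurecli
# Lists any free-tier account in the subscription with its resource group.
az cosmosdb list \
    --query "[?enableFreeTier].{name:name, resourceGroup:resourceGroup}" \
    --output "table"
```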
Each Azure subscription can have up to one Azure Cosmos DB free-tier account. If
## Sample script ### Run the script
cosmos-db Ipfirewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/common/ipfirewall.md
Last updated 02/21/2022
The script in this article demonstrates creating an Azure Cosmos DB account with default values and IP Firewall enabled. It uses an API for NoSQL account, but these operations are identical across all database APIs in Azure Cosmos DB. To use this sample for other APIs, apply the `ip-range-filter` parameter in the script to the `az cosmosdb account create` command for your API-specific script. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
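A minimal sketch of that parameter at account creation time (the IP address is an illustrative placeholder):

```azurecli
# Creates an account that accepts traffic only from the listed address.
az cosmosdb create \
    --resource-group "<resource-group-name>" \
    --name "<account-name>" \
    --ip-range-filter "40.76.54.131"
```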
The script in this article demonstrates creating an Azure Cosmos DB account with
## Sample script ### Run the script
The script in this article demonstrates creating an Azure Cosmos DB account with
## Clean up resources ```azurecli az group delete --name $resourceGroup
cosmos-db Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/common/keys.md
The script in this article demonstrates four operations.
This script uses an API for NoSQL account, but these operations are identical across all database APIs in Azure Cosmos DB. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
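As one example of those operations, the account keys can be listed with [`az cosmosdb keys list`](/cli/azure/cosmosdb/keys#az-cosmosdb-keys-list); a sketch, not the script's exact steps:

```azurecli
# Lists the primary and secondary read-write keys for the account.
az cosmosdb keys list \
    --resource-group "<resource-group-name>" \
    --name "<account-name>" \
    --type "keys"
```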
The script in this article demonstrates four operations.
## Sample script ### Run the script
The script in this article demonstrates four operations.
## Clean up resources ```azurecli az group delete --name $resourceGroup
cosmos-db Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/common/regions.md
This script uses a API for NoSQL account, but these operations are identical acr
> [!IMPORTANT] > Add and remove region operations on an Azure Cosmos DB account cannot be done while changing other properties. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
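For context, an isolated add-region call might look like this sketch (region names and priorities are examples):

```azurecli
# Updates only the region list; no other account properties change here.
az cosmosdb update \
    --resource-group "<resource-group-name>" \
    --name "<account-name>" \
    --locations regionName="East US" failoverPriority=0 isZoneRedundant=False \
    --locations regionName="West US" failoverPriority=1 isZoneRedundant=False
```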
This script uses a API for NoSQL account, but these operations are identical acr
## Sample script ### Run the script
This script uses a API for NoSQL account, but these operations are identical acr
## Clean up resources ```azurecli az group delete --name $resourceGroup
cosmos-db Service Endpoints Ignore Missing Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/common/service-endpoints-ignore-missing-vnet.md
The script in this article demonstrates connecting an existing Azure Cosmos DB a
This script uses an API for NoSQL account. To use this sample for other APIs, apply the `enable-virtual-network` and `virtual-network-rules` parameters in the script below to your API-specific script. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
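A hedged sketch of those two parameters applied at account creation (the subnet resource ID is a placeholder):

```azurecli
# Restricts account access to a virtual network subnet via service endpoints.
az cosmosdb create \
    --resource-group "<resource-group-name>" \
    --name "<account-name>" \
    --enable-virtual-network true \
    --virtual-network-rules "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.Network/virtualNetworks/<vnet-name>/subnets/<subnet-name>"
```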
This script uses a API for NoSQL account. To use this sample for other APIs, app
## Sample script ### Run the script
This script uses a API for NoSQL account. To use this sample for other APIs, app
## Clean up resources ```azurecli az group delete --name $resourceGroup
cosmos-db Service Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/common/service-endpoints.md
The script in this article creates a new virtual network with a front and back e
This script uses an API for NoSQL account. To use this sample for other APIs, apply the `enable-virtual-network` and `virtual-network-rules` parameters in the script below to your API-specific script. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
This script uses a API for NoSQL account. To use this sample for other APIs, app
## Sample script ### Run the script
This script uses a API for NoSQL account. To use this sample for other APIs, app
## Clean up resources ```azurecli az group delete --name $resourceGroup
cosmos-db Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/gremlin/autoscale.md
The script in this article creates an Azure Cosmos DB for Gremlin account, datab
## Prerequisites -- [!INCLUDE [quickstarts-free-trial-note](../../../../../includes/quickstarts-free-trial-note.md)]
+- [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
- This script requires Azure CLI version 2.30 or later.
cosmos-db Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/gremlin/create.md
Last updated 02/21/2022
The script in this article demonstrates creating a Gremlin database and graph. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
The script in this article demonstrates creating a Gremlin database and graph.
## Sample script ### Run the script
The script in this article demonstrates creating a Gremlin database and graph.
## Clean up resources ```azurecli az group delete --name $resourceGroup
cosmos-db Lock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/gremlin/lock.md
The script in this article demonstrates performing resource lock operations for
> > Resource locks do not work for changes made by users connecting using any Gremlin SDK or the Azure Portal unless the Azure Cosmos DB account is first locked with the `disableKeyBasedMetadataWriteAccess` property enabled. To learn more about how to enable this property see, [Preventing changes from SDKs](../../../role-based-access-control.md#prevent-sdk-changes). [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
The script in this article demonstrates performing resource lock operations for
## Sample script ### Run the script
The script in this article demonstrates performing resource lock operations for
## Clean up resources ```azurecli az group delete --name $resourceGroup
cosmos-db Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/gremlin/serverless.md
The script in this article creates an Azure Cosmos DB for Gremlin serverless acc
## Prerequisites -- [!INCLUDE [quickstarts-free-trial-note](../../../../../includes/quickstarts-free-trial-note.md)]
+- [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
- This script requires Azure CLI version 2.30 or later.
cosmos-db Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/gremlin/throughput.md
Last updated 02/21/2022
# Throughput (RU/s) operations with Azure CLI for a database or graph for Azure Cosmos DB - API for Gremlin The script in this article creates a Gremlin database with shared throughput and a Gremlin graph with dedicated throughput, then updates the throughput for both the database and graph. The script then migrates from standard to autoscale throughput and then reads the value of the autoscale throughput after it has been migrated.
The script in this article creates a Gremlin database with shared throughput and
## Sample script ### Run the script
The script in this article creates a Gremlin database with shared throughput and
## Clean up resources ```azurecli az group delete --name $resourceGroup
cosmos-db Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/mongodb/autoscale.md
Last updated 02/21/2022
The script in this article demonstrates creating an API for MongoDB database with autoscale and two collections that share throughput. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
The script in this article demonstrates creating a API for MongoDB database with
## Sample script ### Run the script
The script in this article demonstrates creating a API for MongoDB database with
## Clean up resources ```azurecli az group delete --name $resourceGroup
cosmos-db Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/mongodb/create.md
Last updated 02/21/2022
The script in this article demonstrates creating an API for MongoDB database and collection. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
The script in this article demonstrates creating a API for MongoDB database and
## Sample script ### Run the script
The script in this article demonstrates creating a API for MongoDB database and
## Clean up resources ```azurecli az group delete --name $resourceGroup
cosmos-db Lock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/mongodb/lock.md
The script in this article demonstrates performing resource lock operations for
> > Resource locks do not work for changes made by users connecting using any MongoDB SDK, Mongoshell, any tools or the Azure Portal unless the Azure Cosmos DB account is first locked with the `disableKeyBasedMetadataWriteAccess` property enabled. To learn more about how to enable this property see, [Preventing changes from SDKs](../../../role-based-access-control.md#prevent-sdk-changes). [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
The script in this article demonstrates performing resource lock operations for
## Sample script ### Run the script
The script in this article demonstrates performing resource lock operations for
## Clean up resources ```azurecli az group delete --name $resourceGroup
cosmos-db Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/mongodb/serverless.md
Last updated 02/21/2022
The script in this article demonstrates creating an API for MongoDB serverless account, database, and collection. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
The script in this article demonstrates creating a API for MongoDB serverless ac
## Sample script ### Run the script
The script in this article demonstrates creating a API for MongoDB serverless ac
## Clean up resources ```azurecli az group delete --name $resourceGroup
cosmos-db Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/mongodb/throughput.md
Last updated 02/21/2022
The script in this article creates a MongoDB database with shared throughput and a collection with dedicated throughput, then updates the throughput for both. The script then migrates from standard to autoscale throughput and then reads the value of the autoscale throughput after it has been migrated. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
The script in this article creates a MongoDB database with shared throughput and
## Sample script ### Run the script
The script in this article creates a MongoDB database with shared throughput and
## Clean up resources ```azurecli az group delete --name $resourceGroup
cosmos-db Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/nosql/autoscale.md
The script in this article creates an Azure Cosmos DB for NoSQL account, databas
## Prerequisites -- [!INCLUDE [quickstarts-free-trial-note](../../../../../includes/quickstarts-free-trial-note.md)]
+- [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
- This script requires Azure CLI version 2.0.73 or later.
cosmos-db Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/nosql/create.md
Last updated 02/21/2022
The script in this article demonstrates creating an API for NoSQL database and container. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
The script in this article demonstrates creating a API for NoSQL database and co
## Sample script ### Run the script
The script in this article demonstrates creating a API for NoSQL database and co
## Clean up resources ```azurecli az group delete --name $resourceGroup
cosmos-db Lock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/nosql/lock.md
The script in this article demonstrates performing resource lock operations for
> > Resource locks do not work for changes made by users connecting using any Azure Cosmos DB SDK, any tools that connect via account keys, or the Azure Portal unless the Azure Cosmos DB account is first locked with the `disableKeyBasedMetadataWriteAccess` property enabled. To learn more about how to enable this property see, [Preventing changes from SDKs](../../../role-based-access-control.md#prevent-sdk-changes). [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
The script in this article demonstrates performing resource lock operations for
## Sample script ### Run the script
The script in this article demonstrates performing resource lock operations for
## Clean up resources ```azurecli az group delete --name $resourceGroup
cosmos-db Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/nosql/serverless.md
Last updated 02/21/2022
The script in this article demonstrates creating an API for NoSQL serverless account with database and container. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
The script in this article demonstrates creating a API for NoSQL serverless acco
## Sample script ### Run the script
The script in this article demonstrates creating a API for NoSQL serverless acco
## Clean up resources ```azurecli az group delete --name $resourceGroup
cosmos-db Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/nosql/throughput.md
Last updated 02/21/2022
The script in this article creates an API for NoSQL database with shared throughput and an API for NoSQL container with dedicated throughput, then updates the throughput for both the database and container. The script then migrates from standard to autoscale throughput and then reads the value of the autoscale throughput after it has been migrated. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
The script in this article creates a API for NoSQL database with shared throughp
## Sample script ### Run the script
The script in this article creates a API for NoSQL database with shared throughp
## Clean up resources ```azurecli az group delete --name $resourceGroup
cosmos-db Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/table/autoscale.md
The script in this article creates an Azure Cosmos DB for Table account and tabl
## Prerequisites -- [!INCLUDE [quickstarts-free-trial-note](../../../../../includes/quickstarts-free-trial-note.md)]
+- [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
- This script requires Azure CLI version 2.12.1 or later.
cosmos-db Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/table/create.md
Last updated 02/21/2022
The script in this article demonstrates creating an API for Table table. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
The script in this article demonstrates creating a API for Table table.
## Sample script ### Run the script
The script in this article demonstrates creating a API for Table table.
## Clean up resources ```azurecli az group delete --name $resourceGroup
cosmos-db Lock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/table/lock.md
The script in this article demonstrates performing resource lock operations for
## Prerequisites -- You need an [Azure Cosmos DB for Table account, database, and table created](create.md). [!INCLUDE [quickstarts-free-trial-note](../../../../../includes/quickstarts-free-trial-note.md)]
+- You need an [Azure Cosmos DB for Table account, database, and table created](create.md). [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
> [!IMPORTANT] > To create or delete resource locks, you must have the **Owner** role in your Azure subscription.
cosmos-db Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/table/serverless.md
The script in this article creates an Azure Cosmos DB for Table serverless accou
## Prerequisites -- [!INCLUDE [quickstarts-free-trial-note](../../../../../includes/quickstarts-free-trial-note.md)]
+- [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
- This script requires Azure CLI version 2.12.1 or later.
cosmos-db Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/table/throughput.md
Last updated 02/21/2022
The script in this article creates an API for Table table, then updates the throughput of the table. The script then migrates from standard to autoscale throughput and then reads the value of the autoscale throughput after it has been migrated. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
The script in this article creates a API for Table table then updates the throug
## Sample script ### Run the script
The script in this article creates a API for Table table then updates the throug
## Clean up resources ```azurecli az group delete --name $resourceGroup
cosmos-db Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/cassandra/autoscale.md
# Create a keyspace and table with autoscale for Azure Cosmos DB - API for Cassandra [!INCLUDE[Cassandra](../../../includes/appliesto-cassandra.md)] This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed. If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell).
cosmos-db Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/cassandra/create.md
# Create a keyspace and table for Azure Cosmos DB - API for Cassandra [!INCLUDE[Cassandra](../../../includes/appliesto-cassandra.md)] This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed. If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell).
cosmos-db List Get https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/cassandra/list-get.md
# List and get keyspaces and tables for Azure Cosmos DB - API for Cassandra [!INCLUDE[Cassandra](../../../includes/appliesto-cassandra.md)] This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed. If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell).
cosmos-db Lock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/cassandra/lock.md
# Create a resource lock for Azure Cosmos DB Cassandra API keyspace and table using Azure PowerShell [!INCLUDE[Cassandra](../../../includes/appliesto-cassandra.md)] This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed. If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell).
cosmos-db Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/cassandra/throughput.md
# Throughput (RU/s) operations with PowerShell for a keyspace or table for Azure Cosmos DB - API for Cassandra [!INCLUDE[Cassandra](../../../includes/appliesto-cassandra.md)] This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed. If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell).
cosmos-db Account Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/common/account-update.md
# Update consistency level for an Azure Cosmos DB account with PowerShell [!INCLUDE[NoSQL, MongoDB, Cassandra, Gremlin, Table](../../../includes/appliesto-nosql-mongodb-cassandra-gremlin-table.md)] This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed. If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell).
cosmos-db Failover Priority Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/common/failover-priority-update.md
# Change failover priority or trigger failover for an Azure Cosmos DB account with single write region by using PowerShell [!INCLUDE[NoSQL, MongoDB, Cassandra, Gremlin, Table](../../../includes/appliesto-nosql-mongodb-cassandra-gremlin-table.md)] This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed. If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell).
cosmos-db Firewall Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/common/firewall-create.md
# Create an Azure Cosmos DB account with IP Firewall [!INCLUDE[NoSQL, MongoDB, Cassandra, Gremlin, Table](../../../includes/appliesto-nosql-mongodb-cassandra-gremlin-table.md)] This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed. If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell).
cosmos-db Keys Connection Strings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/common/keys-connection-strings.md
[!INCLUDE[NoSQL, MongoDB, Cassandra, Gremlin, Table](../../../includes/appliesto-nosql-mongodb-cassandra-gremlin-table.md)] This sample requires the Az PowerShell module 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed. If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell).
cosmos-db Update Region https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/common/update-region.md
This PowerShell script updates the Azure regions that an Azure Cosmos DB account uses. You can use this script to add an Azure region or change region failover order. ## Prerequisites
cosmos-db Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/gremlin/autoscale.md
# Create a database and graph with autoscale for Azure Cosmos DB - API for Gremlin [!INCLUDE[Gremlin](../../../includes/appliesto-gremlin.md)] This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed. If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell).
cosmos-db Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/gremlin/create.md
# Create a database and graph for Azure Cosmos DB - API for Gremlin [!INCLUDE[Gremlin](../../../includes/appliesto-gremlin.md)] This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed. If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell).
cosmos-db List Get https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/gremlin/list-get.md
This PowerShell script lists or gets specific Azure Cosmos DB accounts, API for Gremlin databases, and API for Gremlin graphs. ## Prerequisites
cosmos-db Lock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/gremlin/lock.md
# Create a resource lock for Azure Cosmos DB for Gremlin database and graph using Azure PowerShell [!INCLUDE[Gremlin](../../../includes/appliesto-gremlin.md)] This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed. If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell).
cosmos-db Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/gremlin/throughput.md
# Throughput (RU/s) operations with PowerShell for a database or graph for Azure Cosmos DB - API for Gremlin [!INCLUDE[Gremlin](../../../includes/appliesto-gremlin.md)] This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed. If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell).
cosmos-db Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/mongodb/autoscale.md
# Create a database and collection with autoscale for Azure Cosmos DB - API for MongoDB [!INCLUDE[MongoDB](~/reusable-content/ce-skilling/azure/includes/cosmos-db/includes/appliesto-mongodb.md)] This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed. If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell).
cosmos-db Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/mongodb/create.md
# Create a database and collection for Azure Cosmos DB - API for MongoDB [!INCLUDE[MongoDB](~/reusable-content/ce-skilling/azure/includes/cosmos-db/includes/appliesto-mongodb.md)] This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed. If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell).
cosmos-db List Get https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/mongodb/list-get.md
# List and get databases and graphs for Azure Cosmos DB - API for MongoDB [!INCLUDE[MongoDB](~/reusable-content/ce-skilling/azure/includes/cosmos-db/includes/appliesto-mongodb.md)] This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed. If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell).
cosmos-db Lock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/mongodb/lock.md
# Create a resource lock for Azure Cosmos DB MongoDB API database and collection using Azure PowerShell [!INCLUDE[MongoDB](~/reusable-content/ce-skilling/azure/includes/cosmos-db/includes/appliesto-mongodb.md)] This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed. If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell).
cosmos-db Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/mongodb/throughput.md
# Throughput (RU/s) operations with PowerShell for a database or collection for Azure Cosmos DB for MongoDB [!INCLUDE[MongoDB](~/reusable-content/ce-skilling/azure/includes/cosmos-db/includes/appliesto-mongodb.md)] This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed. If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell).
cosmos-db Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/nosql/autoscale.md
# Create a database and container with autoscale for Azure Cosmos DB - API for NoSQL [!INCLUDE[NoSQL](../../../includes/appliesto-nosql.md)] This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed. If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell).
cosmos-db Create Index None https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/nosql/create-index-none.md
# Create a container with indexing turned off in an Azure Cosmos DB account using PowerShell [!INCLUDE[NoSQL](../../../includes/appliesto-nosql.md)] This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed. If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell).
cosmos-db Create Large Partition Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/nosql/create-large-partition-key.md
# Create a container with a large partition key in an Azure Cosmos DB account using PowerShell [!INCLUDE[NoSQL](../../../includes/appliesto-nosql.md)] This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed. If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell).
cosmos-db Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/nosql/create.md
# Create a database and container for Azure Cosmos DB - API for NoSQL [!INCLUDE[NoSQL](../../../includes/appliesto-nosql.md)] This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed. If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell).
cosmos-db List Get https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/nosql/list-get.md
# List and get databases and containers for Azure Cosmos DB - API for NoSQL [!INCLUDE[NoSQL](../../../includes/appliesto-nosql.md)] This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed. If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell).
cosmos-db Lock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/nosql/lock.md
# Create a resource lock for Azure Cosmos DB for NoSQL database and container using Azure PowerShell [!INCLUDE[NoSQL](../../../includes/appliesto-nosql.md)] This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed. If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell).
cosmos-db Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/nosql/throughput.md
# Throughput (RU/s) operations with PowerShell for a database or container for Azure Cosmos DB for NoSQL [!INCLUDE[NoSQL](../../../includes/appliesto-nosql.md)] This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed. If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell).
cosmos-db Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/table/autoscale.md
# Create a table with autoscale for Azure Cosmos DB - API for Table [!INCLUDE[Table](../../../includes/appliesto-table.md)] This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed. If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell).
cosmos-db Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/table/create.md
# Create a table for Azure Cosmos DB - API for Table [!INCLUDE[Table](../../../includes/appliesto-table.md)] This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed. If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell).
cosmos-db List Get https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/table/list-get.md
# List and get tables for Azure Cosmos DB - API for Table [!INCLUDE[Table](../../../includes/appliesto-table.md)] This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed. If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell).
cosmos-db Lock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/table/lock.md
# Create a resource lock for Azure Cosmos DB Table API table using Azure PowerShell [!INCLUDE[Table](../../../includes/appliesto-table.md)] This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed. If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell).
cosmos-db Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/powershell/table/throughput.md
# Throughput (RU/s) operations with PowerShell for a table for Azure Cosmos DB - API for Table [!INCLUDE[Table](../../../includes/appliesto-table.md)] This sample requires Azure PowerShell Az 5.4.0 or later. Run `Get-Module -ListAvailable Az` to see which versions are installed. If you need to install, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell).
cosmos-db Secure Access To Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/secure-access-to-data.md
As a database service, Azure Cosmos DB enables you to search, select, modify, an
Each multi-model API (SQL, MongoDB, Gremlin, Cassandra, or Table) provides different language SDKs that contain methods to search and delete data based on custom predicates. You can also enable the [time to live (TTL)](time-to-live.md) feature to delete data automatically after a specified period, without incurring any more cost. ## Next steps
cosmos-db How To Use Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/how-to-use-java.md
catch (Exception e)
} ``` ## Next steps
cost-management-billing Understand Usage Details Fields https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/automate/understand-usage-details-fields.md
MPA accounts have all MCA terms, in addition to the MPA terms, as described in t
| Provider | MCA | Identifier for product category or Line of Business. For example, Azure, Microsoft 365, and AWS⁴. | | PublisherId | MCA | The ID of the publisher. It's only available after the invoice is generated. | | PublisherName | All | The name of the publisher. For first-party services, the value should be listed as `Microsoft` or `Microsoft Corporation`. |
-| PublisherType | All | Supported values: **Microsoft**, **Azure**, **AWS**⁴, **Marketplace**. Values are `Microsoft` for MCA accounts and `Azure` for EA and pay-as-you-go accounts. |
+| PublisherType | All | Supported values: **Microsoft**, **Azure**, **AWS**⁴, **Marketplace**. For MCA accounts, the value can be `Microsoft` for first-party charges and `Marketplace` for third-party charges. For EA and pay-as-you-go accounts, the value is `Azure`. |
| Quantity³ | All | The number of units used by the given product or service for a given day. | | ResellerName | MPA | The name of the reseller associated with the subscription. | | ResellerMpnId | MPA | ID for the reseller associated with the subscription. |
cost-management-billing Allocate Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/allocate-costs.md
Allocated costs appear in cost analysis. They appear as other items associated w
## Prerequisites - Cost allocation currently only supports customers with:
- - A [Microsoft Customer Agreement](https://azure.microsoft.com/pricing/purchase-options/microsoft-customer-agreement/) (MCA) in the Enterprise motion where you buy Azure services through a Microsoft representative. Also called an MCA enterprise agreement.
- - A [Microsoft Customer Agreement](https://azure.microsoft.com/pricing/purchase-options/microsoft-customer-agreement/) that you bought through the Azure website. Also called an MCA individual agreement.
+ - A [Microsoft Customer Agreement](https://azure.microsoft.com/pricing/purchase-options/microsoft-customer-agreement/) (MCA) in the Enterprise motion where you buy Azure services through a Microsoft representative. Also called an MCA-E agreement.
+ - A [Microsoft Customer Agreement](https://azure.microsoft.com/pricing/purchase-options/microsoft-customer-agreement/) that you bought through the Azure website. Also called an MCA-online agreement.
- An [Enterprise Agreement (EA)](https://azure.microsoft.com/pricing/enterprise-agreement/). - To create or manage a cost allocation rule, you must use an Enterprise Administrator account for [Enterprise Agreements](../manage/understand-ea-roles.md). Or you must be a [Billing account](../manage/understand-mca-roles.md) owner for Microsoft Customer Agreements.
cost-management-billing Pricing Calculator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/pricing-calculator.md
The Azure pricing calculator helps you turn anticipated usage into an estimated cost, which makes it easier to plan and budget for your Azure usage. Whether you're a small business owner or an enterprise-level organization, the web-based tool helps you make informed decisions about your cloud spending. When you log in, the calculator also provides a cost estimate for your Azure consumption with your negotiated or discounted prices. This article explains how to use the Azure pricing calculator. >[!NOTE]
-> Prices shown in this article are examples to help you understand how the calculator works. They are not actual prices.
+> - You can also use Azure [Total Cost of Ownership (TCO) Calculator](https://azure.microsoft.com/pricing/tco/calculator/) to estimate the cost savings you can achieve by migrating your application workloads to Microsoft Azure.
+> - Prices shown in this article are examples to help you understand how the calculator works. They are not actual prices.
## Access the Azure pricing calculator
cost-management-billing Quick Create Budget Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/quick-create-budget-bicep.md
Budgets in Cost Management help you plan for and drive organizational accountability. With budgets, you can account for the Azure services you consume or subscribe to during a specific period. They help you inform others about their spending to proactively manage costs and monitor how spending progresses over time. When the budget thresholds you've created are exceeded, notifications are triggered. None of your resources are affected and your consumption isn't stopped. You can use budgets to compare and track spending as you analyze costs. This quickstart shows you how to create a budget named 'MyBudget' using Bicep. ## Prerequisites
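Once the Bicep file from this quickstart is saved locally, deployment is a single subscription-scope command; a sketch, assuming a hypothetical `main.bicep` file name:

```powershell
# Budgets (Microsoft.Consumption/budgets) deploy at subscription scope, not resource-group scope.
New-AzDeployment `
    -Name "budget-deployment" `
    -Location "eastus" `
    -TemplateFile "./main.bicep"
```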
cost-management-billing Quick Create Budget Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/quick-create-budget-template.md
Budgets in Cost Management help you plan for and drive organizational accountability. With budgets, you can account for the Azure services you consume or subscribe to during a specific period. They help you inform others about their spending to proactively manage costs, and to monitor how spending progresses over time. When the budget thresholds you've created are exceeded, notifications are triggered. None of your resources are affected and your consumption isn't stopped. You can use budgets to compare and track spending as you analyze costs. This quickstart shows you how to create a budget using three different Azure Resource Manager templates (ARM template). If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button for one of the following templates. The template will open in the Azure portal.
cost-management-billing Reservation Utilization Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/reservation-utilization-alerts.md
You can create a reservation utilization alert rule at any of the following scop
| Supported agreement | Alert rule scope | Required role | Supported actions | | | | | | | Enterprise Agreement | Billing account | Enterprise admin, enterprise read only| Create, read, update, delete |
|• Microsoft Customer Agreement (MCA) in the Enterprise motion where you buy Azure services through a Microsoft representative. Also called an MCA enterprise agreement.<br><br>• Microsoft Customer Agreement (MCA) that you bought through the Azure website. Also called an MCA individual agreement. | Billing profile |Billing profile owner, billing profile contributor, billing profile reader, and invoice manager | Create, read, update, delete|
+|• Microsoft Customer Agreement (MCA) in the Enterprise motion where you buy Azure services through a Microsoft representative. Also called an MCA-E agreement.<br><br>• Microsoft Customer Agreement (MCA) that you bought through the Azure website. Also called an MCA-online agreement. | Billing profile |Billing profile owner, billing profile contributor, billing profile reader, and invoice manager | Create, read, update, delete|
| Microsoft Partner Agreement (MPA) | Customer scope | Global admin, admin agent | Create, read, update, delete | For more information, see [scopes and roles](understand-work-scopes.md).
cost-management-billing Tutorial Export Acm Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/tutorial-export-acm-data.md
When you create an export programmatically, you must manually register the `Micr
Start by preparing your environment for Azure PowerShell: > [!IMPORTANT] > While the **Az.CostManagement** PowerShell module is in preview, you must install it separately
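A sketch of those two preparation steps; the resource provider namespace is an assumption inferred from the export feature (the excerpt above truncates it), so confirm it against the article:

```powershell
# Install the preview Az.CostManagement module separately, as noted above.
Install-Module -Name Az.CostManagement -AllowPrerelease

# Register the exports resource provider; namespace assumed, verify against the article.
Register-AzResourceProvider -ProviderNamespace "Microsoft.CostManagementExports"
```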
cost-management-billing Download Azure Invoice Daily Usage Date https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/download-azure-invoice-daily-usage-date.md
Only certain roles have permission to get billing invoice, like the Account Admi
If you have a Microsoft Customer Agreement, you must be a billing profile Owner, Contributor, Reader, or Invoice manager to view billing information. To learn more about billing roles for Microsoft Customer Agreements, see [Billing profile roles and tasks](understand-mca-roles.md#billing-profile-roles-and-tasks). ## Download your Azure invoices (.pdf)
cost-management-billing Ea Portal Agreements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/ea-portal-agreements.md
As of August 1, 2019, new opt-out forms aren't accepted for Azure commercial cus
**Transferred** - Transferred status is applied to enrollments that have their associated accounts and services transferred to a new enrollment. Enrollments don't automatically transfer if a new enrollment number is generated during renewal. The prior enrollment number must be included in the customer's renewal request for an automatic transfer.
-**Manually Terminated** - All the subscriptions and accounts under the enrollment are deactivated. Reactivation isn't supported for terminated enrollments. For direct EA, only a non-read-only enterprise administrator can request reactivation with a support request. For indirect EA, the partner can submit a request in the Volume Licensing Center. However, to terminate enrollments with Expired status, the partner must request it using Azure support.
+**Manually Terminated** - All the subscriptions and accounts under the enrollment are deactivated. Reactivation isn't supported for terminated enrollments. For direct EA, only a non-read-only enterprise administrator can request termination with a support request. For indirect EA, the partner can submit a request in the Volume Licensing Center. However, to terminate enrollments with Expired status, the partner must request it using Azure support.
## Partner markup
cost-management-billing Grant Access To Create Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/grant-access-to-create-subscription.md
As an Azure customer with an [Enterprise Agreement (EA)](https://azure.microsoft
> - Unless you have a specific need to use the legacy APIs, you should use the information for the [latest GA version](programmatically-create-subscription-enterprise-agreement.md) about the latest API version. **See [Enrollment Account Role Assignments - Put](/rest/api/billing/2019-10-01-preview/enrollment-account-role-assignments/put) to grant permission to create EA subscriptions with the latest API**. > - If you're migrating to use the newer APIs, you must grant owner permissions again using [2019-10-01-preview](/rest/api/billing/2019-10-01-preview/enrollment-account-role-assignments/put). Your previous configuration that uses the following APIs doesn't automatically convert for use with newer APIs. ## Grant access
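A rough sketch of calling that PUT with `Invoke-AzRestMethod`; the path shape, role definition ID, and payload fields here are assumptions to verify against the linked API reference:

```powershell
# Hypothetical sketch: grant subscription-creation rights on an enrollment account
# via the 2019-10-01-preview role assignment API. Verify path and body against the docs.
$body = @{
    properties = @{
        principalId       = "<user or service principal object ID>"
        principalTenantId = "<tenant ID>"
        roleDefinitionId  = "<role definition resource ID from the API reference>"
    }
} | ConvertTo-Json -Depth 5

Invoke-AzRestMethod `
    -Method PUT `
    -Path "/providers/Microsoft.Billing/enrollmentAccounts/<enrollmentAccountName>/billingRoleAssignments/<newGuid>?api-version=2019-10-01-preview" `
    -Payload $body
```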
cost-management-billing Mca Request Billing Ownership https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/mca-request-billing-ownership.md
Previously updated : 03/26/2024 Last updated : 06/26/2024
Before you transfer billing products, read [Supplemental information about trans
>[!IMPORTANT] > - When you have a savings plan purchased under an Enterprise Agreement enrollment that was bought in a non-USD currency, you can't transfer it. Instead, you must use it in the original enrollment. However, you can change the scope of the savings plan so that it's used by other subscriptions. For more information, see [Change the savings plan scope](../savings-plan/manage-savings-plan.md#change-the-savings-plan-scope). You can view your billing currency in the Azure portal on the enrollment properties page. For more information, see [To view enrollment properties](direct-ea-administration.md#to-view-enrollment-properties). > - When you transfer subscriptions, cost and usage data for your Azure products aren't accessible after the transfer. We recommend that you [download your cost and usage data](../understand/download-azure-daily-usage.md) and invoices before you transfer subscriptions.
-> - When there's is a currency change during or after an EA enrollment transfer, reservations paid for monthly are canceled for the source enrollment. Cancellation happens at the time of the next monthly payment for an individual reservation. The cancellation is intentional and only affects monthly, not up front, reservation purchases. For more information, see [Transfer Azure Enterprise enrollment accounts and subscriptions](ea-transfers.md#prerequisites-1).
+> - When there's a currency change during or after a transfer, reservations paid for monthly are canceled. Cancellation happens at the time of the next monthly payment for an individual reservation. The cancellation is intentional and only affects monthly, not up front, reservation purchases. For more information, see [Transfer Azure Enterprise enrollment accounts and subscriptions](ea-transfers.md#prerequisites-1).
Before you begin, make sure that the people involved in the product transfer have the required permissions.
cost-management-billing Programmatically Create Subscription Enterprise Agreement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/programmatically-create-subscription-enterprise-agreement.md
In this article, you learn how to create subscriptions programmatically using Az
When you create an Azure subscription programmatically, it falls under the terms of the agreement where you receive Azure services from Microsoft or a certified seller. For more information, see [Microsoft Azure Legal Information](https://azure.microsoft.com/support/legal/). You can't create support plans programmatically. You can buy a new support plan or upgrade one in the Azure portal. Navigate to **Help + support** and then at the top of the page, select **Choose the right support plan**.
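A sketch of the subscription-creation call itself, using the Az.Subscription module; the billing scope format and IDs are placeholders to adapt for an EA enrollment account:

```powershell
# Create an EA subscription via a subscription alias (placeholder IDs).
New-AzSubscriptionAlias `
    -AliasName "my-new-subscription" `
    -SubscriptionName "My New Subscription" `
    -BillingScope "/providers/Microsoft.Billing/billingAccounts/<billingAccountId>/enrollmentAccounts/<enrollmentAccountId>" `
    -Workload "Production"
```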
cost-management-billing Programmatically Create Subscription Microsoft Customer Agreement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/programmatically-create-subscription-microsoft-customer-agreement.md
If you need to create an Azure MCA subscription across Microsoft Entra tenants,
When you create an Azure subscription programmatically, that subscription is governed by the agreement under which you obtained Azure services from Microsoft or an authorized reseller. For more information, see [Microsoft Azure Legal Information](https://azure.microsoft.com/support/legal/). You can't create support plans programmatically. You can buy a new support plan or upgrade one in the Azure portal. Navigate to **Help + support** and then at the top of the page, select **Choose the right support plan**.
cost-management-billing Programmatically Create Subscription Microsoft Partner Agreement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/programmatically-create-subscription-microsoft-partner-agreement.md
In this article, you learn how to create subscriptions programmatically using Az
When you create an Azure subscription programmatically, that subscription is governed by the agreement under which you obtained Azure services from Microsoft or an authorized reseller. For more information, see [Microsoft Azure Legal Information](https://azure.microsoft.com/support/legal/). You can't create support plans programmatically. You can buy a new support plan or upgrade one in the Azure portal. Navigate to **Help + support** and then at the top of the page, select **Choose the right support plan**.
cost-management-billing Programmatically Create Subscription Preview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/programmatically-create-subscription-preview.md
Azure customers with a billing account for the following agreement types can cre
When you create an Azure subscription programmatically, the subscription is governed by the agreement under which you obtained Azure services from Microsoft or an authorized reseller. For more information, see [Microsoft Azure Legal Information](https://azure.microsoft.com/support/legal/). You can't create support plans programmatically. You can buy a new support plan or upgrade one in the Azure portal. Navigate to **Help + support** and then at the top of the page, select **Choose the right support plan**.
cost-management-billing Subscription Transfer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/subscription-transfer.md
Previously updated : 06/18/2024 Last updated : 06/27/2024 # customer intent: As a billing administrator, I want to learn about transferring subscriptions so that I can transfer one.
As you begin to plan your product transfer, consider the information needed to a
- What's the product's current offer type and what do you want to transfer it to? - Microsoft Online Service Program (MOSP), also known as pay-as-you-go (PAYG) - Previous Azure offer in CSP
- - New Azure offer in CSP, also referred to as Azure Plan with a Microsoft Partner Agreement (MPA)
- Enterprise Agreement (EA)
- - Microsoft Customer Agreement (MCA) in the Enterprise motion where you buy Azure services through a Microsoft representative. Also called an MCA enterprise agreement.
- - Microsoft Customer Agreement (MCA) that you bought through the Azure website. Also called an MCA individual agreement.
+ - Microsoft Customer Agreement in the Enterprise motion (MCA-E) where you buy Azure services through a Microsoft representative. Also called an MCA enterprise agreement.
+ - Microsoft Customer Agreement that you bought through the Azure website (MCA-online).
+ - Cloud Solution Provider - CSP (MCA managed by partner)
- Others like MSDN, EOPEN, Azure Pass, and Free Trial - Do you have the required permissions on the product to accomplish a transfer? Specific permission needed for each transfer type is listed in the following product transfer support table. - Only the billing administrator of an account can transfer subscription ownership.
Dev/Test products aren't shown in the following table. Transfers for Dev/Test pr
| Source (current) product agreement type | Destination (future) product agreement type | Notes | | | | | | EA | MOSP (PAYG) | • Transfer from an EA enrollment to a MOSP subscription requires a [billing support ticket](https://azure.microsoft.com/support/create-ticket/).<br><br> • Reservations and savings plans don't automatically transfer and transferring them isn't supported. |
-| EA | MCA - individual | • For details, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md).<br><br> • Self-service reservation and savings plan transfers with no currency change are supported. <br><br> • You can't transfer a savings plan purchased under an Enterprise Agreement enrollment that was bought in a non-USD currency. However, you can [change the savings plan scope](../savings-plan/manage-savings-plan.md#change-the-savings-plan-scope) so that it applies to other subscriptions. |
+| EA | MCA-online | • For details, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md).<br><br> • Self-service reservation and savings plan transfers with no currency change are supported. <br><br> • You can't transfer a savings plan purchased under an Enterprise Agreement enrollment that was bought in a non-USD currency. However, you can [change the savings plan scope](../savings-plan/manage-savings-plan.md#change-the-savings-plan-scope) so that it applies to other subscriptions. |
| EA | EA | • Transferring between EA enrollments requires a [billing support ticket](https://azure.microsoft.com/support/create-ticket/).<br><br> • Reservations and savings plans automatically get transferred during EA to EA transfers, except in transfers with a currency change.<br><br> • Transfer within the same enrollment is the same action as changing the account owner. For details, see [Change Azure subscription or account ownership](direct-ea-administration.md#change-azure-subscription-or-account-ownership). |
-| EA | MCA - Enterprise | • Transferring all enrollment products is completed as part of the MCA transition process from an EA. For more information, see [Complete Enterprise Agreement tasks in your billing account for a Microsoft Customer Agreement](mca-enterprise-operations.md).<br><br> • If you want to transfer specific products but not all of the products in an enrollment, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md). <br><br>• Self-service reservation transfers with no currency change are supported. When there's is a currency change during or after an enrollment transfer, reservations paid for monthly are canceled for the source enrollment. Cancellation happens at the time of the next monthly payment for an individual reservation. The cancellation is intentional and only affects monthly reservation purchases. For more information, see [Transfer Azure Enterprise enrollment accounts and subscriptions](../manage/ea-transfers.md#prerequisites-1).<br><br> • You can't transfer a savings plan purchased under an Enterprise Agreement enrollment that was bought in a non-USD currency. You can [change the savings plan scope](../savings-plan/manage-savings-plan.md#change-the-savings-plan-scope) so that it applies to other subscriptions. |
-| EA | MPA | • Transfer is only allowed for direct EA to MPA. A direct EA is signed between Microsoft and an EA customer.<br><br>• Only CSP direct bill partners certified as an [Azure Expert Managed Services Provider (MSP)](https://partner.microsoft.com/membership/azure-expert-msp) can request to transfer Azure products for their customers that have a Direct Enterprise Agreement (EA). For more information, see [Get billing ownership of Azure subscriptions to your MPA account](mpa-request-ownership.md). Product transfers are allowed only for customers that accepted a Microsoft Customer Agreement (MCA) and purchased an Azure plan with the CSP Program.<br><br> • Transfer from EA Government to MPA isn't supported.<br><br>• There are limitations and restrictions. For more information, see [Transfer EA subscriptions to a CSP partner](transfer-subscriptions-subscribers-csp.yml). |
-| MCA - individual | MOSP (PAYG) | • Microsoft doesn't support the transfer, so you must move resources yourself. For more information, see [Move resources to a new resource group or subscription](../../azure-resource-manager/management/move-resource-group-and-subscription.md).<br><br> • Reservations and savings plans don't automatically transfer and transferring them isn't supported. |
-| MCA - individual | MCA - individual | • For details, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md).<br><br> • Self-service reservation and savings plan transfers are supported. |
-| MCA - individual | EA | • Microsoft doesn't support the transfer, so you must move resources yourself. For more information, see [Move resources to a new resource group or subscription](../../azure-resource-manager/management/move-resource-group-and-subscription.md).<br><br> • Reservations and savings plans don't automatically transfer and transferring them isn't supported. |
-| MCA - individual | MCA - Enterprise | • For details, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md).<br><br>• Self-service reservation and savings plan transfers are supported. |
-| MCA - individual | MPA | • Microsoft doesn't support the transfer, so you must move resources yourself. For more information, see [Move resources to a new resource group or subscription](../../azure-resource-manager/management/move-resource-group-and-subscription.md).<br><br> • Reservations and savings plans don't automatically transfer and transferring them isn't supported. |
-| MCA - Enterprise | EA | • Microsoft doesn't support the transfer, so you must move resources yourself. For more information, see [Move resources to a new resource group or subscription](../../azure-resource-manager/management/move-resource-group-and-subscription.md).<br><br> • Reservations and savings plans don't automatically transfer and transferring them isn't supported. |
-| MCA - Enterprise | MOSP | • Microsoft doesn't support the transfer, so you must move resources yourself. For more information, see [Move resources to a new resource group or subscription](../../azure-resource-manager/management/move-resource-group-and-subscription.md).<br><br> • Reservations and savings plans don't automatically transfer and transferring them isn't supported. |
-| MCA - Enterprise | MCA - individual | • For details, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md).<br><br> • Self-service reservation and savings plan transfers are supported. |
-| MCA - Enterprise | MCA - Enterprise | • For details, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md).<br><br> • Self-service reservation and savings plan transfers are supported. |
-| MCA - Enterprise | MPA | • Only CSP direct bill partners certified as an [Azure Expert Managed Services Provider (MSP)](https://partner.microsoft.com/membership/azure-expert-msp) can request to transfer Azure products for their customers that have a Microsoft Customer Agreement with a Microsoft representative. For more information, see [Get billing ownership of Azure subscriptions to your MPA account](mpa-request-ownership.md). Product transfers are allowed only for customers that accepted a Microsoft Customer Agreement (MCA) and purchased an Azure plan with the CSP Program.<br><br> • Self-service reservation and savings plan transfers are supported.<br><br> • There are limitations and restrictions. For more information, see [Transfer EA subscriptions to a CSP partner](transfer-subscriptions-subscribers-csp.yml#transfer-ea-or-microsoft-customer-agreement-(mca)-enterprise-subscriptions-to-a-csp-partner). |
+| EA | MCA-E | • Transferring all enrollment products is completed as part of the MCA transition process from an EA. For more information, see [Complete Enterprise Agreement tasks in your billing account for a Microsoft Customer Agreement](mca-enterprise-operations.md).<br><br> • If you want to transfer specific products but not all of the products in an enrollment, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md). <br><br>• Self-service reservation transfers with no currency change are supported. When there's a currency change during or after an enrollment transfer, reservations paid for monthly are canceled for the source enrollment. Cancellation happens at the time of the next monthly payment for an individual reservation. The cancellation is intentional and only affects monthly reservation purchases. For more information, see [Transfer Azure Enterprise enrollment accounts and subscriptions](../manage/ea-transfers.md#prerequisites-1).<br><br> • You can't transfer a savings plan purchased under an Enterprise Agreement enrollment that was bought in a non-USD currency. You can [change the savings plan scope](../savings-plan/manage-savings-plan.md#change-the-savings-plan-scope) so that it applies to other subscriptions. |
+| EA | CSP (MCA managed by partner) | • Transfer is only allowed for direct EA to CSP (MCA managed by partner). A direct EA is signed between Microsoft and an EA customer.<br><br>• Only CSP direct bill partners certified as an [Azure Expert Managed Services Provider (MSP)](https://partner.microsoft.com/membership/azure-expert-msp) can request to transfer Azure products for their customers that have a Direct Enterprise Agreement (EA). For more information, see [Get billing ownership of Azure subscriptions to your MPA account](mpa-request-ownership.md). Product transfers are allowed only for customers that accepted a Microsoft Customer Agreement (MCA) and purchased an Azure plan with the CSP Program.<br><br> • Transfer from EA Government to CSP (MCA managed by partner) isn't supported.<br><br>• There are limitations and restrictions. For more information, see [Transfer EA subscriptions to a CSP partner](transfer-subscriptions-subscribers-csp.yml). |
+| MCA-online | MOSP (PAYG) | • Microsoft doesn't support the transfer, so you must move resources yourself. For more information, see [Move resources to a new resource group or subscription](../../azure-resource-manager/management/move-resource-group-and-subscription.md).<br><br> • Reservations and savings plans don't automatically transfer and transferring them isn't supported. |
+| MCA-online | MCA-online | • For details, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md).<br><br> • Self-service reservation and savings plan transfers are supported. |
+| MCA-online | EA | • Microsoft doesn't support the transfer, so you must move resources yourself. For more information, see [Move resources to a new resource group or subscription](../../azure-resource-manager/management/move-resource-group-and-subscription.md).<br><br> • Reservations and savings plans don't automatically transfer and transferring them isn't supported. |
+| MCA-online | MCA-E | • For details, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md).<br><br>• Self-service reservation and savings plan transfers are supported. |
+| MCA-online | CSP (MCA managed by partner) | • Microsoft doesn't support the transfer, so you must move resources yourself. For more information, see [Move resources to a new resource group or subscription](../../azure-resource-manager/management/move-resource-group-and-subscription.md).<br><br> • Reservations and savings plans don't automatically transfer and transferring them isn't supported. |
+| MCA-E | EA | • Microsoft doesn't support the transfer, so you must move resources yourself. For more information, see [Move resources to a new resource group or subscription](../../azure-resource-manager/management/move-resource-group-and-subscription.md).<br><br> • Reservations and savings plans don't automatically transfer and transferring them isn't supported. |
+| MCA-E | MOSP | • Microsoft doesn't support the transfer, so you must move resources yourself. For more information, see [Move resources to a new resource group or subscription](../../azure-resource-manager/management/move-resource-group-and-subscription.md).<br><br> • Reservations and savings plans don't automatically transfer and transferring them isn't supported. |
+| MCA-E | MCA-online | • For details, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md).<br><br> • Self-service reservation and savings plan transfers are supported. |
+| MCA-E | MCA-E | • For details, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md).<br><br> • Self-service reservation and savings plan transfers are supported. |
+| MCA-E | CSP (MCA managed by partner) | • Only CSP direct bill partners certified as an [Azure Expert Managed Services Provider (MSP)](https://partner.microsoft.com/membership/azure-expert-msp) can request to transfer Azure products for their customers that have a Microsoft Customer Agreement with a Microsoft representative. For more information, see [Get billing ownership of Azure subscriptions to your MPA account](mpa-request-ownership.md). Product transfers are allowed only for customers that accepted a Microsoft Customer Agreement (MCA) and purchased an Azure plan with the CSP Program.<br><br> • Self-service reservation and savings plan transfers are supported.<br><br> • There are limitations and restrictions. For more information, see [Transfer EA subscriptions to a CSP partner](transfer-subscriptions-subscribers-csp.yml#transfer-ea-or-microsoft-customer-agreement-(mca)-enterprise-subscriptions-to-a-csp-partner). |
| Previous Azure offer in CSP | Previous Azure offer in CSP | • Requires a [billing support ticket](https://azure.microsoft.com/support/create-ticket/).<br><br> • Reservations don't automatically transfer and transferring them isn't supported. |
-| Previous Azure offer in CSP | MPA | For details, see [Transfer a customer's Azure subscriptions to a different CSP (under an Azure plan)](/partner-center/transfer-azure-subscriptions-under-azure-plan). |
-| MPA | EA | • Automatic transfer isn't supported. Any transfer requires resources to move from the existing MPA product manually to a newly created or an existing EA product.<br><br> • Use the information in the [Perform resource transfers](#perform-resource-transfers) section. <br><br> • Reservations and savings plan don't automatically transfer and transferring them isn't supported. |
-| MPA | MCA - individual | • Microsoft doesn't support the transfer, so you must move resources yourself. For more information, see [Move resources to a new resource group or subscription](../../azure-resource-manager/management/move-resource-group-and-subscription.md).<br><br> • Reservations and savings plans don't automatically transfer and transferring them isn't supported. |
-| MPA | MPA | • For details, see [Transfer a customer's Azure subscriptions and/or Reservations (under an Azure plan) to a different CSP](/partner-center/transfer-azure-subscriptions-under-azure-plan). |
+| Previous Azure offer in CSP | CSP (MCA managed by partner) | For details, see [Transfer a customer's Azure subscriptions to a different CSP (under an Azure plan)](/partner-center/transfer-azure-subscriptions-under-azure-plan). |
+| CSP (MCA managed by partner) | EA | • Automatic transfer isn't supported. Any transfer requires resources to move from the existing CSP (MCA managed by partner) product manually to a newly created or an existing EA product.<br><br> • Use the information in the [Perform resource transfers](#perform-resource-transfers) section. <br><br> • Reservations and savings plan don't automatically transfer and transferring them isn't supported. |
+| CSP (MCA managed by partner) | MCA-online | • Microsoft doesn't support the transfer, so you must move resources yourself. For more information, see [Move resources to a new resource group or subscription](../../azure-resource-manager/management/move-resource-group-and-subscription.md).<br><br> • Reservations and savings plans don't automatically transfer and transferring them isn't supported. |
+| CSP (MCA managed by partner) | CSP (MCA managed by partner) | • For details, see [Transfer a customer's Azure subscriptions and/or Reservations (under an Azure plan) to a different CSP](/partner-center/transfer-azure-subscriptions-under-azure-plan). |
| MOSP (PAYG) | MOSP (PAYG) | • If you're changing the billing owner of the subscription, see [Transfer billing ownership of an Azure subscription to another account](billing-subscription-transfer.md).<br><br> • Reservations don't automatically transfer so you must open a [billing support ticket](https://azure.microsoft.com/support/create-ticket/) to transfer them. |
-| MOSP (PAYG) | MCA - individual | • For details, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md).<br><br> • Self-service reservation transfers are supported. |
+| MOSP (PAYG) | MCA-online | • For details, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md).<br><br> • Self-service reservation transfers are supported. |
| MOSP (PAYG) | EA | • If you're transferring the admin account to the EA enrollment, see [Transfer a subscription to an EA](mosp-ea-transfer.md#transfer-the-subscription-to-the-ea).<br><br> • If you're transferring subscriptions to the EA enrollment, you must create a [billing support ticket](https://azure.microsoft.com/support/create-ticket/). |
-| MOSP (PAYG) | MCA - Enterprise | • For details, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md).<br><br> • Self-service reservation transfers are supported. |
+| MOSP (PAYG) | MCA-E | • For details, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md).<br><br> • Self-service reservation transfers are supported. |
## Perform resource transfers
cost-management-billing Manage Reserved Vm Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/manage-reserved-vm-instance.md
If you bought Azure Reserved Virtual Machine Instances, you can change the optim
*Permission needed to manage a reservation is separate from subscription permission.* ## Reservation Order and Reservation
cost-management-billing View Reservations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/view-reservations.md
This article explains how reservation permissions work and how users can view and manage Azure reservations in the Azure portal and with Azure PowerShell. ## Who can manage a reservation by default
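Since the article covers both the portal and PowerShell, a minimal sketch of the basic view commands, assuming the Az.Reservations module and a placeholder order ID:

```powershell
# List reservation orders you have access to, then the reservations under one order.
Get-AzReservationOrder
Get-AzReservation -ReservationOrderId "<reservation-order-id>"
```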
cost-management-billing Mca Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/mca-overview.md
Previously updated : 08/29/2023 Last updated : 06/27/2024 # Get started with your Microsoft Customer Agreement billing account
-A billing account is created when you sign up to use Azure. You use your billing account to manage invoices, payments, and track costs. You can have access to multiple billing accounts. For example, you might have signed up for Azure for your personal projects. You could also have access to Azure through your organization's Enterprise Agreement or Microsoft Customer Agreement. For each of these scenarios, you would have a separate billing account.
+A billing account is created when you sign up to use Azure. You use your billing account to manage invoices, payments, and track costs. You can have access to multiple billing accounts. For example, you signed up for Azure for your personal projects. You could also have access to Azure through your organization's Enterprise Agreement or Microsoft Customer Agreement. For each of these scenarios, you would have a separate billing account.
This article applies to a billing account for a Microsoft Customer Agreement. [Check if you have access to a Microsoft Customer Agreement](#check-access-to-a-microsoft-customer-agreement).
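To check the agreement type from PowerShell, a sketch assuming the Az.Billing module; the `AgreementType` property name is an assumption to verify against the module's output:

```powershell
# List billing accounts you can access; an agreement type of
# 'MicrosoftCustomerAgreement' indicates an MCA billing account (assumed property name).
Get-AzBillingAccount | Select-Object Name, DisplayName, AgreementType
```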
Roles on the billing account have the highest level of permissions. By default,
Use a billing profile to manage your invoice and payment methods. A monthly invoice is generated at the beginning of the month for each billing profile in your account. The invoice contains respective charges for all Azure subscriptions and other purchases from the previous month.
-A billing profile is automatically created for your billing account. It contains one invoice section by default. You may create more sections to easily track and organize costs based on your needs whether is it per project, department, or development environment. The sections are shown on the billing profile's invoice reflecting the usage of each subscription and purchases you've assigned to it.
+A billing profile is automatically created for your billing account. It contains one invoice section by default. You can create more sections to easily track and organize costs based on your needs, whether it's per project, department, or development environment. The sections are shown on the billing profile's invoice reflecting the usage of each subscription and purchases you assigned to it.
Roles on the billing profiles have permissions to view and manage invoices and payment methods. Assign these roles to users who pay invoices like members of the accounting team in your organization. For more information, see [billing profile roles and tasks](../manage/understand-mca-roles.md#billing-profile-roles-and-tasks).
+> [!NOTE]
+> If you bought Azure through a Cloud Solution Provider (CSP), you might see a billing profile with a tooltip showing that it's a billing profile for purchases made through your partner. Contact your partner for questions about the billing profile.
### Each billing profile gets a monthly invoice A monthly invoice is generated at the beginning of the month for each billing profile. The invoice contains all charges from the previous month.
-You can view the invoice, download documents and the change setting to get future invoices by email, in the Azure portal. For more information, see [download invoices for a Microsoft Customer Agreement](../manage/download-azure-invoice-daily-usage-date.md#download-invoices-for-a-microsoft-customer-agreement).
+You can view the invoice, download documents, and change the setting to get future invoices by email, in the Azure portal. For more information, see [download invoices for a Microsoft Customer Agreement](../manage/download-azure-invoice-daily-usage-date.md#download-invoices-for-a-microsoft-customer-agreement).
If an invoice becomes overdue, past-due email notifications are only sent to users with role assignments on the overdue billing profile. Ensure that users who should receive overdue notifications have one of the following roles:
Apply policies to control Azure Marketplace and Reservation purchases using a bi
### Azure plans determine pricing and service level agreement for subscriptions
-Azure plans determine the pricing and service level agreements for Azure subscriptions. They're automatically enabled when you create a billing profile. All invoice sections that are associated with the billing profile can use these plans. Users with access to the invoice section use the plans to create Azure subscriptions. The following Azure plans are supported in billing accounts for Microsoft Customer Agreement:
+Azure plans determine the pricing and service level agreements for Azure subscriptions. They automatically get enabled when you create a billing profile. All invoice sections that are associated with the billing profile can use these plans. Users with access to the invoice section use the plans to create Azure subscriptions. The following Azure plans are supported in billing accounts for Microsoft Customer Agreement:
| Plan | Definition | ||-|
Azure plans determine the pricing and service level agreements for Azure subscri
## Invoice sections
-Create invoice sections to organize the costs on your invoice. For example, you may need a single invoice for your organization but want to organize costs by department, team, or project. For this scenario, you have a single billing profile where you create an invoice section for each department, team, or project.
+Create invoice sections to organize the costs on your invoice. For example, you might need a single invoice for your organization but want to organize costs by department, team, or project. For this scenario, you have a single billing profile where you create an invoice section for each department, team, or project.
When an invoice section is created, you can give others permission to create Azure subscriptions that are billed to the section. Any usage charges and purchases for the subscriptions are then billed to the section.
data-factory Compute Linked Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/compute-linked-services.md
You can create an Azure HDInsight linked service to register your own HDInsight
## Azure Batch linked service You can create an Azure Batch linked service to register a Batch pool of virtual machines (VMs) to a data or Synapse workspace. You can run Custom activity using Azure Batch.
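A sketch of registering such a linked service, assuming placeholder names and the `AzureBatch` linked service type; the JSON shape follows the typical pattern and should be verified against the article:

```powershell
# Save a minimal AzureBatch linked service definition, then register it (placeholder values).
@'
{
  "name": "AzureBatchLinkedService",
  "properties": {
    "type": "AzureBatch",
    "typeProperties": {
      "accountName": "<batch account name>",
      "accessKey": { "type": "SecureString", "value": "<access key>" },
      "batchUri": "https://<batch account name>.<region>.batch.azure.com",
      "poolName": "<pool name>",
      "linkedServiceName": { "referenceName": "AzureStorageLinkedService", "type": "LinkedServiceReference" }
    }
  }
}
'@ | Set-Content "C:\ADF\AzureBatchLinkedService.json"

Set-AzDataFactoryV2LinkedService -ResourceGroupName "myRG" -DataFactoryName "myADF" `
    -Name "AzureBatchLinkedService" -DefinitionFile "C:\ADF\AzureBatchLinkedService.json"
```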
data-factory Concepts Pipeline Execution Triggers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-pipeline-execution-triggers.md
For a complete sample, see [Quickstart: Create a data factory by using the .NET
### Azure PowerShell The following sample command shows you how to manually run your pipeline by using Azure PowerShell:
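The command itself is elided in this excerpt; a typical invocation looks like the following, with placeholder names:

```powershell
# Manually run (trigger) a pipeline; the call returns a run ID you can use for monitoring.
$runId = Invoke-AzDataFactoryV2Pipeline `
    -ResourceGroupName "myRG" `
    -DataFactoryName "myADF" `
    -PipelineName "myPipeline"
```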
data-factory Connector Azure Table Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-table-storage.md
Last updated 01/05/2024
This article outlines how to use Copy Activity in Azure Data Factory and Synapse Analytics pipelines to copy data to and from Azure Table storage. It builds on the [Copy Activity overview](copy-activity-overview.md) article that presents a general overview of Copy Activity. ## Supported capabilities
data-factory Continuous Integration Delivery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/continuous-integration-delivery.md
In Azure Data Factory, continuous integration and delivery (CI/CD) means moving
- Automated deployment using Data Factory's integration with [Azure Pipelines](/azure/devops/pipelines/get-started/what-is-azure-pipelines) - Manually upload a Resource Manager template using Data Factory UX integration with Azure Resource Manager. ## CI/CD lifecycle
data-factory Control Flow If Condition Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-if-condition-activity.md
The pipeline sets the **folderPath** to the value of either **outputPath1** or *
### PowerShell commands These commands assume that you have saved the JSON files into the folder: C:\ADF.
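A sketch of the usual deploy-and-run pair for these samples; the file names under C:\ADF are hypothetical:

```powershell
# Deploy the pipeline definition saved under C:\ADF.
Set-AzDataFactoryV2Pipeline -ResourceGroupName "myRG" -DataFactoryName "myADF" `
    -Name "Adfv2QuickStartPipeline" -DefinitionFile "C:\ADF\Adfv2QuickStartPipeline.json"

# Run it, supplying parameters from a file in the same folder.
Invoke-AzDataFactoryV2Pipeline -ResourceGroupName "myRG" -DataFactoryName "myADF" `
    -PipelineName "Adfv2QuickStartPipeline" -ParameterFile "C:\ADF\PipelineParameters.json"
```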
data-factory Control Flow Switch Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-switch-activity.md
The pipeline sets the **folderPath** to the value of either **outputPath1** or *
### PowerShell commands These commands assume that you've saved the JSON files into the folder: C:\ADF.
data-factory Control Flow Until Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-until-activity.md
The pipeline sets the **folderPath** to the value of either **outputPath1** or *
### PowerShell commands These commands assume that you have saved the JSON files into the folder: C:\ADF.
data-factory Copy Activity Performance Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/copy-activity-performance-features.md
Previously updated : 01/05/2024 Last updated : 06/17/2024
When you select a Copy activity on the pipeline editor canvas and choose the Set
A Data Integration Unit is a measure that represents the power (a combination of CPU, memory, and network resource allocation) of a single unit within the service. A Data Integration Unit applies only to the [Azure integration runtime](concepts-integration-runtime.md#azure-integration-runtime), not to the [self-hosted integration runtime](concepts-integration-runtime.md#self-hosted-integration-runtime).
-The allowed DIUs to empower a copy activity run is **between 2 and 256**. If not specified or you choose "Auto" on the UI, the service dynamically applies the optimal DIU setting based on your source-sink pair and data pattern. The following table lists the supported DIU ranges and default behavior in different copy scenarios:
+The allowed DIUs to empower a copy activity run is **between 4 and 256**. If not specified or you choose "Auto" on the UI, the service dynamically applies the optimal DIU setting based on your source-sink pair and data pattern. The following table lists the supported DIU ranges and default behavior in different copy scenarios:
| Copy scenario | Supported DIU range | Default DIUs determined by service | |: |: |- |
-| Between file stores |- **Copy from or to single file**: 2-4 <br>- **Copy from and to multiple files**: 2-256 depending on the number and size of the files <br><br>For example, if you copy data from a folder with 4 large files and choose to preserve hierarchy, the max effective DIU is 16; when you choose to merge file, the max effective DIU is 4. |Between 4 and 32 depending on the number and size of the files |
-| From file store to non-file store |- **Copy from single file**: 2-4 <br/>- **Copy from multiple files**: 2-256 depending on the number and size of the files <br/><br/>For example, if you copy data from a folder with 4 large files, the max effective DIU is 16. |- **Copy into Azure SQL Database or Azure Cosmos DB**: between 4 and 16 depending on the sink tier (DTUs/RUs) and source file pattern<br>- **Copy into Azure Synapse Analytics** using PolyBase or COPY statement: 2<br>- Other scenario: 4 |
-| From non-file store to file store |- **Copy from partition-option-enabled data stores** (including [Azure Database for PostgreSQL](connector-azure-database-for-postgresql.md#azure-database-for-postgresql-as-source), [Azure SQL Database](connector-azure-sql-database.md#azure-sql-database-as-the-source), [Azure SQL Managed Instance](connector-azure-sql-managed-instance.md#sql-managed-instance-as-a-source), [Azure Synapse Analytics](connector-azure-sql-data-warehouse.md#azure-synapse-analytics-as-the-source), [Oracle](connector-oracle.md#oracle-as-source), [Netezza](connector-netezza.md#netezza-as-source), [SQL Server](connector-sql-server.md#sql-server-as-a-source), and [Teradata](connector-teradata.md#teradata-as-source)): 2-256 when writing to a folder, and 2-4 when writing to one single file. Note per source data partition can use up to 4 DIUs.<br>- **Other scenarios**: 2-4 |- **Copy from REST or HTTP**: 1<br/>- **Copy from Amazon Redshift** using UNLOAD: 2<br>- **Other scenario**: 4 |
-| Between non-file stores |- **Copy from partition-option-enabled data stores** (including [Azure Database for PostgreSQL](connector-azure-database-for-postgresql.md#azure-database-for-postgresql-as-source), [Azure SQL Database](connector-azure-sql-database.md#azure-sql-database-as-the-source), [Azure SQL Managed Instance](connector-azure-sql-managed-instance.md#sql-managed-instance-as-a-source), [Azure Synapse Analytics](connector-azure-sql-data-warehouse.md#azure-synapse-analytics-as-the-source), [Oracle](connector-oracle.md#oracle-as-source), [Netezza](connector-netezza.md#netezza-as-source), [SQL Server](connector-sql-server.md#sql-server-as-a-source), and [Teradata](connector-teradata.md#teradata-as-source)): 2-256 when writing to a folder, and 2-4 when writing to one single file. Note per source data partition can use up to 4 DIUs.<br/>- **Other scenarios**: 2-4 |- **Copy from REST or HTTP**: 1<br>- **Other scenario**: 4 |
+| Between file stores |- **Copy from or to single file**: 4 <br>- **Copy from and to multiple files**: 4-256 depending on the number and size of the files <br><br>For example, if you copy data from a folder with 4 large files and choose to preserve hierarchy, the max effective DIU is 16; when you choose to merge file, the max effective DIU is 4. |Between 4 and 32 depending on the number and size of the files |
+| From file store to non-file store |- **Copy from single file**: 4 <br/>- **Copy from multiple files**: 4-256 depending on the number and size of the files <br/><br/>For example, if you copy data from a folder with 4 large files, the max effective DIU is 16. |- **Copy into Azure SQL Database or Azure Cosmos DB**: between 4 and 16 depending on the sink tier (DTUs/RUs) and source file pattern<br>- **Copy into Azure Synapse Analytics** using PolyBase or COPY statement: 2<br>- Other scenario: 4 |
+| From non-file store to file store |- **Copy from partition-option-enabled data stores** (including [Azure Database for PostgreSQL](connector-azure-database-for-postgresql.md#azure-database-for-postgresql-as-source), [Azure SQL Database](connector-azure-sql-database.md#azure-sql-database-as-the-source), [Azure SQL Managed Instance](connector-azure-sql-managed-instance.md#sql-managed-instance-as-a-source), [Azure Synapse Analytics](connector-azure-sql-data-warehouse.md#azure-synapse-analytics-as-the-source), [Oracle](connector-oracle.md#oracle-as-source), [Netezza](connector-netezza.md#netezza-as-source), [SQL Server](connector-sql-server.md#sql-server-as-a-source), and [Teradata](connector-teradata.md#teradata-as-source)): 4-256 when writing to a folder, and 4 when writing to one single file. Note that each source data partition can use up to 4 DIUs.<br>- **Other scenarios**: 4 |- **Copy from REST or HTTP**: 1<br/>- **Copy from Amazon Redshift** using UNLOAD: 4<br>- **Other scenario**: 4 |
+| Between non-file stores |- **Copy from partition-option-enabled data stores** (including [Azure Database for PostgreSQL](connector-azure-database-for-postgresql.md#azure-database-for-postgresql-as-source), [Azure SQL Database](connector-azure-sql-database.md#azure-sql-database-as-the-source), [Azure SQL Managed Instance](connector-azure-sql-managed-instance.md#sql-managed-instance-as-a-source), [Azure Synapse Analytics](connector-azure-sql-data-warehouse.md#azure-synapse-analytics-as-the-source), [Oracle](connector-oracle.md#oracle-as-source), [Netezza](connector-netezza.md#netezza-as-source), [SQL Server](connector-sql-server.md#sql-server-as-a-source), and [Teradata](connector-teradata.md#teradata-as-source)): 4-256 when writing to a folder, and 4 when writing to one single file. Note that each source data partition can use up to 4 DIUs.<br/>- **Other scenarios**: 4 |- **Copy from REST or HTTP**: 1<br>- **Other scenario**: 4 |
You can see the DIUs used for each copy run in the copy activity monitoring view or activity output. For more information, see [Copy activity monitoring](copy-activity-monitoring.md). To override this default, specify a value for the `dataIntegrationUnits` property as follows. The *actual number of DIUs* that the copy operation uses at run time is equal to or less than the configured value, depending on your data pattern.
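The inline example is elided in this excerpt; a sketch of where the property sits in a copy activity definition, kept here as a definition-file fragment in a here-string (activity and store types are placeholders):

```powershell
# Fragment of a copy activity definition showing an explicit DIU override.
$copyActivityFragment = @'
{
  "name": "CopyFromBlobToSql",
  "type": "Copy",
  "typeProperties": {
    "source": { "type": "BlobSource" },
    "sink": { "type": "SqlSink" },
    "dataIntegrationUnits": 32
  }
}
'@
```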
data-factory Create Azure Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/create-azure-integration-runtime.md
Azure IR provides a fully managed compute to natively perform data movement and
This document introduces how you can create and configure Azure Integration Runtime. ## Default Azure IR By default, each data factory or Synapse workspace has an Azure IR in the backend that supports operations on cloud data stores and compute services in public networks. The location of that Azure IR is autoresolve. If the **connectVia** property isn't specified in the linked service definition, the default Azure IR is used. You only need to explicitly create an Azure IR when you want to define the location of the IR, or when you want to virtually group the activity executions on different IRs for management purposes.
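A sketch of explicitly creating an Azure IR in a fixed region instead of relying on the default autoresolve IR; names and region are placeholders:

```powershell
# Create an Azure (managed) integration runtime pinned to a specific region.
Set-AzDataFactoryV2IntegrationRuntime `
    -ResourceGroupName "myRG" `
    -DataFactoryName "myADF" `
    -Name "MyAzureIR" `
    -Type Managed `
    -Location "West Europe"
```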
data-factory Create Azure Ssis Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/create-azure-ssis-integration-runtime.md
These articles show how to provision an Azure-SSIS IR by using the [Azure porta
## Prerequisites - **Azure subscription**. If you don't already have a subscription, you can create a [free trial](https://azure.microsoft.com/pricing/free-trial/) account.
data-factory Create Self Hosted Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/create-self-hosted-integration-runtime.md
A self-hosted integration runtime can run copy activities between a cloud data s
This article describes how you can create and configure a self-hosted IR. ## Considerations for using a self-hosted IR
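A sketch of the PowerShell side of this setup, with placeholder names: create the self-hosted IR resource, then retrieve the authentication keys used to register the on-premises node:

```powershell
# Create a self-hosted integration runtime.
Set-AzDataFactoryV2IntegrationRuntime -ResourceGroupName "myRG" -DataFactoryName "myADF" `
    -Name "MySelfHostedIR" -Type SelfHosted -Description "Copies from on-premises sources"

# Get the keys used to register the integration runtime node on your machine.
Get-AzDataFactoryV2IntegrationRuntimeKey -ResourceGroupName "myRG" -DataFactoryName "myADF" `
    -Name "MySelfHostedIR"
```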
data-factory Create Shared Self Hosted Integration Runtime Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/create-shared-self-hosted-integration-runtime-powershell.md
To create a shared self-hosted IR using Azure PowerShell, you can take the following step
### Prerequisites - **Azure subscription**. If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
data-factory Data Factory Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-factory-private-link.md
You must also create or assign an existing virtual machine to run the self-hoste
1. Select **Review + create**. 1. Review the settings, and then select **Create**. ### Create a private endpoint
data-factory Data Factory Service Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-factory-service-identity.md
This article helps you understand managed identity (formerly known as Managed Service Identity/MSI) and how it works in Azure Data Factory. ## Overview
data-factory Data Movement Security Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-movement-security-considerations.md
In this article, we review security considerations in the following two data mov
- **Cloud scenario**: In this scenario, both your source and your destination are publicly accessible through the internet. These include managed cloud storage services such as Azure Storage, Azure Synapse Analytics, Azure SQL Database, Azure Data Lake Store, Amazon S3, Amazon Redshift, SaaS services such as Salesforce, and web protocols such as FTP and OData. Find a complete list of supported data sources in [Supported data stores and formats](copy-activity-overview.md#supported-data-stores-and-formats).
- **Hybrid scenario**: In this scenario, either your source or your destination is behind a firewall or inside an on-premises corporate network. Or, the data store is in a private network or virtual network (most often the source) and is not publicly accessible. Database servers hosted on virtual machines also fall under this scenario.

## Cloud scenarios
data-factory Enable Aad Authentication Azure Ssis Ir https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/enable-aad-authentication-azure-ssis-ir.md
For more info about the managed identity for your ADF, see [Managed identity for
> > - If you have already created your Azure-SSIS IR using SQL authentication, you cannot reconfigure it to use Microsoft Entra authentication via PowerShell at this time, but you can do so via the Azure portal/ADF app.

<a name='enable-azure-ad-authentication-on-azure-sql-database'></a>
data-factory Encrypt Credentials Self Hosted Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/encrypt-credentials-self-hosted-integration-runtime.md
You can encrypt and store credentials for any of your on-premises data stores (linked services with sensitive information) on a machine with self-hosted integration runtime. You pass a JSON definition file with credentials to the [**New-AzDataFactoryV2LinkedServiceEncryptedCredential**](/powershell/module/az.datafactory/New-AzDataFactoryV2LinkedServiceEncryptedCredential) cmdlet to produce an output JSON definition file with the encrypted credentials. Then, use the updated JSON definition to create the linked services.
data-factory How To Configure Azure Ssis Ir Custom Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-configure-azure-ssis-ir-custom-setup.md
The following limitations apply only to standard custom setups:
## Prerequisites

To customize your Azure-SSIS IR, you need the following items:
data-factory How To Configure Azure Ssis Ir Enterprise Edition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-configure-azure-ssis-ir-enterprise-edition.md
Some of these features require you to install additional components to customize
## Instructions

1. Download and install [Azure PowerShell](/powershell/azure/install-azure-powershell).
data-factory How To Create Schedule Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-create-schedule-trigger.md
This article provides information about the schedule trigger and the steps to create, start, and monitor a schedule trigger. For other types of triggers, see [Pipeline execution and triggers](concepts-pipeline-execution-triggers.md).
-When creating a schedule trigger, you specify a schedule (start date, recurrence, end date etc.) for the trigger, and associate with a pipeline. Pipelines and triggers have a many-to-many relationship. Multiple triggers can kick off a single pipeline. A single trigger can kick off multiple pipelines.
+When you create a *schedule trigger*, you specify a schedule like a start date, recurrence, or end date for the trigger and associate it with a pipeline. Pipelines and triggers have a many-to-many relationship. Multiple triggers can kick off a single pipeline. A single trigger can kick off multiple pipelines.
-The following sections provide steps to create a schedule trigger in different ways.
+The following sections provide steps to create a schedule trigger in different ways.
-## Azure Data Factory and Synapse portal experience
+## Azure Data Factory and Azure Synapse portal experience
-You can create a **schedule trigger** to schedule a pipeline to run periodically (hourly, daily, etc.).
+You can create a schedule trigger to schedule a pipeline to run periodically, such as hourly or daily.
> [!NOTE]
-> For a complete walkthrough of creating a pipeline and a schedule trigger, which associates the trigger with the pipeline, and runs and monitors the pipeline, see [Quickstart: create a data factory using Data Factory UI](quickstart-create-data-factory-portal.md).
+> For a complete walkthrough of creating a pipeline and a schedule trigger, which associates the trigger with the pipeline and runs and monitors the pipeline, see [Quickstart: Create a data factory by using Data Factory UI](quickstart-create-data-factory-portal.md).
-1. Switch to the **Edit** tab in Data Factory or the Integrate tab in Azure Synapse.
+1. Switch to the **Edit** tab in Data Factory or the **Integrate** tab in Azure Synapse.
# [Azure Data Factory](#tab/data-factory)
- :::image type="content" source="./media/how-to-create-schedule-trigger/switch-edit-tab.png" alt-text="Switch to Edit tab":::
+ :::image type="content" source="./media/how-to-create-schedule-trigger/switch-edit-tab.png" alt-text="Screenshot that shows switching to the Edit tab.":::
# [Azure Synapse](#tab/synapse-analytics)
- :::image type="content" source="./media/how-to-create-schedule-trigger/switch-edit-tab-synapse.png" alt-text="Switch to Edit tab":::
+ :::image type="content" source="./media/how-to-create-schedule-trigger/switch-edit-tab-synapse.png" alt-text="Screenshot that shows switching to the Integrate tab.":::
-2. Select **Trigger** on the menu, then select **New/Edit**.
+2. Select **Trigger** on the menu, and then select **New/Edit**.
- :::image type="content" source="./media/how-to-create-schedule-trigger/new-trigger-menu.png" alt-text="New trigger menu":::
+ :::image type="content" source="./media/how-to-create-schedule-trigger/new-trigger-menu.png" alt-text="Screenshot that shows the New trigger menu.":::
-1. On the **Add Triggers** page, select **Choose trigger...**, then select **+New**.
+1. On the **Add triggers** page, select **Choose trigger**, and then select **New**.
- :::image type="content" source="./media/how-to-create-schedule-trigger/add-trigger-new-button.png" alt-text="Add triggers - new trigger":::
+ :::image type="content" source="./media/how-to-create-schedule-trigger/add-trigger-new-button.png" alt-text="Screenshot that shows the Add triggers pane.":::
-1. On the **New Trigger** page, do the following steps:
+1. On the **New trigger** page:
1. Confirm that **Schedule** is selected for **Type**.
1. Specify the start datetime of the trigger for **Start Date**. It's set to the current datetime in Coordinated Universal Time (UTC) by default.
- 1. Specify the time zone that the trigger will be created in. The time zone setting will apply to **Start Date**, **End Date**, and **Schedule Execution Times** in Advanced recurrence options. Changing Time Zone setting will not automatically change your start date. Make sure the Start Date is correct in the specified time zone. Please note that Scheduled Execution time of Trigger will be considered post the Start Date (Ensure Start Date is at least 1 minute lesser than the Execution time else it will trigger pipeline in next recurrence).
+ 1. Specify the time zone in which the trigger is created. The time zone setting applies to **Start Date**, **End Date**, and **Schedule Execution Times** in **Advanced recurrence options**. Changing the **Time Zone** setting doesn't automatically change your start date. Make sure the start date is correct in the specified time zone. The scheduled execution time of the trigger is considered only after the start date. (Ensure that the start date is at least 1 minute earlier than the execution time; otherwise, the trigger fires the pipeline at the next recurrence.)
> [!NOTE]
- > For time zones that observe daylight saving, trigger time will auto-adjust for the twice a year change, if the recurrence is set to _Days_ or above. To opt out of the daylight saving change, please select a time zone that does not observe daylight saving, for instance UTC
+ > For time zones that observe daylight saving, trigger time auto-adjusts for the twice-a-year change, if the recurrence is set to **Days** or above. To opt out of the daylight saving change, select a time zone that doesn't observe daylight saving, for instance, UTC.
+ >
+ > Daylight saving adjustment only happens for a trigger with the recurrence set to **Days** or above. If the trigger is set to **Hours** or **Minutes** frequency, it continues to fire at regular intervals.
- > [!IMPORTANT]
- > Daylight saving adjustment only happens for trigger with recurrence set to _Days_ or above. If the trigger is set to _Hours_ or _Minutes_ frequency, it will continue to fire at regular intervals.
+ 1. Specify **Recurrence** for the trigger. Select one of the values from the dropdown list (**Every minute**, **Hourly**, **Daily**, **Weekly**, or **Monthly**). Enter the multiplier in the text box. For example, if you want the trigger to run once every 15 minutes, select **Every minute** and enter **15** in the text box.
+ 1. Under **Recurrence**, if you choose **Day(s)**, **Week(s)**, or **Month(s)** from the dropdown list, you can see **Advanced recurrence options**.
+
+ :::image type="content" source="./media/how-to-create-schedule-trigger/advanced.png" alt-text="Screenshot that shows the advanced recurrence options of Day(s), Week(s), and Month(s).":::
+
+ 1. To specify an end-date time, select **Specify an end date**. Specify the **Ends On** information, and then select **OK**.
- 1. Specify **Recurrence** for the trigger. Select one of the values from the drop-down list (Every minute, Hourly, Daily, Weekly, and Monthly). Enter the multiplier in the text box. For example, if you want the trigger to run once for every 15 minutes, you select **Every Minute**, and enter **15** in the text box.
- 1. In the **Recurrence**, if you choose "Day(s), Week(s) or Month(s)" from the drop-down, you can find "Advanced recurrence options".
- :::image type="content" source="./media/how-to-create-schedule-trigger/advanced.png" alt-text="Advanced recurrence options of Day(s), Week(s) or Month(s)":::
- 1. To specify an end date time, select **Specify an End Date**, and specify _Ends On_, then select **OK**. There is a cost associated with each pipeline run. If you are testing, you may want to ensure that the pipeline is triggered only a couple of times. However, ensure that there is enough time for the pipeline to run between the publish time and the end time. The trigger comes into effect only after you publish the solution, not when you save the trigger in the UI.
+ A cost is associated with each pipeline run. If you're testing, you might want to ensure that the pipeline is triggered only a couple of times. However, ensure that there's enough time for the pipeline to run between the publish time and the end time. The trigger comes into effect only after you publish the solution, not when you save the trigger in the UI.
- :::image type="content" source="./media/how-to-create-schedule-trigger/trigger-settings-01.png" alt-text="Trigger settings":::
+ :::image type="content" source="./media/how-to-create-schedule-trigger/trigger-settings-01.png" alt-text="Screenshot that shows the trigger settings.":::
- :::image type="content" source="./media/how-to-create-schedule-trigger/trigger-settings-02.png" alt-text="Trigger settings for End Date":::
+ :::image type="content" source="./media/how-to-create-schedule-trigger/trigger-settings-02.png" alt-text="Screenshot that shows trigger settings for the end date and time.":::
-1. In the **New Trigger** window, select **Yes** in the **Activated** option, then select **OK**. You can use this checkbox to deactivate the trigger later.
+1. In the **New Trigger** window, select **Yes** in the **Activated** option, and then select **OK**. You can use this checkbox to deactivate the trigger later.
- :::image type="content" source="./media/how-to-create-schedule-trigger/trigger-settings-next.png" alt-text="Trigger settings - Next button":::
+ :::image type="content" source="./media/how-to-create-schedule-trigger/trigger-settings-next.png" alt-text="Screenshot that shows the Activated option.":::
-1. In the **New Trigger** window, review the warning message, then select **OK**.
+1. In the **New Trigger** window, review the warning message and then select **OK**.
- :::image type="content" source="./media/how-to-create-schedule-trigger/new-trigger-finish.png" alt-text="Trigger settings - Finish button":::
+ :::image type="content" source="./media/how-to-create-schedule-trigger/new-trigger-finish.png" alt-text="Screenshot that shows selecting the OK button.":::
-1. Select **Publish all** to publish the changes. Until you publish the changes, the trigger doesn't start triggering the pipeline runs.
+1. Select **Publish all** to publish the changes. Until you publish the changes, the trigger doesn't start triggering the pipeline runs.
- :::image type="content" source="./media/how-to-create-schedule-trigger/publish-2.png" alt-text="Publish button":::
+ :::image type="content" source="./media/how-to-create-schedule-trigger/publish-2.png" alt-text="Screenshot that shows the Publish all button.":::
-1. Switch to the **Pipeline runs** tab on the left, then select **Refresh** to refresh the list. You will see the pipeline runs triggered by the scheduled trigger. Notice the values in the **Triggered By** column. If you use the **Trigger Now** option, you will see the manual trigger run in the list.
+1. Switch to the **Pipeline runs** tab on the left, and then select **Refresh** to refresh the list. You see the pipeline runs triggered by the scheduled trigger. Notice the values in the **Triggered By** column. If you use the **Trigger Now** option, you see the manual trigger run in the list.
# [Azure Data Factory](#tab/data-factory)
- :::image type="content" source="./media/how-to-create-schedule-trigger/monitor-triggered-runs.png" alt-text="Monitor triggered runs":::
+ :::image type="content" source="./media/how-to-create-schedule-trigger/monitor-triggered-runs.png" alt-text="Screenshot that shows monitoring triggered runs in Data Factory.":::
# [Azure Synapse](#tab/synapse-analytics)
- :::image type="content" source="./media/how-to-create-schedule-trigger/monitor-triggered-runs-synapse.png" alt-text="Monitor triggered runs":::
+ :::image type="content" source="./media/how-to-create-schedule-trigger/monitor-triggered-runs-synapse.png" alt-text="Screenshot that shows monitoring triggered runs in Synapse Analytics.":::
-9. Switch to the **Trigger Runs** \ **Schedule** view.
+9. Switch to the **Trigger runs** > **Schedule** view.
# [Azure Data Factory](#tab/data-factory)
- :::image type="content" source="./media/how-to-create-schedule-trigger/monitor-trigger-runs.png" alt-text="Monitor trigger runs":::
+ :::image type="content" source="./media/how-to-create-schedule-trigger/monitor-trigger-runs.png" alt-text="Screenshot that shows monitoring the schedule for trigger runs in Data Factory.":::
# [Azure Synapse](#tab/synapse-analytics)
- :::image type="content" source="./media/how-to-create-schedule-trigger/monitor-trigger-runs-synapse.png" alt-text="Monitor trigger runs":::
+ :::image type="content" source="./media/how-to-create-schedule-trigger/monitor-trigger-runs-synapse.png" alt-text="Screenshot that shows monitoring the schedule for trigger runs in Synapse Analytics.":::
## Azure PowerShell
-This section shows you how to use Azure PowerShell to create, start, and monitor a schedule trigger. To see this sample working, first go through the [Quickstart: Create a data factory by using Azure PowerShell](quickstart-create-data-factory-powershell.md). Then, add the following code to the main method, which creates and starts a schedule trigger that runs every 15 minutes. The trigger is associated with a pipeline named **Adfv2QuickStartPipeline** that you create as part of the Quickstart.
+This section shows you how to use Azure PowerShell to create, start, and monitor a schedule trigger. To see this sample working, first go through [Quickstart: Create a data factory by using Azure PowerShell](quickstart-create-data-factory-powershell.md). Then, add the following code to the main method, which creates and starts a schedule trigger that runs every 15 minutes. The trigger is associated with a pipeline named `Adfv2QuickStartPipeline` that you create as part of the quickstart.
### Prerequisites

-- **Azure subscription**. If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+- **Azure subscription**. If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+- **Azure PowerShell**. Follow the instructions in [Install Azure PowerShell on Windows with PowerShellGet](/powershell/azure/install-azure-powershell).
-- **Azure PowerShell**. Follow the instructions in [Install Azure PowerShell on Windows with PowerShellGet](/powershell/azure/install-azure-powershell).
+### Sample code
-### Sample Code
-
-1. Create a JSON file named **MyTrigger.json** in the C:\ADFv2QuickStartPSH\ folder with the following content:
+1. Create a JSON file named *MyTrigger.json* in the *C:\ADFv2QuickStartPSH\* folder with the following content:
> [!IMPORTANT]
- > Before you save the JSON file, set the value of the **startTime** element to the current UTC time. Set the value of the **endTime** element to one hour past the current UTC time.
+ > Before you save the JSON file, set the value of the `startTime` element to the current UTC time. Set the value of the `endTime` element to one hour past the current UTC time.
```json
{
This section shows you how to use Azure PowerShell to create, start, and monitor
```

In the JSON snippet:
- - The **type** element of the trigger is set to "ScheduleTrigger".
- - The **frequency** element is set to "Minute" and the **interval** element is set to 15. As such, the trigger runs the pipeline every 15 minutes between the start and end times.
- - The **timeZone** element specifies the time zone that the trigger is created in. This setting affects both **startTime** and **endTime**.
- - The **endTime** element is one hour after the value of the **startTime** element. As such, the trigger runs the pipeline 15 minutes, 30 minutes, and 45 minutes after the start time. Don't forget to update the start time to the current UTC time, and the end time to one hour past the start time.
+
+ - The `type` element of the trigger is set to `ScheduleTrigger`.
+ - The `frequency` element is set to `Minute` and the `interval` element is set to `15`. As such, the trigger runs the pipeline every 15 minutes between the start and end times.
+ - The `timeZone` element specifies the time zone in which the trigger is created. This setting affects both `startTime` and `endTime`.
+ - The `endTime` element is one hour after the value of the `startTime` element. As such, the trigger runs the pipeline 15 minutes, 30 minutes, and 45 minutes after the start time. Don't forget to update the start time to the current UTC time and the end time to one hour past the start time.
> [!IMPORTANT]
- > For UTC timezone, the startTime and endTime need to follow format 'yyyy-MM-ddTHH:mm:ss**Z**', while for other timezones, startTime and endTime follow 'yyyy-MM-ddTHH:mm:ss'.
- >
- > Per ISO 8601 standard, the _Z_ suffix to timestamp mark the datetime to UTC timezone, and render timeZone field useless. While missing _Z_ suffix for UTC time zone will result in an error upon trigger _activation_.
+ > For the UTC time zone, `startTime` and `endTime` need to follow the format `yyyy-MM-ddTHH:mm:ss`**Z**. For other time zones, `startTime` and `endTime` follow the `yyyy-MM-ddTHH:mm:ss` format.
+ >
+ > Per the ISO 8601 standard, the `Z` suffix marks the datetime as UTC and renders the `timeZone` field useless. If the `Z` suffix is missing for the UTC time zone, the result is an error upon trigger _activation_.
- - The trigger is associated with the **Adfv2QuickStartPipeline** pipeline. To associate multiple pipelines with a trigger, add more **pipelineReference** sections.
- - The pipeline in the Quickstart takes two **parameters** values: **inputPath** and **outputPath**. And you pass values for these parameters from the trigger.
+ - The trigger is associated with the `Adfv2QuickStartPipeline` pipeline. To associate multiple pipelines with a trigger, add more `pipelineReference` sections.
+ - The pipeline in the quickstart takes two `parameters` values: `inputPath` and `outputPath`. You pass values for these parameters from the trigger.
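Assembled from these elements, a minimal sketch of *MyTrigger.json* might look like the following; the timestamps and the `inputPath`/`outputPath` folder values are placeholders that you adjust before running:

```json
{
    "properties": {
        "name": "MyTrigger",
        "type": "ScheduleTrigger",
        "typeProperties": {
            "recurrence": {
                "frequency": "Minute",
                "interval": 15,
                "startTime": "2017-12-08T00:00:00Z",
                "endTime": "2017-12-08T01:00:00Z",
                "timeZone": "UTC"
            }
        },
        "pipelines": [
            {
                "pipelineReference": {
                    "type": "PipelineReference",
                    "referenceName": "Adfv2QuickStartPipeline"
                },
                "parameters": {
                    "inputPath": "adftutorial/input",
                    "outputPath": "adftutorial/output"
                }
            }
        ]
    }
}
```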
1. Create a trigger by using the [Set-AzDataFactoryV2Trigger](/powershell/module/az.datafactory/set-azdatafactoryv2trigger) cmdlet:
This section shows you how to use Azure PowerShell to create, start, and monitor
Get-AzDataFactoryV2Trigger -ResourceGroupName $ResourceGroupName -DataFactoryName $DataFactoryName -Name "MyTrigger" ```
-1. Get the trigger runs in Azure PowerShell by using the [Get-AzDataFactoryV2TriggerRun](/powershell/module/az.datafactory/get-azdatafactoryv2triggerrun) cmdlet. To get the information about the trigger runs, execute the following command periodically. Update the **TriggerRunStartedAfter** and **TriggerRunStartedBefore** values to match the values in your trigger definition:
+1. Get the trigger runs in Azure PowerShell by using the [Get-AzDataFactoryV2TriggerRun](/powershell/module/az.datafactory/get-azdatafactoryv2triggerrun) cmdlet. To get the information about the trigger runs, execute the following command periodically. Update the `TriggerRunStartedAfter` and `TriggerRunStartedBefore` values to match the values in your trigger definition:
```powershell
Get-AzDataFactoryV2TriggerRun -ResourceGroupName $ResourceGroupName -DataFactoryName $DataFactoryName -TriggerName "MyTrigger" -TriggerRunStartedAfter "2017-12-08T00:00:00" -TriggerRunStartedBefore "2017-12-08T01:00:00"
```

> [!NOTE]
- > Trigger time of Schedule triggers are specified in UTC timestamp. _TriggerRunStartedAfter_ and _TriggerRunStartedBefore_ also expects UTC timestamp
+ > The trigger times of schedule triggers are specified in UTC timestamps. `TriggerRunStartedAfter` and `TriggerRunStartedBefore` also expect UTC timestamps.
To monitor the trigger runs and pipeline runs in the Azure portal, see [Monitor pipeline runs](quickstart-create-data-factory-resource-manager-template.md#monitor-the-pipeline).

## Azure CLI
-This section shows you how to use Azure CLI to create, start, and monitor a schedule trigger. To see this sample working, first go through the [Quickstart: Create an Azure Data Factory using Azure CLI](./quickstart-create-data-factory-azure-cli.md). Then, follow the steps below to create and start a schedule trigger that runs every 15 minutes. The trigger is associated with a pipeline named **Adfv2QuickStartPipeline** that you create as part of the Quickstart.
+This section shows you how to use the Azure CLI to create, start, and monitor a schedule trigger. To see this sample working, first go through [Quickstart: Create an Azure Data Factory by using the Azure CLI](./quickstart-create-data-factory-azure-cli.md). Then, follow the steps to create and start a schedule trigger that runs every 15 minutes. The trigger is associated with a pipeline named `Adfv2QuickStartPipeline` that you create as part of the quickstart.
### Prerequisites

[!INCLUDE [azure-cli-prepare-your-environment-no-header.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment-no-header.md)]
-### Sample Code
+### Sample code
-1. In your working directory, create a JSON file named **MyTrigger.json** with the trigger's properties. For this example use the following content:
+1. In your working directory, create a JSON file named *MyTrigger.json* with the trigger's properties. For this example, use the following content:
> [!IMPORTANT]
- > Before you save the JSON file, set the value of the **startTime** element to the current UTC time. Set the value of the **endTime** element to one hour past the current UTC time.
+ > Before you save the JSON file, set the value of the `startTime` element to the current UTC time. Set the value of the `endTime` element to one hour past the current UTC time.
```json
{
This section shows you how to use Azure CLI to create, start, and monitor a sche
```

In the JSON snippet:
- - The **type** element of the trigger is set to "ScheduleTrigger".
- - The **frequency** element is set to "Minute" and the **interval** element is set to 15. As such, the trigger runs the pipeline every 15 minutes between the start and end times.
- - The **timeZone** element specifies the time zone that the trigger is created in. This setting affects both **startTime** and **endTime**.
- - The **endTime** element is one hour after the value of the **startTime** element. As such, the trigger runs the pipeline 15 minutes, 30 minutes, and 45 minutes after the start time. Don't forget to update the start time to the current UTC time, and the end time to one hour past the start time.
+
+ - The `type` element of the trigger is set to `ScheduleTrigger`.
+ - The `frequency` element is set to `Minute` and the `interval` element is set to `15`. As such, the trigger runs the pipeline every 15 minutes between the start and end times.
+ - The `timeZone` element specifies the time zone in which the trigger is created. This setting affects both `startTime` and `endTime`.
+ - The `endTime` element is one hour after the value of the `startTime` element. As such, the trigger runs the pipeline 15 minutes, 30 minutes, and 45 minutes after the start time. Don't forget to update the start time to the current UTC time and the end time to one hour past the start time.
> [!IMPORTANT]
- > For UTC timezone, the startTime and endTime need to follow format 'yyyy-MM-ddTHH:mm:ss**Z**', while for other timezones, startTime and endTime follow 'yyyy-MM-ddTHH:mm:ss'.
- >
- > Per ISO 8601 standard, the _Z_ suffix to timestamp mark the datetime to UTC timezone, and render timeZone field useless. While missing _Z_ suffix for UTC time zone will result in an error upon trigger _activation_.
+ > For the UTC time zone, the `startTime` and `endTime` need to follow the format `yyyy-MM-ddTHH:mm:ss`**Z**. For other time zones, `startTime` and `endTime` follow the `yyyy-MM-ddTHH:mm:ss` format.
+ >
+ > Per the ISO 8601 standard, the _Z_ suffix marks the datetime as UTC and renders the `timeZone` field useless. If the _Z_ suffix is missing for the UTC time zone, the result is an error upon trigger _activation_.
- - The trigger is associated with the **Adfv2QuickStartPipeline** pipeline. To associate multiple pipelines with a trigger, add more **pipelineReference** sections.
- - The pipeline in the Quickstart takes two **parameters** values: **inputPath** and **outputPath**. And you pass values for these parameters from the trigger.
+ - The trigger is associated with the `Adfv2QuickStartPipeline` pipeline. To associate multiple pipelines with a trigger, add more `pipelineReference` sections.
+ - The pipeline in the quickstart takes two `parameters` values: `inputPath` and `outputPath`. You pass values for these parameters from the trigger.
1. Create a trigger by using the [az datafactory trigger create](/cli/azure/datafactory/trigger#az-datafactory-trigger-create) command:
This section shows you how to use Azure CLI to create, start, and monitor a sche
az datafactory trigger show --resource-group "ADFQuickStartRG" --factory-name "ADFTutorialFactory" --name "MyTrigger" ```
-1. Get the trigger runs in Azure CLI by using the [az datafactory trigger-run query-by-factory](/cli/azure/datafactory/trigger-run#az-datafactory-trigger-run-query-by-factory) command. To get information about the trigger runs, execute the following command periodically. Update the **last-updated-after** and **last-updated-before** values to match the values in your trigger definition:
+1. Get the trigger runs in the Azure CLI by using the [az datafactory trigger-run query-by-factory](/cli/azure/datafactory/trigger-run#az-datafactory-trigger-run-query-by-factory) command. To get information about the trigger runs, execute the following command periodically. Update the `last-updated-after` and `last-updated-before` values to match the values in your trigger definition:
```azurecli
az datafactory trigger-run query-by-factory --resource-group "ADFQuickStartRG" --factory-name "ADFTutorialFactory" --filters operand="TriggerName" operator="Equals" values="MyTrigger" --last-updated-after "2017-12-08T00:00:00" --last-updated-before "2017-12-08T01:00:00"
```

> [!NOTE]
- > Trigger time of Schedule triggers are specified in UTC timestamp. _last-updated-after_ and _last-updated-before_ also expects UTC timestamp
+ > The trigger times of schedule triggers are specified in UTC timestamps. `last-updated-after` and `last-updated-before` also expect UTC timestamps.
To monitor the trigger runs and pipeline runs in the Azure portal, see [Monitor pipeline runs](quickstart-create-data-factory-resource-manager-template.md#monitor-the-pipeline).

## .NET SDK
-This section shows you how to use the .NET SDK to create, start, and monitor a trigger. To see this sample working, first go through the [Quickstart: Create a data factory by using the .NET SDK](quickstart-create-data-factory-dot-net.md). Then, add the following code to the main method, which creates and starts a schedule trigger that runs every 15 minutes. The trigger is associated with a pipeline named **Adfv2QuickStartPipeline** that you create as part of the Quickstart.
+This section shows you how to use the .NET SDK to create, start, and monitor a trigger. To see this sample working, first go through [Quickstart: Create a data factory by using the .NET SDK](quickstart-create-data-factory-dot-net.md). Then, add the following code to the main method, which creates and starts a schedule trigger that runs every 15 minutes. The trigger is associated with a pipeline named `Adfv2QuickStartPipeline` that you create as part of the quickstart.
To create and start a schedule trigger that runs every 15 minutes, add the following code to the main method:
To create and start a schedule trigger that runs every 15 minutes, add the follo
client.Triggers.Start(resourceGroup, dataFactoryName, triggerName);
```
-To create triggers in a different time zone, other than UTC, following settings are required:
+To create triggers in a time zone other than UTC, the following settings are required:
+ ```csharp
<<ClientInstance>>.SerializationSettings.DateFormatHandling = Newtonsoft.Json.DateFormatHandling.IsoDateFormat;
<<ClientInstance>>.SerializationSettings.DateTimeZoneHandling = Newtonsoft.Json.DateTimeZoneHandling.Unspecified;
To monitor the trigger runs and pipeline runs in the Azure portal, see [Monitor
## Python SDK
-This section shows you how to use the Python SDK to create, start, and monitor a trigger. To see this sample working, first go through the [Quickstart: Create a data factory by using the Python SDK](quickstart-create-data-factory-python.md). Then, add the following code block after the "monitor the pipeline run" code block in the Python script. This code creates a schedule trigger that runs every 15 minutes between the specified start and end times. Update the **start_time** variable to the current UTC time, and the **end_time** variable to one hour past the current UTC time.
+This section shows you how to use the Python SDK to create, start, and monitor a trigger. To see this sample working, first go through [Quickstart: Create a data factory by using the Python SDK](quickstart-create-data-factory-python.md). Then, add the following code block after the `monitor the pipeline run` code block in the Python script. This code creates a schedule trigger that runs every 15 minutes between the specified start and end times. Update the `start_time` variable to the current UTC time and the `end_time` variable to one hour past the current UTC time.
```python
# Create a trigger
To monitor the trigger runs and pipeline runs in the Azure portal, see [Monitor
## Azure Resource Manager template
-You can use an Azure Resource Manager template to create a trigger. For step-by-step instructions, see [Create an Azure data factory by using a Resource Manager template](quickstart-create-data-factory-resource-manager-template.md).
+You can use an Azure Resource Manager template to create a trigger. For step-by-step instructions, see [Create an Azure data factory by using an Azure Resource Manager template](quickstart-create-data-factory-resource-manager-template.md).
## Pass the trigger start time to a pipeline
-Azure Data Factory version 1 supports reading or writing partitioned data by using the system variables: **SliceStart**, **SliceEnd**, **WindowStart**, and **WindowEnd**. In the current version of Azure Data Factory and Synapse pipelines, you can achieve this behavior by using a pipeline parameter. The start time and scheduled time for the trigger are set as the value for the pipeline parameter. In the following example, the scheduled time for the trigger is passed as a value to the pipeline **scheduledRunTime** parameter:
+Azure Data Factory version 1 supports reading or writing partitioned data by using the system variables `SliceStart`, `SliceEnd`, `WindowStart`, and `WindowEnd`. In the current version of Data Factory and Azure Synapse pipelines, you can achieve this behavior by using a pipeline parameter. The start time and scheduled time for the trigger are set as the value for the pipeline parameter. In the following example, the scheduled time for the trigger is passed as a value to the pipeline `scheduledRunTime` parameter:
```json
"parameters": {
The following JSON definition shows you how to create a schedule trigger with sc
```

> [!IMPORTANT]
-> The **parameters** property is a mandatory property of the **pipelines** element. If your pipeline doesn't take any parameters, you must include an empty JSON definition for the **parameters** property.
-
+> The `parameters` property is a mandatory property of the `pipelines` element. If your pipeline doesn't take any parameters, you must include an empty JSON definition for the `parameters` property.
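For example, a `pipelines` entry for a pipeline that takes no parameters might look like this sketch (a fragment, not a complete trigger definition; the pipeline name is a placeholder):

```json
"pipelines": [
    {
        "pipelineReference": {
            "type": "PipelineReference",
            "referenceName": "MyParameterlessPipeline"
        },
        "parameters": {}
    }
]
```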
### Schema overview
-The following table provides a high-level overview of the major schema elements that are related to recurrence and scheduling of a trigger:
+The following table provides a high-level overview of the major schema elements that are related to recurrence and scheduling of a trigger.
| JSON property | Description |
|: |: |
-| **startTime** | A Date-Time value. For simple schedules, the value of the **startTime** property applies to the first occurrence. For complex schedules, the trigger starts no sooner than the specified **startTime** value. <br> For UTC time zone, format is `'yyyy-MM-ddTHH:mm:ssZ'`, for other time zone, format is `'yyyy-MM-ddTHH:mm:ss'`. |
-| **endTime** | The end date and time for the trigger. The trigger doesn't execute after the specified end date and time. The value for the property can't be in the past. This property is optional. <br> For UTC time zone, format is `'yyyy-MM-ddTHH:mm:ssZ'`, for other time zone, format is `'yyyy-MM-ddTHH:mm:ss'`. |
-| **timeZone** | The time zone the trigger is created in. This setting affects **startTime**, **endTime**, and **schedule**. See [list of supported time zone](#time-zone-option) |
-| **recurrence** | A recurrence object that specifies the recurrence rules for the trigger. The recurrence object supports the **frequency**, **interval**, **endTime**, **count**, and **schedule** elements. When a recurrence object is defined, the **frequency** element is required. The other elements of the recurrence object are optional. |
-| **frequency** | The unit of frequency at which the trigger recurs. The supported values include "minute," "hour," "day," "week," and "month." |
-| **interval** | A positive integer that denotes the interval for the **frequency** value, which determines how often the trigger runs. For example, if the **interval** is 3 and the **frequency** is "week," the trigger recurs every 3 weeks. |
-| **schedule** | The recurrence schedule for the trigger. A trigger with a specified **frequency** value alters its recurrence based on a recurrence schedule. The **schedule** property contains modifications for the recurrence that are based on minutes, hours, weekdays, month days, and week number.
+| `startTime` | A Date-Time value. For simple schedules, the value of the `startTime` property applies to the first occurrence. For complex schedules, the trigger starts no sooner than the specified `startTime` value. <br> For the UTC time zone, the format is `'yyyy-MM-ddTHH:mm:ssZ'`. For other time zones, the format is `yyyy-MM-ddTHH:mm:ss`. |
+| `endTime` | The end date and time for the trigger. The trigger doesn't execute after the specified end date and time. The value for the property can't be in the past. This property is optional. <br> For the UTC time zone, the format is `'yyyy-MM-ddTHH:mm:ssZ'`. For other time zones, the format is `yyyy-MM-ddTHH:mm:ss`. |
+| `timeZone` | The time zone in which the trigger is created. This setting affects `startTime`, `endTime`, and `schedule`. See a [list of supported time zones](#time-zone-option). |
+| `recurrence` | A recurrence object that specifies the recurrence rules for the trigger. The recurrence object supports the `frequency`, `interval`, `endTime`, `count`, and `schedule` elements. When a recurrence object is defined, the `frequency` element is required. The other elements of the recurrence object are optional. |
+| `frequency` | The unit of frequency at which the trigger recurs. The supported values include `minute`, `hour`, `day`, `week`, and `month`. |
+| `interval` | A positive integer that denotes the interval for the `frequency` value, which determines how often the trigger runs. For example, if the `interval` is `3` and the `frequency` is `week`, the trigger recurs every 3 weeks. |
+| `schedule` | The recurrence schedule for the trigger. A trigger with a specified `frequency` value alters its recurrence based on a recurrence schedule. The `schedule` property contains modifications for the recurrence that are based on minutes, hours, weekdays, month days, and week number. |
> [!IMPORTANT]
-> For UTC timezone, the startTime and endTime need to follow format 'yyyy-MM-ddTHH:mm:ss**Z**', while for other timezones, startTime and endTime follow 'yyyy-MM-ddTHH:mm:ss'.
->
-> Per ISO 8601 standard, the _Z_ suffix to timestamp mark the datetime to UTC timezone, and render timeZone field useless. While missing _Z_ suffix for UTC time zone will result in an error upon trigger _activation_.
+> For the UTC time zone, `startTime` and `endTime` need to follow the format `yyyy-MM-ddTHH:mm:ss`**Z**. For other time zones, `startTime` and `endTime` follow the `yyyy-MM-ddTHH:mm:ss` format.
+>
+> Per the ISO 8601 standard, the _Z_ suffix marks the datetime as UTC and renders the `timeZone` field useless. If the _Z_ suffix is missing for the UTC time zone, the result is an error upon trigger _activation_.
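Tying these elements together, the following sketch shows just the scheduling-related properties of a trigger that fires at 8:30 AM and 5:30 PM every day in the trigger's time zone; the `startTime` value is a placeholder:

```json
{
    "startTime": "2017-12-08T00:00:00Z",
    "timeZone": "UTC",
    "recurrence": {
        "frequency": "Day",
        "interval": 1,
        "schedule": {
            "minutes": [30],
            "hours": [8, 17]
        }
    }
}
```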
### Schema defaults, limits, and examples

| JSON property | Type | Required | Default value | Valid values | Example |
|: |: |: |: |: |: |
-| **startTime** | String | Yes | None | ISO-8601 Date-Times | for UTC time zone `"startTime" : "2013-01-09T09:30:00-08:00Z"` <br> for other time zone `"2013-01-09T09:30:00-08:00"` |
-| **timeZone** | String | Yes | None | [Time Zone Values](#time-zone-option) | `"UTC"` |
-| **recurrence** | Object | Yes | None | Recurrence object | `"recurrence" : { "frequency" : "monthly", "interval" : 1 }` |
-| **interval** | Number | No | 1 | 1 to 1,000 | `"interval":10` |
-| **endTime** | String | Yes | None | A Date-Time value that represents a time in the future. | for UTC time zone `"endTime" : "2013-02-09T09:30:00-08:00Z"` <br> for other time zone `"endTime" : "2013-02-09T09:30:00-08:00"`|
-| **schedule** | Object | No | None | Schedule object | `"schedule" : { "minute" : [30], "hour" : [8,17] }` |
+| `startTime` | String | Yes | None | ISO-8601 Date-Times | For the UTC time zone: `"startTime" : "2013-01-09T09:30:00Z"` <br> For other time zones: `"2013-01-09T09:30:00-08:00"` |
+| `timeZone` | String | Yes | None | [Time zone values](#time-zone-option) | `"UTC"` |
+| `recurrence` | Object | Yes | None | Recurrence object | `"recurrence" : { "frequency" : "monthly", "interval" : 1 }` |
+| `interval` | Number | No | 1 | 1 to 1,000 | `"interval":10` |
+| `endTime` | String | Yes | None | Date-Time value that represents a time in the future | For the UTC time zone: `"endTime" : "2013-02-09T09:30:00Z"` <br> For other time zones: `"endTime" : "2013-02-09T09:30:00-08:00"`|
+| `schedule` | Object | No | None | Schedule object | `"schedule" : { "minute" : [30], "hour" : [8,17] }` |
### Time zone option
-Here are some of time zones supported for Schedule triggers:
+Here are some of the time zones supported for schedule triggers.
-| Time Zone | UTC Offset (Non-Daylight Saving) | timeZone Value | Observe Daylight Saving | Time Stamp Format |
+| Time zone | UTC offset (Non-daylight saving) | timeZone value | Observe daylight saving | Time stamp format |
| : | : | : | : | : |
| Coordinated Universal Time | 0 | `UTC` | No | `'yyyy-MM-ddTHH:mm:ssZ'` |
| Pacific Time (PT) | -8 | `Pacific Standard Time` | Yes | `'yyyy-MM-ddTHH:mm:ss'` |
Here are some of time zones supported for Schedule triggers:
| India Standard Time (IST) | +5:30 | `India Standard Time` | No | `'yyyy-MM-ddTHH:mm:ss'` |
| China Standard Time | +8 | `China Standard Time` | No | `'yyyy-MM-ddTHH:mm:ss'` |
-This list is incomplete. For complete list of time zone options, explore in the portal [Trigger creation page](#azure-data-factory-and-synapse-portal-experience)
+This list is incomplete. For a complete list of time-zone options, see the [Trigger creation page](#azure-data-factory-and-azure-synapse-portal-experience) in the portal.
### startTime property
-The following table shows you how the **startTime** property controls a trigger run:
+
+The following table shows you how the `startTime` property controls a trigger run.
| startTime value | Recurrence without schedule | Recurrence with schedule |
|: |: |: |
| Start time in past | Calculates the first future execution time after the start time and runs at that time.<br/><br/>Runs subsequent executions based on calculating from the last execution time.<br/><br/>See the example that follows this table. | The trigger starts _no sooner than_ the specified start time. The first occurrence is based on the schedule that's calculated from the start time.<br/><br/>Runs subsequent executions based on the recurrence schedule. |
| Start time in future or at present | Runs once at the specified start time.<br/><br/>Runs subsequent executions based on calculating from the last execution time. | The trigger starts _no sooner_ than the specified start time. The first occurrence is based on the schedule that's calculated from the start time.<br/><br/>Runs subsequent executions based on the recurrence schedule. |
-Let's see an example of what happens when the start time is in the past, with a recurrence, but no schedule. Assume that the current time is `2017-04-08 13:00`, the start time is `2017-04-07 14:00`, and the recurrence is every two days. (The **recurrence** value is defined by setting the **frequency** property to "day" and the **interval** property to 2.) Notice that the **startTime** value is in the past and occurs before the current time.
+Let's see an example of what happens when the start time is in the past, with a recurrence, but no schedule. Assume that the current time is `2017-04-08 13:00`, the start time is `2017-04-07 14:00`, and the recurrence is every two days. (The `recurrence` value is defined by setting the `frequency` property to `day` and the `interval` property to `2`.) Notice that the `startTime` value is in the past and occurs before the current time.
Under these conditions, the first execution is at `2017-04-09` at `14:00`. The Scheduler engine calculates execution occurrences from the start time. Any instances in the past are discarded. The engine uses the next instance that occurs in the future. In this scenario, the start time is `2017-04-07` at `2:00pm`, so the next instance is two days from that time, which is `2017-04-09` at `2:00pm`.
-The first execution time is the same even if the **startTime** value is `2017-04-05 14:00` or `2017-04-01 14:00`. After the first execution, subsequent executions are calculated by using the schedule. Therefore, the subsequent executions are at `2017-04-11` at `2:00pm`, then `2017-04-13` at `2:00pm`, then `2017-04-15` at `2:00pm`, and so on.
+The first execution time is the same even if the `startTime` value is `2017-04-05 14:00` or `2017-04-01 14:00`. After the first execution, subsequent executions are calculated by using the schedule. Therefore, the subsequent executions are at `2017-04-11` at `2:00pm`, then `2017-04-13` at `2:00pm`, then `2017-04-15` at `2:00pm`, and so on.
-Finally, when the hours or minutes arenΓÇÖt set in the schedule for a trigger, the hours or minutes of the first execution are used as the defaults.
+Finally, when the hours or minutes aren't set in the schedule for a trigger, the hours or minutes of the first execution are used as the defaults.
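Expressed in JSON, the recurrence from this walkthrough corresponds to a sketch like the following (the UTC time zone is assumed here):

```json
{
    "startTime": "2017-04-07T14:00:00Z",
    "timeZone": "UTC",
    "recurrence": {
        "frequency": "Day",
        "interval": 2
    }
}
```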
### schedule property
-On one hand, the use of a schedule can limit the number of trigger executions. For example, if a trigger with a monthly frequency is scheduled to run only on day 31, the trigger runs only in those months that have a 31st day.
+The use of a schedule can limit the number of trigger executions. For example, if a trigger with a monthly frequency is scheduled to run only on day 31, the trigger runs only in those months that have a 31st day.
-Whereas, a schedule can also expand the number of trigger executions. For example, a trigger with a monthly frequency that's scheduled to run on month days 1 and 2, runs on the 1st and 2nd days of the month, rather than once a month.
+A schedule can also expand the number of trigger executions. For example, a trigger with a monthly frequency that's scheduled to run on month days 1 and 2 runs on the first and second days of the month, rather than once a month.
-If multiple **schedule** elements are specified, the order of evaluation is from the largest to the smallest schedule setting. The evaluation starts with week number, and then month day, weekday, hour, and finally, minute.
+If multiple `schedule` elements are specified, the order of evaluation is from the largest to the smallest schedule setting. The evaluation starts with the week number, and then the month day, weekday, hour, and finally, minute.
-The following table describes the **schedule** elements in detail:
+The following table describes the `schedule` elements in detail.
| JSON element | Description | Valid values |
|: |: |: |
-| **minutes** | Minutes of the hour at which the trigger runs. | <ul><li>Integer</li><li>Array of integers</li></ul>
-| **hours** | Hours of the day at which the trigger runs. | <ul><li>Integer</li><li>Array of integers</li></ul> |
-| **weekDays** | Days of the week on which the trigger runs. The value can be specified with a weekly frequency only. | <ul><li>Monday, Tuesday, Wednesday, Thursday, Friday, Saturday, Sunday</li><li>Array of day values (maximum array size is 7)</li><li>Day values are not case-sensitive</li></ul> |
-| **monthlyOccurrences** | Days of the month on which the trigger runs. The value can be specified with a monthly frequency only. | <ul><li>Array of **monthlyOccurrence** objects: `{ "day": day, "occurrence": occurrence }`.</li><li>The **day** attribute is the day of the week on which the trigger runs. For example, a **monthlyOccurrences** property with a **day** value of `{Sunday}` means every Sunday of the month. The **day** attribute is required.</li><li>The **occurrence** attribute is the occurrence of the specified **day** during the month. For example, a **monthlyOccurrences** property with **day** and **occurrence** values of `{Sunday, -1}` means the last Sunday of the month. The **occurrence** attribute is optional.</li></ul> |
-| **monthDays** | Day of the month on which the trigger runs. The value can be specified with a monthly frequency only. | <ul><li>Any value <= -1 and >= -31</li><li>Any value >= 1 and <= 31</li><li>Array of values</li></ul> |
+| `minutes` | Minutes of the hour at which the trigger runs. | <ul><li>Integer</li><li>Array of integers</li></ul>
+| `hours` | Hours of the day at which the trigger runs. | <ul><li>Integer</li><li>Array of integers</li></ul> |
+| `weekDays` | Days of the week on which the trigger runs. The value can be specified with a weekly frequency only. | <ul><li>Monday, Tuesday, Wednesday, Thursday, Friday, Saturday, Sunday.</li><li>Array of day values (maximum array size is 7).</li><li>Day values aren't case sensitive.</li></ul> |
+| `monthlyOccurrences` | Days of the month on which the trigger runs. The value can be specified with a monthly frequency only. | <ul><li>Array of `monthlyOccurrences` objects: `{ "day": day, "occurrence": occurrence }`.</li><li>The `day` attribute is the day of the week on which the trigger runs. For example, a `monthlyOccurrences` property with a `day` value of `{Sunday}` means every Sunday of the month. The `day` attribute is required.</li><li>The `occurrence` attribute is the occurrence of the specified `day` during the month. For example, a `monthlyOccurrences` property with `day` and `occurrence` values of `{Sunday, -1}` means the last Sunday of the month. The `occurrence` attribute is optional.</li></ul> |
+| `monthDays` | Day of the month on which the trigger runs. The value can be specified with a monthly frequency only. | <ul><li>Any value <= -1 and >= -31</li><li>Any value >= 1 and <= 31</li><li>Array of values</li></ul> |
## Examples of trigger recurrence schedules
-This section provides examples of recurrence schedules and focuses on the **schedule** object and its elements.
+This section provides examples of recurrence schedules and focuses on the `schedule` object and its elements.
-The examples assume that the **interval** value is 1, and that the **frequency** value is correct according to the schedule definition. For example, you can't have a **frequency** value of "day" and also have a "monthDays" modification in the **schedule** object. Restrictions such as these are mentioned in the table in the previous section.
+The examples assume that the `interval` value is `1` and that the `frequency` value is correct according to the schedule definition. For example, you can't have a `frequency` value of `day` and also have a `monthDays` modification in the `schedule` object. Restrictions such as these are mentioned in the table in the previous section.
| Example | Description |
|: |: |
The examples assume that the **interval** value is 1, and that the **frequency**
| `{"minutes":[15], "hours":[5,17]}` | Run at 5:15 AM and 5:15 PM every day. |
| `{"minutes":[15,45], "hours":[5,17]}` | Run at 5:15 AM, 5:45 AM, 5:15 PM, and 5:45 PM every day. |
| `{"minutes":[0,15,30,45]}` | Run every 15 minutes. |
-| `{hours":[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23]}` | Run every hour. This trigger runs every hour. The minutes are controlled by the **startTime** value, when a value is specified. If a value not specified, the minutes are controlled by the creation time. For example, if the start time or creation time (whichever applies) is 12:25 PM, the trigger runs at 00:25, 01:25, 02:25, ..., and 23:25.<br/><br/>This schedule is equivalent to having a trigger with a **frequency** value of "hour," an **interval** value of 1, and no **schedule**. This schedule can be used with different **frequency** and **interval** values to create other triggers. For example, when the **frequency** value is "month," the schedule runs only once a month, rather than every day, when the **frequency** value is "day." |
-| `{"minutes":[0]}` | Run every hour on the hour. This trigger runs every hour on the hour starting at 12:00 AM, 1:00 AM, 2:00 AM, and so on.<br/><br/>This schedule is equivalent to a trigger with a **frequency** value of "hour" and a **startTime** value of zero minutes, or no **schedule** but a **frequency** value of "day." If the **frequency** value is "week" or "month," the schedule executes one day a week or one day a month only, respectively. |
+| `{"hours":[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23]}` | Run every hour. The minutes are controlled by the `startTime` value, when a value is specified. If a value isn't specified, the minutes are controlled by the creation time. For example, if the start time or creation time (whichever applies) is 12:25 PM, the trigger runs at 00:25, 01:25, 02:25, ..., and 23:25.<br/><br/>This schedule is equivalent to having a trigger with a `frequency` value of `hour`, an `interval` value of `1`, and no `schedule`. This schedule can be used with different `frequency` and `interval` values to create other triggers. For example, when the `frequency` value is `month`, the schedule runs only once a month, rather than every day as it does when the `frequency` value is `day`. |
+| `{"minutes":[0]}` | Run every hour on the hour. This trigger runs every hour on the hour starting at 12:00 AM, 1:00 AM, 2:00 AM, and so on.<br/><br/>This schedule is equivalent to a trigger with a `frequency` value of `hour` and a `startTime` value of zero minutes, or no `schedule` but a `frequency` value of `day`. If the `frequency` value is `week` or `month`, the schedule executes one day a week or one day a month only, respectively. |
| `{"minutes":[15]}` | Run at 15 minutes past every hour. This trigger runs every hour at 15 minutes past the hour starting at 00:15 AM, 1:15 AM, 2:15 AM, and so on, and ending at 11:15 PM. |
| `{"hours":[17], "weekDays":["saturday"]}` | Run at 5:00 PM on Saturdays every week. |
| `{"hours":[17], "weekDays":["monday", "wednesday", "friday"]}` | Run at 5:00 PM on Monday, Wednesday, and Friday every week. |
The examples assume that the **interval** value is 1, and that the **frequency**
| `{"minutes":[0,15,30,45], "weekDays":["monday", "tuesday", "wednesday", "thursday", "friday"]}` | Run every 15 minutes on weekdays. |
| `{"minutes":[0,15,30,45], "hours": [9, 10, 11, 12, 13, 14, 15, 16], "weekDays":["monday", "tuesday", "wednesday", "thursday", "friday"]}` | Run every 15 minutes on weekdays between 9:00 AM and 4:45 PM. |
| `{"weekDays":["tuesday", "thursday"]}` | Run on Tuesdays and Thursdays at the specified start time. |
-| `{"minutes":[0], "hours":[6], "monthDays":[28]}` | Run at 6:00 AM on the 28th day of every month (assuming a **frequency** value of "month"). |
+| `{"minutes":[0], "hours":[6], "monthDays":[28]}` | Run at 6:00 AM on the 28th day of every month (assuming a `frequency` value of `month`). |
| `{"minutes":[0], "hours":[6], "monthDays":[-1]}` | Run at 6:00 AM on the last day of the month. To run a trigger on the last day of a month, use -1 instead of day 28, 29, 30, or 31. |
| `{"minutes":[0], "hours":[6], "monthDays":[1,-1]}` | Run at 6:00 AM on the first and last day of every month. |
| `{"monthDays":[1,14]}` | Run on the first and 14th day of every month at the specified start time. |
The examples assume that the **interval** value is 1, and that the **frequency**
| `{"monthlyOccurrences":[{"day":"friday", "occurrence":-3}]}` | Run on the third Friday from the end of the month, every month, at the specified start time. |
| `{"minutes":[15], "hours":[5], "monthlyOccurrences":[{"day":"friday", "occurrence":1},{"day":"friday", "occurrence":-1}]}` | Run on the first and last Friday of every month at 5:15 AM. |
| `{"monthlyOccurrences":[{"day":"friday", "occurrence":1},{"day":"friday", "occurrence":-1}]}` | Run on the first and last Friday of every month at the specified start time. |
-| `{"monthlyOccurrences":[{"day":"friday", "occurrence":5}]}` | Run on the fifth Friday of every month at the specified start time. When there's no fifth Friday in a month, the pipeline doesn't run, since it's scheduled to run only on fifth Fridays. To run the trigger on the last occurring Friday of the month, consider using -1 instead of 5 for the **occurrence** value. |
+| `{"monthlyOccurrences":[{"day":"friday", "occurrence":5}]}` | Run on the fifth Friday of every month at the specified start time. When there's no fifth Friday in a month, the pipeline doesn't run because it's scheduled to run only on fifth Fridays. To run the trigger on the last occurring Friday of the month, consider using -1 instead of 5 for the `occurrence` value. |
| `{"minutes":[0,15,30,45], "monthlyOccurrences":[{"day":"friday", "occurrence":-1}]}` | Run every 15 minutes on the last Friday of the month. | | `{"minutes":[15,45], "hours":[5,17], "monthlyOccurrences":[{"day":"wednesday", "occurrence":3}]}` | Run at 5:15 AM, 5:45 AM, 5:15 PM, and 5:45 PM on the third Wednesday of every month. | ## Related content -- For detailed information about triggers, see [Pipeline execution and triggers](concepts-pipeline-execution-triggers.md#trigger-execution-with-json).-- Learn how to reference trigger metadata in pipeline, see [Reference Trigger Metadata in Pipeline Runs](how-to-use-trigger-parameterization.md)
+- For more information about triggers, see [Pipeline execution and triggers](concepts-pipeline-execution-triggers.md#trigger-execution-with-json).
+- To learn how to reference trigger metadata in a pipeline, see [Reference trigger metadata in pipeline runs](how-to-use-trigger-parameterization.md).
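To show where these `schedule` fragments sit in a complete trigger definition, here's a minimal sketch that deploys and starts a schedule trigger with Azure PowerShell. It encodes one of the rows above (run at 6:00 AM on the last day of each month); the resource group, factory, pipeline, and file names are hypothetical.

```powershell
# Sketch only: a schedule trigger that fires at 6:00 AM on the last day of every month
$triggerDefinition = @"
{
    "properties": {
        "type": "ScheduleTrigger",
        "typeProperties": {
            "recurrence": {
                "frequency": "Month",
                "interval": 1,
                "startTime": "2024-07-01T00:00:00Z",
                "timeZone": "UTC",
                "schedule": { "minutes": [0], "hours": [6], "monthDays": [-1] }
            }
        },
        "pipelines": [
            { "pipelineReference": { "type": "PipelineReference", "referenceName": "MyPipeline" } }
        ]
    }
}
"@
Set-Content -Path "MyTrigger.json" -Value $triggerDefinition
Set-AzDataFactoryV2Trigger -ResourceGroupName "myRG" -DataFactoryName "myADF" -Name "MyTrigger" -DefinitionFile "MyTrigger.json"
Start-AzDataFactoryV2Trigger -ResourceGroupName "myRG" -DataFactoryName "myADF" -Name "MyTrigger" -Force
```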
data-factory How To Create Tumbling Window Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-create-tumbling-window-trigger.md
You can also rerun a canceled window. The rerun will take the _latest_ published
This section shows you how to use Azure PowerShell to create, start, and monitor a trigger. ### Prerequisites
data-factory How To Invoke Ssis Package Ssis Activity Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-invoke-ssis-package-ssis-activity-powershell.md
This article describes how to run a SQL Server Integration Services (SSIS) packa
## Prerequisites Create an Azure-SSIS integration runtime (IR) if you don't have one already by following the step-by-step instructions in the [Tutorial: Provisioning Azure-SSIS IR](./tutorial-deploy-ssis-packages-azure.md).
data-factory How To Invoke Ssis Package Stored Procedure Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-invoke-ssis-package-stored-procedure-activity.md
In this section, you trigger a pipeline run and then monitor it.
## Azure PowerShell In this section, you use Azure PowerShell to create a Data Factory pipeline with a stored procedure activity that invokes an SSIS package.
data-factory How To Schedule Azure Ssis Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-schedule-azure-ssis-integration-runtime.md
Alternatively, you can create web activities in Data Factory or Azure Synapse An
You can also chain an Execute SSIS Package activity between two web activities that start and stop your IR. Your IR will then start and stop on demand, before or after your package execution. For more information about the Execute SSIS Package activity, see [Run an SSIS package with the Execute SSIS Package activity in the Azure portal](how-to-invoke-ssis-package-ssis-activity.md). ## Prerequisites
data-factory Manage Azure Ssis Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/manage-azure-ssis-integration-runtime.md
On **Manage** hub, switch to the **Integration runtimes** page.
## Azure PowerShell After you provision and start an instance of Azure-SSIS integration runtime, you can reconfigure it by running a sequence of `Stop` - `Set` - `Start` PowerShell cmdlets consecutively. For example, the following PowerShell script changes the number of nodes allocated for the Azure-SSIS integration runtime instance to five.
data-factory Monitor Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/monitor-integration-runtime.md
- Self-hosted integration runtime - Azure-SQL Server Integration Services (SSIS) integration runtime To get the status of an instance of integration runtime (IR), run the following PowerShell command:
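A sketch of the usual call, with hypothetical names:

```powershell
# Sketch: the -Status switch returns detailed runtime status, such as node and version information
Get-AzDataFactoryV2IntegrationRuntime -ResourceGroupName "myRG" -DataFactoryName "myADF" -Name "myIntegrationRuntime" -Status
```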
data-factory Monitor Programmatically https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/monitor-programmatically.md
This article describes how to monitor a pipeline in a data factory by using different software development kits (SDKs). ## Data range
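As a hedged PowerShell illustration of such SDK-based monitoring (resource names are hypothetical), pipeline runs can be listed for a given date window:

```powershell
# Sketch: list pipeline runs updated in the last seven days, then show a summary
$runs = Get-AzDataFactoryV2PipelineRun -ResourceGroupName "myRG" -DataFactoryName "myADF" `
    -LastUpdatedAfter (Get-Date).AddDays(-7) -LastUpdatedBefore (Get-Date)
$runs | Select-Object PipelineName, RunId, Status
```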
data-factory Quickstart Create Data Factory Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/quickstart-create-data-factory-bicep.md
Last updated 05/15/2024
This quickstart describes how to use Bicep to create an Azure data factory. The pipeline you create in this data factory **copies** data from one folder to another folder in an Azure blob storage. For a tutorial on how to **transform** data using Azure Data Factory, see [Tutorial: Transform data using Spark](transform-data-using-spark.md). > [!NOTE] > This article does not provide a detailed introduction of the Data Factory service. For an introduction to the Azure Data Factory service, see [Introduction to Azure Data Factory](introduction.md).
data-factory Quickstart Create Data Factory Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/quickstart-create-data-factory-powershell.md
This quickstart describes how to use PowerShell to create an Azure Data Factory.
### Azure PowerShell Install the latest Azure PowerShell modules by following instructions in [How to install and configure Azure PowerShell](/powershell/azure/install-azure-powershell).
data-factory Quickstart Create Data Factory Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/quickstart-create-data-factory-resource-manager-template.md
Last updated 10/20/2023
This quickstart describes how to use an Azure Resource Manager template (ARM template) to create an Azure data factory. The pipeline you create in this data factory **copies** data from one folder to another folder in an Azure blob storage. For a tutorial on how to **transform** data using Azure Data Factory, see [Tutorial: Transform data using Spark](transform-data-using-spark.md). > [!NOTE] > This article does not provide a detailed introduction of the Data Factory service. For an introduction to the Azure Data Factory service, see [Introduction to Azure Data Factory](introduction.md).
data-factory Quickstart Create Data Factory Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/quickstart-create-data-factory-rest-api.md
If you don't have an Azure subscription, create a [free](https://azure.microsoft
## Prerequisites * **Azure subscription**. If you don't have a subscription, you can create a [free trial](https://azure.microsoft.com/pricing/free-trial/) account. * **Azure Storage account**. You use the blob storage as **source** and **sink** data store. If you don't have an Azure storage account, see the [Create a storage account](../storage/common/storage-account-create.md) article for steps to create one.
data-factory Bulk Copy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/scripts/bulk-copy-powershell.md
Last updated 01/05/2024
This sample PowerShell script copies data from multiple tables in Azure SQL Database to Azure Synapse Analytics. [!INCLUDE [sample-powershell-install](../../../includes/sample-powershell-install-no-ssh-az.md)]
data-factory Copy Azure Blob Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/scripts/copy-azure-blob-powershell.md
Last updated 01/05/2024
This sample PowerShell script creates a pipeline in Azure Data Factory that copies data from one location to another location in an Azure Blob Storage. [!INCLUDE [sample-powershell-install](../../../includes/sample-powershell-install-no-ssh-az.md)]
data-factory Deploy Azure Ssis Integration Runtime Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/scripts/deploy-azure-ssis-integration-runtime-powershell.md
Last updated 10/20/2023
This sample PowerShell script creates an Azure-SSIS integration runtime that can run your SSIS packages in Azure. [!INCLUDE [sample-powershell-install](../../../includes/sample-powershell-install-no-ssh-az.md)]
data-factory Hybrid Copy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/scripts/hybrid-copy-powershell.md
Last updated 01/05/2024
This sample PowerShell script creates a pipeline in Azure Data Factory that copies data from a SQL Server database to an Azure Blob Storage. [!INCLUDE [sample-powershell-install](../../../includes/sample-powershell-install-no-ssh-az.md)]
data-factory Incremental Copy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/scripts/incremental-copy-powershell.md
Last updated 01/05/2024
This sample PowerShell script loads only new or updated records from a source data store to a sink data store after the initial full copy of data from the source to the sink. [!INCLUDE [sample-powershell-install](../../../includes/sample-powershell-install-no-ssh-az.md)]
data-factory Transform Data Spark Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/scripts/transform-data-spark-powershell.md
Last updated 01/05/2024
This sample PowerShell script creates a pipeline that transforms data in the cloud by running Spark program on an Azure HDInsight Spark cluster. [!INCLUDE [sample-powershell-install](../../../includes/sample-powershell-install-no-ssh-az.md)]
data-factory Transform Data Using Custom Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/transform-data-using-custom-activity.md
There are two types of activities that you can use in an Azure Data Factory or S
To move data to/from a data store that the service does not support, or to transform/process data in a way that isn't supported by the service, you can create a **Custom activity** with your own data movement or transformation logic and use the activity in a pipeline. The custom activity runs your customized code logic on an **Azure Batch** pool of virtual machines. See the following articles if you're new to the Azure Batch service:
data-factory Tutorial Bulk Copy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-bulk-copy.md
If you don't have an Azure subscription, create a [free](https://azure.microsoft
## Prerequisites * **Azure PowerShell**. Follow the instructions in [How to install and configure Azure PowerShell](/powershell/azure/install-azure-powershell). * **Azure Storage account**. The Azure Storage account is used as staging blob storage in the bulk copy operation.
data-factory Tutorial Deploy Ssis Packages Azure Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-deploy-ssis-packages-azure-powershell.md
In this tutorial, you will:
## Prerequisites - **Azure subscription**. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
data-factory Tutorial Deploy Ssis Packages Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-deploy-ssis-packages-azure.md
In this tutorial, you complete the following steps:
## Prerequisites - **Azure subscription**. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
data-factory Tutorial Hybrid Copy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-hybrid-copy-powershell.md
In this section, you create a blob container named **adftutorial** in your Azure
#### Install Azure PowerShell Install the latest version of Azure PowerShell if you don't already have it on your machine. For detailed instructions, see [How to install and configure Azure PowerShell](/powershell/azure/install-azure-powershell).
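A sketch of that setup, plus the container creation mentioned above (resource group and storage account names are hypothetical):

```powershell
# Sketch: install the Az module, sign in, and create the adftutorial blob container
Install-Module -Name Az -Repository PSGallery -Force
Connect-AzAccount
$ctx = (Get-AzStorageAccount -ResourceGroupName "myRG" -Name "mystorageaccount").Context
New-AzStorageContainer -Name "adftutorial" -Context $ctx
```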
data-factory Tutorial Incremental Copy Change Tracking Feature Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-incremental-copy-change-tracking-feature-portal.md
In this tutorial, you create two pipelines that perform the following operations
* **Azure SQL Database**. You use a database in Azure SQL Database as the *source* data store. If you don't have one, see [Create a database in Azure SQL Database](/azure/azure-sql/database/single-database-create-quickstart) for steps to create it. * **Azure storage account**. You use Blob Storage as the *sink* data store. If you don't have an Azure storage account, see [Create a storage account](../storage/common/storage-account-create.md) for steps to create one. Create a container named *adftutorial*. ## Create a data source table in Azure SQL Database
data-factory Tutorial Incremental Copy Change Tracking Feature Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-incremental-copy-change-tracking-feature-powershell.md
You perform the following steps in this tutorial:
> * Add or update data in the source table > * Create, run, and monitor the incremental copy pipeline ## Overview In a data integration solution, incrementally loading data after initial data loads is a widely used scenario. In some cases, the changed data within a period in your source data store can be easily sliced (for example, by LastModifyTime or CreationTime). In other cases, there's no explicit way to identify the delta data since the last time you processed the data. The Change Tracking technology supported by data stores such as Azure SQL Database and SQL Server can be used to identify the delta data. This tutorial describes how to use Azure Data Factory with SQL Change Tracking technology to incrementally load delta data from Azure SQL Database into Azure Blob Storage. For more information about SQL Change Tracking, see [Change tracking in SQL Server](/sql/relational-databases/track-changes/about-change-tracking-sql-server).
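As a hedged illustration of the prerequisite step, enabling change tracking on a source database and table could look like the following; the server, database, table, and credentials are hypothetical, and `Invoke-Sqlcmd` comes from the SqlServer PowerShell module:

```powershell
# Sketch: turn on change tracking for the database, then for the table to be copied incrementally
Invoke-Sqlcmd -ServerInstance "myserver.database.windows.net" -Database "SourceDB" `
    -Username "sqladmin" -Password "<password>" -Query @"
ALTER DATABASE SourceDB
SET CHANGE_TRACKING = ON (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);
ALTER TABLE dbo.data_source_table ENABLE CHANGE_TRACKING;
"@
```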
data-factory Tutorial Incremental Copy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-incremental-copy-powershell.md
If you don't have an Azure subscription, create a [free](https://azure.microsoft
## Prerequisites * **Azure SQL Database**. You use the database as the source data store. If you don't have a database in Azure SQL Database, see [Create a database in Azure SQL Database](/azure/azure-sql/database/single-database-create-quickstart) for steps to create one. * **Azure Storage**. You use the blob storage as the sink data store. If you don't have a storage account, see [Create a storage account](../storage/common/storage-account-create.md) for steps to create one. Create a container named adftutorial.
data-factory Tutorial Transform Data Hive Virtual Network Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-transform-data-hive-virtual-network-portal.md
If you don't have an Azure subscription, create a [free](https://azure.microsoft
## Prerequisites - **Azure Storage account**. You create a Hive script and upload it to Azure Storage. The output from the Hive script is stored in this storage account. In this sample, the HDInsight cluster uses this Azure Storage account as its primary storage. - **Azure Virtual Network.** If you don't have an Azure virtual network, create one by following [these instructions](../virtual-network/quick-create-portal.md). In this sample, the HDInsight cluster is in an Azure virtual network. Here's a sample configuration of the virtual network.
data-factory Tutorial Transform Data Hive Virtual Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-transform-data-hive-virtual-network.md
If you don't have an Azure subscription, create a [free](https://azure.microsoft
## Prerequisites - **Azure Storage account**. You create a Hive script and upload it to Azure Storage. The output from the Hive script is stored in this storage account. In this sample, the HDInsight cluster uses this Azure Storage account as its primary storage. - **Azure Virtual Network.** If you don't have an Azure virtual network, create one by following [these instructions](../virtual-network/quick-create-portal.md). In this sample, the HDInsight cluster is in an Azure virtual network. Here's a sample configuration of the virtual network.
data-factory Tutorial Transform Data Spark Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-transform-data-spark-portal.md
If you don't have an Azure subscription, create a [free account](https://azure.m
## Prerequisites * **Azure storage account**. You create a Python script and an input file, and you upload them to Azure Storage. The output from the Spark program is stored in this storage account. The on-demand Spark cluster uses the same storage account as its primary storage.
data-factory Tutorial Transform Data Spark Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-transform-data-spark-powershell.md
If you don't have an Azure subscription, create a [free](https://azure.microsoft
## Prerequisites * **Azure Storage account**. You create a Python script and an input file, and upload them to Azure Storage. The output from the Spark program is stored in this storage account. The on-demand Spark cluster uses the same storage account as its primary storage. * **Azure PowerShell**. Follow the instructions in [How to install and configure Azure PowerShell](/powershell/azure/install-azure-powershell).
data-share Share Your Data Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-share/share-your-data-arm.md
Learn how to set up a new Azure Data Share from an Azure storage account by using an Azure Resource Manager template (ARM template). And, start sharing your data with customers and partners outside of your Azure organization. For a list of the supported data stores, see [Supported data stores in Azure Data Share](./supported-data-stores.md). If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
data-share Share Your Data Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-share/share-your-data-bicep.md
Learn how to set up a new Azure Data Share from an Azure storage account using Bicep, and start sharing your data with customers and partners outside of your Azure organization. For a list of the supported data stores, see [Supported data stores in Azure Data Share](./supported-data-stores.md). ## Prerequisites
data-share Share Your Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-share/share-your-data.md
Create an Azure Data Share resource in an Azure resource group.
Start by preparing your environment for PowerShell. You can either run PowerShell commands locally or use the Bash environment in the Azure Cloud Shell. :::image type="icon" source="~/reusable-content/ce-skilling/azure/media/cloud-shell/launch-cloud-shell-button.png" alt-text="Button to launch the Azure Cloud Shell." border="false" link="https://shell.azure.com":::
data-share Subscribe To Data Share https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-share/subscribe-to-data-share.md
Copy your invitation ID for use in the next section.
Start by preparing your environment for PowerShell. You can either run PowerShell commands locally or use the Bash environment in the Azure Cloud Shell. :::image type="icon" source="~/reusable-content/ce-skilling/azure/media/cloud-shell/launch-cloud-shell-button.png" alt-text="Button to launch the Azure Cloud Shell." border="false" link="https://shell.azure.com":::
databox-online Azure Stack Edge Gpu Create Virtual Machine Marketplace Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-create-virtual-machine-marketplace-image.md
Previously updated : 05/24/2022 Last updated : 06/26/2024 #Customer intent: As an IT admin, I need to understand how to create and upload Azure VM images that I can use with my Azure Stack Edge Pro device so that I can deploy VMs on the device.
Before you can use Azure Marketplace images for Azure Stack Edge, make sure you'
## Search for Azure Marketplace images
-You'll now identify a specific Azure Marketplace image that you wish to use. Azure Marketplace hosts thousands of VM images.
+Identify a specific Azure Marketplace image that you wish to use. Azure Marketplace hosts thousands of VM images.
To find some of the most commonly used Marketplace images that match your search criteria, run the following command.
az vm image list --all --location "westus" --publisher "MicrosoftWindowsserver"
az vm image list --all --publisher "Canonical" ```
-Here is an example output when VM images of a certain publisher, offer, and SKU were queried.
+Here's an example output when VM images of a certain publisher, offer, and SKU were queried.
```azurecli PS /home/user> az vm image list --all --publisher "Canonical" --offer "UbuntuServer" --sku "12.04.4-LTS"
PS /home/user> az vm image list --all --publisher "Canonical" --offer "UbuntuSer
PS /home/user> ```
-In this example, we will select Windows Server 2019 Datacenter Core, version 2019.0.20190410. We will identify this image by its Universal Resource Number (“URN”).
+In this example, we'll select Windows Server 2019 Datacenter Core, version 2019.0.20190410. We'll identify this image by its Universal Resource Number ("URN").
:::image type="content" source="media/azure-stack-edge-create-virtual-machine-marketplace-image/marketplace-image-1.png" alt-text="List of marketplace images"::: ### Commonly used Marketplace images
-Below is a list of URNs for some of the most commonly used images. If you just want the latest version of a particular OS, the version number can be replaced with “latest” in the URN. For example, “MicrosoftWindowsServer:WindowsServer:2019-Datacenter:Latest”.
+Below is a list of URNs for some of the most commonly used images. If you just want the latest version of a particular OS, the version number can be replaced with "latest" in the URN. For example, "MicrosoftWindowsServer:WindowsServer:2019-Datacenter:Latest".
| OS | SKU | Version | URN |
|--|--|--|-|
| Windows Server | 2019 Datacenter | 17763.1879.2104091832 | MicrosoftWindowsServer:WindowsServer:2019-Datacenter:17763.1879.2104091832 |
-| Windows Server | 2019 Datacenter (30 GB small disk) | 17763.1879.2104091832 | MicrosoftWindowsServer:WindowsServer:2019-Datacenter-smalldisk:17763.1879.2104091832 |
+| Windows Server | 2019 Datacenter (30-GB small disk) | 17763.1879.2104091832 | MicrosoftWindowsServer:WindowsServer:2019-Datacenter-smalldisk:17763.1879.2104091832 |
| Windows Server | 2019 Datacenter Core | 17763.1879.2104091832 | MicrosoftWindowsServer:WindowsServer:2019-Datacenter-Core:17763.1879.2104091832 |
-| Windows Server | 2019 Datacenter Core (30 GB small disk) | 17763.1879.2104091832 | MicrosoftWindowsServer:WindowsServer:2019-Datacenter-Core-smalldisk:17763.1879.2104091832 |
+| Windows Server | 2019 Datacenter Core (30-GB small disk) | 17763.1879.2104091832 | MicrosoftWindowsServer:WindowsServer:2019-Datacenter-Core-smalldisk:17763.1879.2104091832 |
| Windows Desktop | Windows 10 20H2 Pro | 19042.928.2104091209 | MicrosoftWindowsDesktop:Windows-10:20h2-pro:19042.928.2104091209 |
| Ubuntu Server | Canonical Ubuntu Server 18.04 LTS | 18.04.202002180 | Canonical:UbuntuServer:18.04-LTS:18.04.202002180 |
| Ubuntu Server | Canonical Ubuntu Server 16.04 LTS | 16.04.202104160 | Canonical:UbuntuServer:16.04-LTS:16.04.202104160 |
| CentOS | CentOS 8.1 | 8.1.2020062400 | OpenLogic:CentOS:8_1:8.1.2020062400 |
-| CentOS | CentOS 7.7 | 7.7.2020062400 | OpenLogic:CentOS:7.7:7.7.2020062400 |
-
## Create a new managed disk from the Marketplace image
Create an Azure Managed Disk from your chosen Marketplace image.
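A minimal sketch of this step, with hypothetical resource names; the `az` commands run from a PowerShell session, as in the transcripts here, and the `$diskAccessSAS` line that follows parses the `grant-access` output:

```azurecli
# Sketch: create a managed disk from the chosen Marketplace image, then grant temporary read access
$urn = "MicrosoftWindowsServer:WindowsServer:2019-Datacenter:Latest"
az disk create --resource-group "myRG" --name "myDisk" --image-reference $urn
$sas = az disk grant-access --resource-group "myRG" --name "myDisk" --duration-in-seconds 3600 --access-level Read
```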
```azurecli
$diskAccessSAS = ($sas | ConvertFrom-Json)[0].accessSas
```
-Here is an example output:
+Here's an example output:
```output
PS /home/user> $urn = "MicrosoftWindowsServer:WindowsServer:2019-Datacenter:Latest"
PS /home/user>
## Export a VHD from the managed disk to Azure Storage
-This step will export a VHD from the managed disk to your preferred Azure blob storage account. This VHD can then be used to create VM images on Azure Stack Edge.
+This step exports a VHD from the managed disk to your preferred Azure blob storage account. This VHD can then be used to create VM images on Azure Stack Edge.
-1. Set the destination storage account where the VHD will be copied.
+1. Set the destination storage account where the VHD is copied.
```azurecli $storageAccountName = <destination storage account name>
This step will export a VHD from the managed disk to your preferred Azure blob s
Start-AzureStorageBlobCopy -AbsoluteUri $diskAccessSAS -DestContainer $containerName -DestContext $destContext -DestBlob $destBlobName ```
- The VHD copy will take several minutes to complete. Ensure the copy has completed before proceeding by running the following command. The status field will show ΓÇ£SuccessΓÇ¥ when complete.
+ The VHD copy takes several minutes to complete. Ensure the copy completes before proceeding by running the following command. The status field shows "Success" when complete.
```azurecli
Get-AzureStorageBlobCopyState -Container $containerName -Context $destContext -Blob $destBlobName
```
-Here is an example output:
+Here's an example output:
```output PS /home/user> $storageAccountName = "edgeazurevmeus"
databox-online Azure Stack Edge Gpu System Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-system-requirements.md
Previously updated : 04/18/2024 Last updated : 06/26/2024
databox-online Azure Stack Edge Mini R System Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-mini-r-system-requirements.md
Previously updated : 02/05/2021 Last updated : 06/26/2024 # Azure Stack Edge Mini R system requirements
databox-online Azure Stack Edge Pro 2 System Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-2-system-requirements.md
Previously updated : 06/02/2023 Last updated : 06/26/2024
databox-online Azure Stack Edge Pro R System Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-r-system-requirements.md
Previously updated : 02/05/2021 Last updated : 06/26/2024 # Azure Stack Edge Pro R system requirements
databox Data Box Disk Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-disk-security.md
The Data Box service is protected by the following features.
## Managing personal data Azure Data Box Disk collects and displays personal information in the following key instances in the service:
databox Data Box How To Set Data Tier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-how-to-set-data-tier.md
Azure Data Box moves large amounts of data to Azure by shipping you a proprietar
This article describes how the data that is uploaded by Data Box can be moved to a Hot, Cool, or Archive blob tier. This article applies to both Azure Data Box and Azure Data Box Heavy. ## Choose the correct storage tier for your data
databox Data Box Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-security.md
Data Box provides a secure solution for data protection by ensuring that only authorized entities can view, modify, or delete your data. This article describes the Azure Data Box security features that help protect each of the Data Box solution components and the data stored on them. ## Data flow through components
ddos-protection Manage Ddos Ip Protection Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/manage-ddos-ip-protection-cli.md
In this QuickStart, you'll enable DDoS IP protection and link it to a public IP
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - Azure CLI installed locally or Azure Cloud Shell If you choose to install and use the CLI locally, this quickstart requires Azure CLI version 2.0.56 or later. To find the version, run `az --version`. If you need to install or upgrade, see [Install the Azure CLI]( /cli/azure/install-azure-cli).
ddos-protection Manage Ddos Ip Protection Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/manage-ddos-ip-protection-template.md
In this QuickStart, you'll learn how to use an Azure Resource Manager template (
:::image type="content" source="./media/manage-ddos-ip-protection-portal/ddos-ip-protection-diagram.png" alt-text="Diagram of DDoS IP Protection protecting the Public IP address." lightbox="./media/manage-ddos-ip-protection-portal/ddos-ip-protection-diagram.png"::: If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
ddos-protection Manage Ddos Protection Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/manage-ddos-protection-bicep.md
This QuickStart describes how to use Bicep to create a distributed denial of ser
:::image type="content" source="./media/manage-ddos-protection/ddos-network-protection-diagram-simple.png" alt-text="Diagram of DDoS Network Protection." lightbox="./media/manage-ddos-protection/ddos-network-protection-diagram-simple.png"::: ## Prerequisites
ddos-protection Manage Ddos Protection Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/manage-ddos-protection-cli.md
In this QuickStart, you'll create a DDoS protection plan and link it to a virtua
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - Azure CLI installed locally or Azure Cloud Shell If you choose to install and use the CLI locally, this quickstart requires Azure CLI version 2.0.56 or later. To find the version, run `az --version`. If you need to install or upgrade, see [Install the Azure CLI]( /cli/azure/install-azure-cli).
ddos-protection Manage Ddos Protection Powershell Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/manage-ddos-protection-powershell-ip.md
In this QuickStart, you'll enable DDoS IP protection and link it to a public IP
- Azure PowerShell installed locally or Azure Cloud Shell - If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 9.0.0 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure. ## Enable DDoS IP Protection for a public IP address
ddos-protection Manage Ddos Protection Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/manage-ddos-protection-powershell.md
In this QuickStart, you'll create a DDoS protection plan and link it to a virtua
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - Azure PowerShell installed locally or Azure Cloud Shell ## Create a DDoS Protection plan
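A minimal sketch (plan, resource group, and virtual network names are hypothetical):

```powershell
# Sketch: create a DDoS protection plan, then link it to an existing virtual network
$plan = New-AzDdosProtectionPlan -ResourceGroupName "MyResourceGroup" -Name "MyDdosProtectionPlan" -Location "eastus"
$vnet = Get-AzVirtualNetwork -ResourceGroupName "MyResourceGroup" -Name "MyVnet"
$vnet.DdosProtectionPlan = New-Object Microsoft.Azure.Commands.Network.Models.PSResourceId
$vnet.DdosProtectionPlan.Id = $plan.Id
$vnet.EnableDdosProtection = $true
$vnet | Set-AzVirtualNetwork
```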
ddos-protection Manage Ddos Protection Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/manage-ddos-protection-template.md
This QuickStart describes how to use an Azure Resource Manager template (ARM tem
:::image type="content" source="./media/manage-ddos-protection/ddos-network-protection-diagram-simple.png" alt-text="Diagram of DDoS Network Protection." lightbox="./media/manage-ddos-protection/ddos-network-protection-diagram-simple.png"::: If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
dedicated-hsm Quickstart Create Hsm Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dedicated-hsm/quickstart-create-hsm-powershell.md
This article describes how you can create an Azure Dedicated HSM using the
* If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account before you begin. > [!IMPORTANT] > While the **Az.DedicatedHsm** PowerShell module is in preview, you must install it separately
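A sketch of that separate install; the `-AllowPrerelease` switch is an assumption for a module still in preview:

```powershell
# Sketch: install the preview Az.DedicatedHsm module on its own
Install-Module -Name Az.DedicatedHsm -AllowPrerelease -Scope CurrentUser
```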
dedicated-hsm Tutorial Deploy Hsm Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dedicated-hsm/tutorial-deploy-hsm-powershell.md
A typical, high availability, multi-region deployment architecture is as follows
This tutorial focuses on a pair of HSMs and the required [ExpressRoute gateway](../expressroute/expressroute-howto-add-gateway-portal-resource-manager.md) (see Subnet 1 above) being integrated into an existing virtual network (see VNET 1 above). All other resources are standard Azure resources. The same integration process can be used for HSMs in subnet 4 on VNET 3 above. ## Prerequisites
defender-for-cloud Apply Security Baseline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/apply-security-baseline.md
Last updated 06/27/2023
# Review hardening recommendations > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is End Of Life (EOL) as of June 30, 2024. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
> [!NOTE] > As the Log Analytics agent (also known as MMA) is set to retire in [August 2024](https://azure.microsoft.com/updates/were-retiring-the-log-analytics-agent-in-azure-monitor-on-31-august-2024/), all Defender for Servers features that currently depend on it, including those described on this page, will be available through either [Microsoft Defender for Endpoint integration](integration-defender-for-endpoint.md) or [agentless scanning](concept-agentless-data-collection.md), before the retirement date. For more information about the roadmap for each of the features that currently rely on the Log Analytics agent, see [this announcement](upcoming-changes.md#defender-for-cloud-plan-and-strategy-for-the-log-analytics-agent-deprecation).
Use the security recommendations described in this article to assess the machine
|-|:-|
|Release state:|Preview.<br>[!INCLUDE [Legalese](../../includes/defender-for-cloud-preview-legal-text.md)]|
|Pricing:|Free|
-|Prerequisites:|Machines must (1) be members of a workgroup, (2) have the Guest Configuration extension, (3) have a system-assigned managed-identity, and (4) be running a supported OS:<br>• Windows Server 2012, 2012r2, 2016 or 2019<br>• Ubuntu 14.04, 16.04, 17.04, 18.04 or 20.04<br>• Debian 7, 8, 9, or 10<br>• CentOS 7 or 8<br>• Red Hat Enterprise Linux (RHEL) 7 or 8<br>• Oracle Linux 7 or 8<br>• SUSE Linux Enterprise Server 12|
+|Prerequisites:|Machines must (1) be members of a workgroup, (2) have the Guest Configuration extension, (3) have a system-assigned managed-identity, and (4) be running a supported OS:<br>• Windows Server 2012, 2012r2, 2016 or 2019<br>• Ubuntu 14.04, 16.04, 17.04, 18.04 or 20.04<br>• Debian 7, 8, 9, or 10<br>• CentOS 7 or 8 (CentOS is End Of Life (EOL) as of June 30, 2024. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).)<br>• Red Hat Enterprise Linux (RHEL) 7 or 8<br>• Oracle Linux 7 or 8<br>• SUSE Linux Enterprise Server 12|
|Required roles and permissions:|To install the Guest Configuration extension and its prerequisites, **write** permission is required on the relevant machines.<br>To **view** the recommendations and explore the OS baseline data, **read** permission is required at the subscription level.| |Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Microsoft Azure operated by 21Vianet)|
defender-for-cloud Concept Cloud Security Posture Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-cloud-security-posture-management.md
The following table summarizes each plan and their cloud availability.
| [ServiceNow Integration](integration-servicenow.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP |
| [Critical assets protection](critical-assets-protection.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP |
| [Governance to drive remediation at-scale](governance-rules.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP |
-| [Data security posture management, Sensitive data scanning](concept-data-security-posture.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP |
+| [Data security posture management (DSPM), Sensitive data scanning](concept-data-security-posture.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP* |
| [Agentless discovery for Kubernetes](concept-agentless-containers.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP |
| [Agentless code-to-cloud containers vulnerability assessment](agentless-vulnerability-assessment-azure.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP |
+(*) In GCP, sensitive data discovery [supports only Cloud Storage](concept-data-security-posture-prepare.md#whats-supported).
+ > [!NOTE] > Starting March 7, 2024, Defender CSPM must be enabled to have premium DevOps security capabilities that include code-to-cloud contextualization powering security explorer and attack paths and pull request annotations for Infrastructure-as-Code security findings. See DevOps security [support and prerequisites](devops-support.md) to learn more.
defender-for-cloud Defender For Storage Classic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-classic.md
For more clarification about Defender for Storage (classic), see the [commonly a
|Release state:|General availability (GA)|
|Pricing:|**Microsoft Defender for Storage (classic)** is billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/)|
|Protected storage types:|[Blob Storage](https://azure.microsoft.com/services/storage/blobs/) (Standard/Premium StorageV2, Block Blobs) <br>[Azure Files](../storage/files/storage-files-introduction.md) (over REST API and SMB)<br>[Azure Data Lake Storage Gen2](../storage/blobs/data-lake-storage-introduction.md) (Standard/Premium accounts with hierarchical namespaces enabled)|
-|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure Government<br>:::image type="icon" source="./media/icons/no-icon.png"::: Microsoft Azure operated by 21Vianet<br>:::image type="icon" source="./media/icons/no-icon.png"::: Connected AWS accounts|
+|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure Government<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Microsoft Azure operated by 21Vianet<br>:::image type="icon" source="./media/icons/no-icon.png"::: Connected AWS accounts|
## What are the benefits of Microsoft Defender for Storage (classic)?
defender-for-cloud Defender For Storage Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-introduction.md
Microsoft Defender for Storage provides comprehensive security by analyzing the
Defender for Storage includes:

- Activity Monitoring
-- Sensitive data threat detection (preview feature, new plan only)
+- Sensitive data threat detection (new plan only)
+
- Malware Scanning (new plan only)

:::image type="content" source="media/defender-for-storage-introduction/defender-for-storage-overview.gif" alt-text="Animated diagram showing how Defender for Storage protects against common threats to data.":::
With a simple agentless setup at scale, you can [enable Defender for Storage](tu
|Aspect|Details|
|-|:-|
|Release state:|General Availability (GA)|
-|Feature availability:|- Activity monitoring (security alerts) ΓÇô General Availability (GA)<br>- Malware Scanning ΓÇô General Availability (GA)<br>- Sensitive data threat detection (Sensitive Data Discovery) ΓÇô Preview|
+|Feature availability:|- Activity monitoring (security alerts) ΓÇô General Availability (GA)<br>- Malware Scanning ΓÇô General Availability (GA)<br>- Sensitive data threat detection (Sensitive Data Discovery) ΓÇô General Availability (GA)|
|Pricing:|**Microsoft Defender for Storage** pricing applies to commercial clouds. Learn more about [pricing and availability per region.](https://azure.microsoft.com/pricing/details/defender-for-cloud/)<br>|
|<br><br> Supported storage types:|[Blob Storage](https://azure.microsoft.com/products/storage/blobs/) (Standard/Premium StorageV2, including Data Lake Gen2): Activity monitoring, Malware Scanning, Sensitive Data Discovery<br>Azure Files (over REST API and SMB): Activity monitoring |
|Required roles and permissions:|For Malware Scanning and sensitive data threat detection at subscription and storage account levels, you need Owner roles (subscription owner/storage account owner) or specific roles with corresponding data actions. To enable Activity Monitoring, you need 'Security Admin' permissions. Read more about the required permissions.|
-|Clouds:|:::image type="icon" source="../defender-for-cloud/medi))<br>:::image type="icon" source="../defender-for-cloud/media/icons/no-icon.png"::: Microsoft Azure operated by 21Vianet<br>:::image type="icon" source="../defender-for-cloud/media/icons/no-icon.png"::: Connected AWS accounts|
+|Clouds:|:::image type="icon" source="../defender-for-cloud/medi))<br>:::image type="icon" source="../defender-for-cloud/media/icons/no-icon.png"::: Connected AWS accounts|
-\* Azure DNS Zone isn't supported for Malware Scanning and sensitive data threat detection.
+\* Azure DNS Zone isn't supported for malware scanning and sensitive data threat detection.
## What are the benefits of Microsoft Defender for Storage?
defender-for-cloud Incidents Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/incidents-reference.md
Title: Reference table for all incidents
-description: This article lists the incidents visible in Microsoft Defender for Cloud
+description: This article lists the incidents visible in Microsoft Defender for Cloud and provides information on managing security incidents.
Previously updated : 10/15/2023 Last updated : 06/26/2024 # Incidents - a reference guide
Learn how to [manage security incidents](incidents.md#managing-security-incident
| Alert | Description | Severity |
|--|--|--|
-| **Security incident detected suspicious virtual machines activity** | This incident indicates suspicious activity on your virtual machines. Multiple alerts from different Defender for Cloud plans have been triggered revealing a similar pattern on your virtual machines. This might indicate a threat actor has gained unauthorized access to your environment and is attempting to compromise it. | Medium/High |
-| **Security incident detected suspicious source IP activity** | This incident indicates that suspicious activity has been detected on the same source IP. Multiple alerts from different Defender for Cloud plans have been triggered on the same IP address, which increases the fidelity of malicious activity in your environment. Suspicious activity on the same IP address might indicate that an attacker has gained unauthorized access to your environment and is attempting to compromise it. | Medium/High |
-| **Security incident detected on multiple resources** | This incident indicates that suspicious activity had been detected on your cloud resources. Multiple alerts from different Defender for Cloud plan have been triggered, revealing similar attack methods were performed on your cloud resources. This might indicate a threat actor has gained unauthorized access to your environment and is attempting to compromise it. | Medium/High |
| **Security incident detected suspicious user activity (Preview)** | This incident indicates suspicious user operations in your environment. Multiple alerts from different Defender for Cloud plans have been triggered by this user, which increases the fidelity of malicious activity in your environment. While this activity may be legitimate, a threat actor might utilize such operations to compromise resources in your environment. This might indicate that the account is compromised and is being used with malicious intent. | High |
| **Security incident detected suspicious service principal activity (Preview)** | This incident indicates suspicious service principal operations in your environment. Multiple alerts from different Defender for Cloud plans have been triggered by this service principal, which increases the fidelity of malicious activity in your environment. While this activity may be legitimate, a threat actor might utilize such operations to compromise resources in your environment. This might indicate that the service principal is compromised and is being used with malicious intent. | High |
| **Security incident detected suspicious crypto mining activity (Preview)** | Scenario 1: This incident indicates that suspicious crypto mining activity has been detected following suspicious user or service principal activity. Multiple alerts from different Defender for Cloud plans have been triggered on the same resource, which increases the fidelity of malicious activity in your environment. Suspicious account activity might indicate a threat actor gained unauthorized access to your environment, and the succeeding crypto mining activity may suggest that they successfully compromised your resource and are using it for mining cryptocurrencies, which can lead to increased costs for your organization. <br><br> Scenario 2: This incident indicates that suspicious crypto mining activity has been detected following a brute force attack on the same virtual machine resource. Multiple alerts from different Defender for Cloud plans have been triggered on the same resource, which increases the fidelity of malicious activity in your environment. The brute force attack on the virtual machine might indicate that a threat actor is attempting to gain unauthorized access to your environment, and the succeeding crypto mining activity may suggest they successfully compromised your resource and are using it for mining cryptocurrencies, which can lead to increased costs for your organization. | High |
Learn how to [manage security incidents](incidents.md#managing-security-incident
|**Security incident detected suspicious DNS activity (Preview)** | Scenario 1: This incident indicates that suspicious DNS activity has been detected. Multiple alerts from different Defender for Cloud plans have been triggered on the same resource, which increases the fidelity of malicious activity in your environment. Suspicious DNS activity might indicate that a threat actor gained unauthorized access to your environment and is attempting to compromise it. <br><br> Scenario 2: This incident indicates that suspicious DNS activity has been detected. Multiple alerts from different Defender for Cloud plans have been triggered from the same IP address, which increases the fidelity of malicious activity in your environment. Suspicious DNS activity might indicate that a threat actor gained unauthorized access to your environment and is attempting to compromise it. | Medium |
|**Security incident detected suspicious SQL activity (Preview)** | Scenario 1: This incident indicates that suspicious SQL activity has been detected. Multiple alerts from different Defender for Cloud plans have been triggered from the same IP address, which increases the fidelity of malicious activity in your environment. Suspicious SQL activity might indicate that a threat actor is targeting your SQL server and is attempting to compromise it. <br><br> Scenario 2: This incident indicates that suspicious SQL activity has been detected. Multiple alerts from different Defender for Cloud plans have been triggered on the same resource, which increases the fidelity of malicious activity in your environment. Suspicious SQL activity might indicate that a threat actor is targeting your SQL server and is attempting to compromise it. |High|
| **Security incident detected suspicious app service activity (Preview)** | Scenario 1: This incident indicates that suspicious activity has been detected in your app service environment. Multiple alerts from different Defender for Cloud plans have been triggered on the same resource, which increases the fidelity of malicious activity in your environment. Suspicious app service activity might indicate that a threat actor is targeting your application and may be attempting to compromise it. <br><br> Scenario 2: This incident indicates that suspicious activity has been detected in your app service environment. Multiple alerts from different Defender for Cloud plans have been triggered from the same IP address, which increases the fidelity of malicious activity in your environment. Suspicious app service activity might indicate that a threat actor is targeting your application and may be attempting to compromise it. | High |
-| **Security incident detected compromised machine** | This incident indicates suspicious activity on one or more of your virtual machines. Multiple alerts from different Defender for Cloud plans have been triggered in chronological order on the same resource, following the MITRE ATT&CK framework. This might indicate a threat actor has gained unauthorized access to your environment and successfully compromised this machine.| Medium/High |
| **Security incident detected compromised machine with botnet communication** | This incident indicates suspicious botnet activity on your virtual machine. Multiple alerts from different Defender for Cloud plans have been triggered in chronological order on the same resource, following the MITRE ATT&CK framework. This might indicate a threat actor has gained unauthorized access to your environment and is attempting to compromise it. | Medium/High |
| **Security incident detected compromised machines with botnet communication** | This incident indicates suspicious botnet activity on your virtual machines. Multiple alerts from different Defender for Cloud plans have been triggered in chronological order on the same resource, following the MITRE ATT&CK framework. This might indicate a threat actor has gained unauthorized access to your environment and is attempting to compromise it. | Medium/High |
| **Security incident detected compromised machine with malicious outgoing activity** | This incident indicates suspicious outgoing activity on your virtual machine. Multiple alerts from different Defender for Cloud plans have been triggered in chronological order on the same resource, following the MITRE ATT&CK framework. This might indicate a threat actor has gained unauthorized access to your environment and is attempting to compromise it. | Medium/High |
defender-for-cloud Powershell Sample Vulnerability Assessment Azure Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/powershell-sample-vulnerability-assessment-azure-sql.md
Your scan history isn't copied over to the new configuration. Your scan history
## Prerequisites - The user should have `Storage Blob Data Reader` role on the storage account.
Your scan history isn't copied over to the new configuration. Your scan history
## Sample script - MigratingToExpressConfiguration.ps1 ```powershell #Requires -Modules @{ ModuleName="Az.Sql"; ModuleVersion="3.11.0" }
defender-for-cloud Powershell Sample Vulnerability Assessment Baselines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/powershell-sample-vulnerability-assessment-baselines.md
This PowerShell script sets up baselines based on the latest [vulnerability assessment](sql-azure-vulnerability-assessment-overview.md) scan results for all databases in an Azure SQL Server. ## Sample script ```powershell <#
defender-for-cloud Prepurchase Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/prepurchase-plan.md
A Defender for Cloud prepurchase applies to all Defender for Cloud plans. You ca
There's no ratio on which the DCUs are applied. DCUs are equivalent to the purchase currency value and are deducted at retail prices. Like other reservations, the benefit of a prepurchase plan is discounted pricing by committing to a purchase term. The more you buy, the larger the discount you receive.
-For example, if you purchase 5,000 Commit Units for a one year term, you get a 20% discount on Defender for Cloud products at this tier, so you pay only 4,000 USD. You can use these units with Defender for Servers P2 and Defender CSPM plans on 20 Virtual machines (Azure VMs) for one year, which uses up 4800 Commit units. In this example, we use $15/$5 monthly retail price and 1 DCU = $1.
+For example, if you purchase 5,000 Commit Units for a one-year term, you get a 10% discount on Defender for Cloud products at this tier, so you pay only 4,500 USD. You can use these units with the Defender for Servers P2 and Defender CSPM plans on 20 virtual machines (Azure VMs) for one year, which uses up 4,800 Commit Units. In this example, we use a $15/$5 monthly retail price and 1 DCU = $1.
+
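A quick check of the arithmetic in that example:

```powershell
# Worked check of the figures above: 20 VMs at $15 + $5 retail per month, for 12 months
$vmCount = 20
$monthlyRetailPerVm = 15 + 5                      # Defender for Servers P2 + Defender CSPM
$dcusUsed = $vmCount * $monthlyRetailPerVm * 12   # 4,800 DCUs, at 1 DCU = 1 USD
$dcusUsed
```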
+As another example, an enterprise customer has an Annual Commitment Discount (ACD) of 10%. This customer typically consumes 120,000 USD worth of Microsoft Defender for Cloud at retail prices annually. With the ACD applied, their actual usage cost is 108,000 USD.
+
+This year, the customer decided to purchase 100,000 Defender Credit Units (DCUs) for 82,000 USD, which includes an 18% discount off the retail prices. This discount isn't combined with the ACD, so the effective additional discount over the ACD is 8%.
+
+As DCUs are consumed at retail prices, the customer would still need to use 20,000 USD worth of Defender for Cloud at the pay-as-you-go (PAYG) rates, applying the ACD discount.
+
+At the end of the commitment period, the actual Defender for Cloud usage cost for the customer would be 82,000 USD for the DCUs (reflecting the price with the 18% discount) plus 18,000 USD for the PAYG consumption (reflecting the 10% ACD discount), for a total of 100,000 USD.
> [!NOTE] > The mentioned prices are for example purposes only. They aren't intended to represent actual costs.
You can buy Defender for Cloud plans in the [Azure portal](https://portal.azure.
:::image type="content" source="media/prepay-reserved-capacity/purchase-reservations.png" alt-text="Screenshot of purchase reservations for Defender for Cloud." lightbox="media/prepay-reserved-capacity/purchase-reservations.png":::
+> [!NOTE]
+>
+> - The prices listed on the **Reservation** page are always presented in USD.
+> - Defender Credit Units are deducted at USD retail prices.
+ ## Change scope and ownership You can make the following types of changes to a reservation after purchase:
defender-for-cloud Privacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/privacy.md
Last updated 05/30/2024
This article provides information about how you can manage the user data in Microsoft Defender for Cloud. Managing user data includes the ability to access, delete, or export data. A Defender for Cloud user assigned the role of Reader, Owner, Contributor, or Account Administrator can access customer data within the tool. To learn more about the Account Administrator role, see [Built-in roles for Azure role-based access control](../role-based-access-control/built-in-roles.md) to learn more about the Reader, Owner, and Contributor roles. See [Azure subscription administrators](../cost-management-billing/manage/add-change-subscription-administrator.md).
defender-for-cloud Quickstart Automation Alert https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-automation-alert.md
The examples in this quickstart assume you have an existing Logic App. To deploy
## ARM template tutorial If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
When no longer needed, delete the workflow automation using the Azure portal.
## Bicep tutorial ### Review the Bicep file
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
If you're looking for items older than six months, you can find them in the [Arc
|Date | Update | |--|--|
+| June 27 | [General Availability of Checkov IaC Scanning in Defender for Cloud](#general-availability-of-checkov-iac-scanning-in-defender-for-cloud) |
+| June 27 | [Four security incidents have been deprecated](#four-security-incidents-have-been-deprecated) |
| June 24 | [Change in pricing for Defender for Containers in multicloud](#change-in-pricing-for-defender-for-containers-in-multicloud) | | June 10 | [Copilot for Security in Defender for Cloud (Preview)](#copilot-for-security-in-defender-for-cloud-preview) |
+### General Availability of Checkov IaC Scanning in Defender for Cloud
+
+June 27, 2024
+
+We're announcing the general availability of the Checkov integration for Infrastructure-as-Code (IaC) scanning through [MSDO](azure-devops-extension.yml). As part of this release, Checkov replaces Terrascan as the default IaC analyzer that runs as part of the MSDO CLI. Terrascan can still be configured manually through MSDO's [environment variables](https://github.com/microsoft/security-devops-azdevops/wiki), but it no longer runs by default.
+
+Security findings from Checkov will be represented as recommendations for both Azure DevOps and GitHub repositories under the assessments "Azure DevOps repositories should have infrastructure as code findings resolved" and "GitHub repositories should have infrastructure as code findings resolved".
+
+To learn more about DevOps security in Defender for Cloud, see the [DevOps Security Overview](defender-for-devops-introduction.md). To learn how to configure the MSDO CLI, see the [Azure DevOps](azure-devops-extension.yml) or [GitHub](github-action.md) documentation.
+
+### Four security incidents have been deprecated
+
+June 27, 2024
+
+The following security incidents are deprecated from the Defender for Cloud portal:
+
+| Alert | Description | Severity |
+|--|--|--|
+| **Security incident detected suspicious source IP activity** | This incident indicates that suspicious activity has been detected on the same source IP. Multiple alerts from different Defender for Cloud plans have been triggered on the same IP address, which increases the fidelity of malicious activity in your environment. Suspicious activity on the same IP address might indicate that an attacker has gained unauthorized access to your environment and is attempting to compromise it. | Medium/High |
+| **Security incident detected on multiple resources** | This incident indicates that suspicious activity has been detected on your cloud resources. Multiple alerts from different Defender for Cloud plans have been triggered, revealing that similar attack methods were performed on your cloud resources. This might indicate that a threat actor has gained unauthorized access to your environment and is attempting to compromise it. | Medium/High |
+| **Security incident detected compromised machine** | This incident indicates suspicious activity on one or more of your virtual machines. Multiple alerts from different Defender for Cloud plans have been triggered in chronological order on the same resource, following the MITRE ATT&CK framework. This might indicate that a threat actor has gained unauthorized access to your environment and successfully compromised this machine. | Medium/High |
+| **Security incident detected suspicious virtual machines activity** | This incident indicates suspicious activity on your virtual machines. Multiple alerts from different Defender for Cloud plans have been triggered, revealing a similar pattern on your virtual machines. This might indicate that a threat actor has gained unauthorized access to your environment and is attempting to compromise it. | Medium/High |
+
+The security value of these incidents is now available through the Microsoft Defender XDR portal. Learn more about [alerts and incidents in Defender XDR](concept-integration-365.md).
+ ### Change in pricing for Defender for Containers in multicloud June 24, 2024
Learn more about [Defender for open-source databases](defender-for-databases-int
| April 3 | [Defender for open-source relational databases updates](#defender-for-open-source-relational-databases-updates) | | April 2 | [Update to recommendations to align with Azure AI Services resources](#update-to-recommendations-to-align-with-azure-ai-services-resources) | | April 2 | [Deprecation of Cognitive Services recommendation](#deprecation-of-cognitive-services-recommendation) |
-| April 2 | [Containers multicloud recommendations (GA)](#containers-multicloud-recommendations-ga) |
### Defender for Containers is now generally available (GA) for AWS and GCP
This recommendation is already being covered by another networking recommendatio
See the [list of security recommendations](recommendations-reference.md).
-### Containers multicloud recommendations (GA)
-
-April 2, 2024
-
-As part of Defender for Containers multicloud general availability, the following recommendations are announced GA as well:
--- For Azure-
-| **Recommendation** | **Description** | **Assessment Key** |
-| | | |
-| Azure registry container images should have vulnerabilities resolved| Container image vulnerability assessment scans your registry for commonly known vulnerabilities (CVEs) and provides a detailed vulnerability report for each image. Resolving vulnerabilities can greatly improve your security posture, ensuring images are safe to use prior to deployment. | c0b7cfc6-3172-465a-b378-53c7ff2cc0d5 |
-| Azure running container images should have vulnerabilities resolved| Container image vulnerability assessment scans your registry for commonly known vulnerabilities (CVEs) and provides a detailed vulnerability report for each image. This recommendation provides visibility to vulnerable images currently running in your Kubernetes clusters. Remediating vulnerabilities in container images that are currently running is key to improving your security posture, significantly reducing the attack surface for your containerized workloads. | c609cf0f-71ab-41e9-a3c6-9a1f7fe1b8d5 |
--- For GCP-
-| **Recommendation** | **Description** | **Assessment Key** |
-| | | |
-| GCP registry container images should have vulnerability findings resolved (powered by Microsoft Defender Vulnerability Management) - Microsoft Azure | Scans your GCP registries container images for commonly known vulnerabilities (CVEs) and provides a detailed vulnerability report for each image. Resolving vulnerabilities can greatly improve your security posture, ensuring images are safe to use prior to deployment. | c27441ae-775c-45be-8ffa-655de37362ce |
-| GCP running container images should have vulnerability findings resolved (powered by Microsoft Defender Vulnerability Management) - Microsoft Azure | Container image vulnerability assessment scans your registry for commonly known vulnerabilities (CVEs) and provides a detailed vulnerability report for each image. This recommendation provides visibility to vulnerable images currently running in your Google Kubernetes clusters. Remediating vulnerabilities in container images that are currently running is key to improving your security posture, significantly reducing the attack surface for your containerized workloads. | 5cc3a2c1-8397-456f-8792-fe9d0d4c9145 |
--- For AWS-
-| **Recommendation** | **Description** | **Assessment Key** |
-| | | |
-| AWS registry container images should have vulnerability findings resolved (powered by Microsoft Defender Vulnerability Management) | Scans your GCP registries container images for commonly known vulnerabilities (CVEs) and provides a detailed vulnerability report for each image. Resolving vulnerabilities can greatly improve your security posture, ensuring images are safe to use prior to deployment. Scans your AWS registries container images for commonly known vulnerabilities (CVEs) and provides a detailed vulnerability report for each image. Resolving vulnerabilities can greatly improve your security posture, ensuring images are safe to use prior to deployment. | c27441ae-775c-45be-8ffa-655de37362ce |
-| AWS running container images should have vulnerability findings resolved (powered by Microsoft Defender Vulnerability Management) | Container image vulnerability assessment scans your registry for commonly known vulnerabilities (CVEs) and provides a detailed vulnerability report for each image. This recommendation provides visibility to vulnerable images currently running in your Elastic Kubernetes clusters. Remediating vulnerabilities in container images that are currently running is key to improving your security posture, significantly reducing the attack surface for your containerized workloads. | 682b2595-d045-4cff-b5aa-46624eb2dd8f |
-
-The recommendations affect the secure score calculation.
- ## March 2024 |Date | Update |
defender-for-cloud Support Matrix Defender For Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/support-matrix-defender-for-containers.md
# Containers support matrix in Defender for Cloud > [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that is End Of Life (EOL) as of June 30, 2024. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
This article summarizes support information for Container capabilities in Microsoft Defender for Cloud.
Following are the features for each of the domains in Defender for Containers:
| Aspect | Details | |--|--| | Registries and images | **Supported**<br> • ACR registries <br> • [ACR registries protected with Azure Private Link](../container-registry/container-registry-private-link.md) (private registries require access to Trusted Services) <br> • Container images in Docker V2 format <br> • Images with [Open Container Initiative (OCI)](https://github.com/opencontainers/image-spec/blob/main/spec.md) image format specification <br> **Unsupported**<br> • Super-minimalist images such as [Docker scratch](https://hub.docker.com/_/scratch/) images <br> |
-| Operating systems | **Supported** <br> ΓÇó Alpine Linux 3.12-3.19 <br> ΓÇó Red Hat Enterprise Linux 6-9 <br> ΓÇó CentOS 6-9<br> ΓÇó Oracle Linux 6-9 <br> ΓÇó Amazon Linux 1, 2 <br> ΓÇó openSUSE Leap, openSUSE Tumbleweed <br> ΓÇó SUSE Enterprise Linux 11-15 <br> ΓÇó Debian GNU/Linux 7-12 <br> ΓÇó Google Distroless (based on Debian GNU/Linux 7-12) <br> ΓÇó Ubuntu 12.04-22.04 <br> ΓÇó Fedora 31-37<br> ΓÇó Mariner 1-2<br> ΓÇó Windows Server 2016, 2019, 2022|
+| Operating systems | **Supported** <br> • Alpine Linux 3.12-3.19 <br> • Red Hat Enterprise Linux 6-9 <br> • CentOS 6-9 (CentOS is End Of Life (EOL) as of June 30, 2024. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).)<br> • Oracle Linux 6-9 <br> • Amazon Linux 1, 2 <br> • openSUSE Leap, openSUSE Tumbleweed <br> • SUSE Enterprise Linux 11-15 <br> • Debian GNU/Linux 7-12 <br> • Google Distroless (based on Debian GNU/Linux 7-12) <br> • Ubuntu 12.04-22.04 <br> • Fedora 31-37<br> • Mariner 1-2<br> • Windows Server 2016, 2019, 2022 |
| Language specific packages <br><br> | **Supported** <br> • Python <br> • Node.js <br> • .NET <br> • Java <br> • Go | ### Kubernetes distributions and configurations for Azure - Runtime threat protection
Learn how to [use Azure Private Link to connect networks to Azure Monitor](../az
| Aspect | Details | |--|--| | Registries and images | **Supported**<br> • ECR registries <br> • Container images in Docker V2 format <br> • Images with [Open Container Initiative (OCI)](https://github.com/opencontainers/image-spec/blob/main/spec.md) image format specification <br> **Unsupported**<br> • Super-minimalist images such as [Docker scratch](https://hub.docker.com/_/scratch/) images <br> • Public repositories <br> • Manifest lists <br>|
-| Operating systems | **Supported** <br> ΓÇó Alpine Linux 3.12-3.19 <br> ΓÇó Red Hat Enterprise Linux 6-9 <br> ΓÇó CentOS 6-9<br> ΓÇó Oracle Linux 6-9 <br> ΓÇó Amazon Linux 1, 2 <br> ΓÇó openSUSE Leap, openSUSE Tumbleweed <br> ΓÇó SUSE Enterprise Linux 11-15 <br> ΓÇó Debian GNU/Linux 7-12 <br> ΓÇó Google Distroless (based on Debian GNU/Linux 7-12)<br> ΓÇó Ubuntu 12.04-22.04 <br> ΓÇó Fedora 31-37<br> ΓÇó Mariner 1-2<br> ΓÇó Windows server 2016, 2019, 2022|
+| Operating systems | **Supported** <br> • Alpine Linux 3.12-3.19 <br> • Red Hat Enterprise Linux 6-9 <br> • CentOS 6-9 (CentOS is End Of Life (EOL) as of June 30, 2024. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).)<br> • Oracle Linux 6-9 <br> • Amazon Linux 1, 2 <br> • openSUSE Leap, openSUSE Tumbleweed <br> • SUSE Enterprise Linux 11-15 <br> • Debian GNU/Linux 7-12 <br> • Google Distroless (based on Debian GNU/Linux 7-12)<br> • Ubuntu 12.04-22.04 <br> • Fedora 31-37<br> • Mariner 1-2<br> • Windows Server 2016, 2019, 2022 |
| Language specific packages <br><br> | **Supported** <br> • Python <br> • Node.js <br> • .NET <br> • Java <br> • Go | ### Kubernetes distributions/configurations support for AWS - Runtime threat protection
Outbound proxy without authentication and outbound proxy with basic authenticati
| Aspect | Details | |--|--| | Registries and images | **Supported**<br> • Google Registries (GAR, GCR) <br> • Container images in Docker V2 format <br> • Images with [Open Container Initiative (OCI)](https://github.com/opencontainers/image-spec/blob/main/spec.md) image format specification <br> **Unsupported**<br> • Super-minimalist images such as [Docker scratch](https://hub.docker.com/_/scratch/) images <br> • Public repositories <br> • Manifest lists <br>|
-| Operating systems | **Supported** <br> ΓÇó Alpine Linux 3.12-3.19 <br> ΓÇó Red Hat Enterprise Linux 6-9 <br> ΓÇó CentOS 6-9<br> ΓÇó Oracle Linux 6-9 <br> ΓÇó Amazon Linux 1, 2 <br> ΓÇó openSUSE Leap, openSUSE Tumbleweed <br> ΓÇó SUSE Enterprise Linux 11-15 <br> ΓÇó Debian GNU/Linux 7-12 <br> ΓÇó Google Distroless (based on Debian GNU/Linux 7-12)<br> ΓÇó Ubuntu 12.04-22.04 <br> ΓÇó Fedora 31-37<br> ΓÇó Mariner 1-2<br> ΓÇó Windows server 2016, 2019, 2022|
+| Operating systems | **Supported** <br> • Alpine Linux 3.12-3.19 <br> • Red Hat Enterprise Linux 6-9 <br> • CentOS 6-9 (CentOS is End Of Life (EOL) as of June 30, 2024. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).)<br> • Oracle Linux 6-9 <br> • Amazon Linux 1, 2 <br> • openSUSE Leap, openSUSE Tumbleweed <br> • SUSE Enterprise Linux 11-15 <br> • Debian GNU/Linux 7-12 <br> • Google Distroless (based on Debian GNU/Linux 7-12)<br> • Ubuntu 12.04-22.04 <br> • Fedora 31-37<br> • Mariner 1-2<br> • Windows Server 2016, 2019, 2022 |
| Language specific packages <br><br> | **Supported** <br> • Python <br> • Node.js <br> • .NET <br> • Java <br> • Go | ### Kubernetes distributions/configurations support for GCP - Runtime threat protection
Outbound proxy without authentication and outbound proxy with basic authenticati
Defender for Containers relies on the **Defender sensor** for several features. The Defender sensor is supported on the following host operating systems:
- Amazon Linux 2
-- CentOS 8
+- CentOS 8 (CentOS is End Of Life (EOL) as of June 30, 2024. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).)
- Debian 10
- Debian 11
- Google Container-Optimized OS
defender-for-cloud Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md
Title: Important upcoming changes description: Upcoming changes to Microsoft Defender for Cloud that you might need to be aware of and for which you might need to plan. Previously updated : 06/20/2024 Last updated : 06/26/2024 # Important upcoming changes to Microsoft Defender for Cloud
If you're looking for the latest release notes, you can find them in the [What's
| [Reminder of the deprecation scope of adaptive recommendations as of MMA deprecation](#reminder-of-the-deprecation-scope-of-adaptive-recommendations-as-of-mma-deprecation) | June 20, 2024 | August 2024 | | [SQL vulnerability assessment automatic enablement using express configuration on unconfigured servers](#sql-vulnerability-assessment-automatic-enablement-using-express-configuration-on-unconfigured-servers) | June 10, 2024 | July 10, 2024 | | [Changes to identity recommendations](#changes-to-identity-recommendations) | June 3, 2024 | July 2024 |
-| [Removal of FIM over AMA and release of new version over Defender for Endpoint](#removal-of-fim-over-ama-and-release-of-new-version-over-defender-for-endpoint) | May 1, 2024 | June 2024 |
-| [Deprecation of system update recommendations](#deprecation-of-system-update-recommendations) | May 1, 2024 | May 2024 |
-| [Deprecation of MMA related recommendations](#deprecation-of-mma-related-recommendations) | May 1, 2024 | May 2024 |
+| [Removal of FIM over AMA and release of new version over Defender for Endpoint](#removal-of-fim-over-ama-and-release-of-new-version-over-defender-for-endpoint) | May 1, 2024 | August 2024 |
+| [Deprecation of system update recommendations](#deprecation-of-system-update-recommendations) | May 1, 2024 | July 2024 |
+| [Deprecation of MMA related recommendations](#deprecation-of-mma-related-recommendations) | May 1, 2024 | July 2024 |
| [Deprecation of fileless attack alerts](#deprecation-of-fileless-attack-alerts) | April 18, 2024 | May 2024 | | [Change in CIEM assessment IDs](#change-in-ciem-assessment-ids) | April 16, 2024 | May 2024 | | [Deprecation of encryption recommendation](#deprecation-of-encryption-recommendation) | April 3, 2024 | May 2024 |
Will be applied to the following recommendations:
**Announcement date: May 1, 2024**
-**Estimated date for change: June 2024**
+**Estimated date for change: August 2024**
As part of the [MMA deprecation and the Defender for Servers updated deployment strategy](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/microsoft-defender-for-cloud-strategy-and-plan-towards-log/ba-p/3883341), all Defender for Servers security features will be provided via a single agent (MDE), or via agentless scanning capabilities, and without dependency on either Log Analytics Agent (MMA) or Azure Monitoring Agent (AMA). The new version of File Integrity Monitoring (FIM) over Microsoft Defender for Endpoint (MDE) allows you to meet compliance by monitoring critical files and registries in real-time, auditing changes, and detecting suspicious file content alterations.
-As part of this release, FIM experience over AMA will no longer be available through the Defender for Cloud portal beginning May 30th. For more information, see [File Integrity Monitoring experience - changes and migration guidance](prepare-deprecation-log-analytics-mma-agent.md#file-integrity-monitoring-experiencechanges-and-migration-guidance).
+As part of this release, FIM experience over AMA will no longer be available through the Defender for Cloud portal beginning August 2024. For more information, see [File Integrity Monitoring experience - changes and migration guidance](prepare-deprecation-log-analytics-mma-agent.md#file-integrity-monitoring-experiencechanges-and-migration-guidance).
## Deprecation of system update recommendations **Announcement date: May 1, 2024**
-**Estimated date for change: May 2024**
+**Estimated date for change: July 2024**
As use of the Azure Monitor Agent (AMA) and the Log Analytics agent (also known as the Microsoft Monitoring Agent (MMA)) is [phased out in Defender for Servers](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/microsoft-defender-for-cloud-strategy-and-plan-towards-log/ba-p/3883341), the following recommendations that rely on those agents are set for deprecation:
The new recommendations based on Azure Update Manager integration [are Generally
**Announcement date: May 1, 2024**
-**Estimated date for change: May 2024**
+**Estimated date for change: July 2024**
As part of the [MMA deprecation and the Defender for Servers updated deployment strategy](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/microsoft-defender-for-cloud-strategy-and-plan-towards-log/ba-p/3883341), all Defender for Servers security features will be provided via a single agent (MDE), or via agentless scanning capabilities, and without dependency on either Log Analytics Agent (MMA) or Azure Monitoring Agent (AMA).
As part of this, and in a goal to reduce complexity, the following recommendatio
| Auto provisioning of the Log Analytics agent should be enabled on subscriptions | MMA enablement | | Log Analytics agent should be installed on virtual machines | MMA enablement | | Log Analytics agent should be installed on Linux-based Azure Arc-enabled machines | MMA enablement |
-| Guest Configuration extension should be installed on machines | GC enablement |
-| Virtual machines' Guest Configuration extension should be deployed with system-assigned managed identity | GC enablement |
| Adaptive application controls for defining safe applications should be enabled on your machines | AAC |
defender-for-iot Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/release-notes.md
Cloud features may be dependent on a specific sensor version. Such features are
| Version / Patch | Release date | Scope | Supported until | | - | | -- | - | | **24.1** | | | |
+| 24.1.4 |06/2024 | Major |05/2025 |
| 24.1.3 |04/2024 | Major |03/2025 | | 24.1.2 |02/2024 | Major |01/2025 | | **23.2** | | | |
To understand whether a feature is supported in your sensor version, check the r
## Versions 24.1.x
+### Version 24.1.4
+
+**Release date**: 06/2024
+
+**Supported until**: 05/2025
+
+This version includes the following updates and enhancements:
+
+- [Malicious URL path alert](whats-new.md#malicious-url-path-alert)
+ ### Version 24.1.3 **Release date**: 04/2024
defender-for-iot Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/whats-new.md
Features released earlier than nine months ago are described in the [What's new
|Service area |Updates | |||
-| **OT networks** | - [Malicious alert path](#malicious-alert-path)<br> |
+| **OT networks** | - [Malicious URL path alert](#malicious-url-path-alert)<br> |
-### Malicious alert path
+### Malicious URL path alert
The new **Malicious URL path** alert allows users to identify malicious paths in legitimate URLs. It expands Defender for IoT's threat detection to include generic URL signatures, which are crucial for countering a wide range of cyber threats.
deployment-environments Quickstart Create Dev Center Project Azure Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/quickstart-create-dev-center-project-azure-resource-manager.md
Last updated 03/21/2024
This quickstart describes how to use an Azure Resource Manager template (ARM template) to create and configure a dev center and project for creating an environment. If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template opens in the Azure portal.
dev-box Monitor Dev Box https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/monitor-dev-box.md
To use Log Analytics for the logs, follow these steps:
The following example shows how to enable diagnostic logs via the Azure PowerShell Cmdlets. #### Enable diagnostic logs in a storage account
dev-box Quickstart Configure Dev Box Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/quickstart-configure-dev-box-arm-template.md
Last updated 11/28/2023
This quickstart describes how to use an Azure Resource Manager (ARM) template to set up the Microsoft Dev Box Service in Azure. This [Dev Box with customized image](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.devcenter/devbox-with-customized-image) template deploys a simple Dev Box environment that you can use for testing and exploring the service.
devtest-labs Add Artifact Repository https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/add-artifact-repository.md
You can also create custom artifacts that aren't available in the public artifac
This article shows you how to add an artifact repository to your lab by using the Azure portal, an Azure Resource Management (ARM) template, or Azure PowerShell. You can also use an Azure PowerShell or Azure CLI script to automate adding an artifact repository to a lab. ## Prerequisites To add an artifact repository to a lab, you need to know the Git HTTPS clone URL and the personal access token for the GitHub or Azure Repos repository that has the artifact files.
devtest-labs Add Artifact Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/add-artifact-vm.md
To install artifacts on an existing VM:
## Add artifacts to VMs by using Azure PowerShell The following PowerShell script applies an artifact to a VM by using the [Invoke-AzResourceAction](/powershell/module/az.resources/invoke-azresourceaction) cmdlet.
devtest-labs Create Lab Windows Vm Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/create-lab-windows-vm-bicep.md
If you don't have an Azure subscription, [create a free account](https://azure.m
## Review the Bicep file The Bicep file defines the following resource types:
devtest-labs Create Lab Windows Vm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/create-lab-windows-vm-template.md
Last updated 06/19/2024
In this quickstart, you use an Azure Resource Manager (ARM) template to create a lab in Azure DevTest Labs that has one Windows Server 2019 Datacenter virtual machine (VM) in it. DevTest Labs can use ARM templates for many tasks, from creating and provisioning labs to adding users. This quickstart uses the [Creates a lab with a claimed VM](https://azure.microsoft.com/resources/templates/dtl-create-lab-windows-vm-claimed) ARM template from the [Azure Quickstart Templates gallery](/samples/browse/?expanded=azure&products=azure-resource-manager).
devtest-labs Devtest Lab Add Devtest User https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-add-devtest-user.md
To add a member:
<a name="add-an-external-user-to-a-lab-using-powershell"></a> ### Add a DevTest Labs User to a lab by using Azure PowerShell You can add a DevTest Labs User to a lab by using the following Azure PowerShell script. The script requires the user to be in the Microsoft Entra ID. For information about adding an external user to Microsoft Entra ID as a guest, see [Add a new guest user](../active-directory/fundamentals/add-users-azure-active-directory.md#add-a-new-guest-user). If the user isn't in Microsoft Entra ID, use the portal procedure instead.
devtest-labs Devtest Lab Create Custom Image From Vhd Using Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-create-custom-image-from-vhd-using-powershell.md
[!INCLUDE [devtest-lab-upload-vhd-options](../../includes/devtest-lab-upload-vhd-options.md)] ## PowerShell steps
devtest-labs Devtest Lab Create Environment From Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-create-environment-from-arm.md
If you need to create multiple environments for development or testing scenarios
Lab owners and administrators can use Azure PowerShell to create VMs and environments from ARM templates. You can also automate deployment through the Azure CLI by using the [az deployment group create](/cli/azure/deployment/group#az-deployment-group-create) command to create environments. For more information, see [Deploy resources with ARM templates and the Azure CLI](../azure-resource-manager/templates/deploy-cli.md). Automate ARM environment template deployment with Azure PowerShell with these steps:
devtest-labs Devtest Lab Integrate Ci Cd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-integrate-ci-cd.md
For more information and details, see [Use a Resource Manager template](devtest-
Next, create a script to collect the values that task steps like **Azure File Copy** and **PowerShell on Target Machines** use to deploy apps to VMs. You'd ordinarily use these tasks to deploy your own apps to your Azure VMs. The tasks require values such as the VM resource group name, IP address, and FQDN. Save the following script with a name like *GetLabVMParams.ps1*, and check it in to your project's source control system.
devtest-labs Devtest Lab Use Arm And Powershell For Lab Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-use-arm-and-powershell-for-lab-resources.md
Last updated 09/30/2023
Azure DevTest Labs can use Azure Resource Manager (ARM) templates for many tasks, from creating and provisioning labs and virtual machines (VMs) to adding users. In DevTest Labs, you can:
In Azure CLI, use the commands [az lab vm create](/cli/azure/lab/vm#az-lab-vm-cr
In Azure PowerShell, use [New-AzResource](/powershell/module/az.resources/new-azresource) and [New-AzResourceGroupDeployment](/powershell/module/az.resources/new-azresourcegroupdeployment) to provision VMs with ARM templates. Lab administrators can deploy ARM templates to create claimable lab VMs or image factory golden images. Provisioning VMs with PowerShell requires administrator permissions. Lab users can then use the custom images to create VM instances. For more information and instructions, see [Create a DevTest Labs VM with Azure PowerShell](devtest-lab-vm-powershell.md).
devtest-labs Devtest Lab Use Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-use-resource-manager-template.md
You can use Azure Resource Manager (ARM) templates to create preconfigured Azure virtual machines (VMs) in Azure DevTest Labs. Single-VM ARM templates use the [Microsoft.DevTestLab/labs/virtualmachines](/azure/templates/microsoft.devtestlab/2018-09-15/labs/virtualmachines) resource type. Each VM created with this resource type appears as a separate item in the lab's **My virtual machines** list.
devtest-labs Samples Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/samples-cli.md
This article includes sample bash scripts built for Azure CLI for Azure DevTest
[!INCLUDE [sample-cli-install](../../includes/sample-cli-install.md)] All of these scripts have the following prerequisite:
devtest-labs Samples Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/samples-powershell.md
Last updated 09/30/2023
This article includes the sample Azure PowerShell scripts for Azure Lab Services. This article includes the following samples:
digital-twins Resources Customer Data Requests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/resources-customer-data-requests.md
To help keep you in control of personal data, this article describes how to iden
Azure Digital Twins is a developer platform for creating secure digital representations of business environments. It can be used to store information about people and places, and works with [Microsoft Entra ID](../active-directory/fundamentals/active-directory-whatis.md) to identify users and administrators with access to the environment. To view, export, and delete personal data that may be referenced in a data subject request, an Azure Digital Twins administrator can use the [Azure portal](https://portal.azure.com/) for users and roles, or the [Azure Digital Twins REST APIs](/rest/api/azure-digitaltwins/) for digital twins. The Azure portal and REST APIs provide different methods for users to service such data subject requests. ## Identify personal data
dms Create Dms Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/create-dms-bicep.md
Use Bicep to deploy an instance of the Azure Database Migration Service. ## Prerequisites
dms Create Dms Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/create-dms-resource-manager-template.md
Use this Azure Resource Manager template (ARM template) to deploy an instance of the Azure Database Migration Service. If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
dns Delegate Subdomain Ps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/delegate-subdomain-ps.md
If you prefer, you can also delegate a subdomain using the [Azure portal](delega
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. ## Prerequisites
dns Dns Get Started Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-get-started-bicep.md
This quickstart describes how to use Bicep to create a DNS zone with an `A` record in it. ## Prerequisites
dns Dns Get Started Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-get-started-template.md
This quickstart describes how to use an Azure Resource Manager template (ARM Template) to create a DNS zone with an `A` record in it. If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
dns Dns Getstarted Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-getstarted-cli.md
A DNS zone is used to host the DNS records for a particular domain. To start hos
Azure DNS also supports private DNS zones. To learn more about private DNS zones, see [Using Azure DNS for private domains](private-dns-overview.md). For an example on how to create a private DNS zone, see [Get started with Azure DNS private zones using CLI](./private-dns-getstarted-cli.md). [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
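At its core, the quickstart comes down to two CLI calls: one to create the zone and one to add a record. A minimal sketch, assuming an illustrative resource group, domain name, and IP address:

```azurecli-interactive
# Create a DNS zone and add an A record (names and IP address are illustrative).
az network dns zone create --resource-group MyResourceGroup --name contoso.xyz
az network dns record-set a add-record --resource-group MyResourceGroup \
  --zone-name contoso.xyz --record-set-name www --ipv4-address 10.10.10.10
```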
dns Dns Getstarted Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-getstarted-powershell.md
# Quickstart: Create an Azure DNS zone and record using Azure PowerShell In this quickstart, you create your first DNS zone and record using Azure PowerShell. You can also perform these steps using the [Azure portal](dns-getstarted-portal.md) or the [Azure CLI](dns-getstarted-cli.md).
Azure DNS also supports creating private domains. For step-by-step instructions
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - Azure PowerShell installed locally or Azure Cloud Shell ## Create the resource group
dns Dns Operations Recordsets Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-operations-recordsets-portal.md
Previously updated : 06/07/2024 Last updated : 06/27/2024
You can use the Azure portal to remove records from a record set. Removing the l
2. A message appears asking if you want to delete the record set. 3. Verify that the name matches the record set that you want to delete, and then select **Yes**.
- ![A screenshot of adding new records to a recordset.](./media/dns-operations-recordsets-portal/delete-record-set.png)
+ ![A screenshot of deleting a recordset.](./media/dns-operations-recordsets-portal/delete-record-set.png)
4. On the **DNS zone** page, verify that the record set is no longer visible.
+> [!NOTE]
+> If an IP address associated with a record set is [locked](/azure/azure-resource-manager/management/lock-resources), you must remove the lock before deleting the record set.
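If you prefer the command line, you can also remove a lock with the Azure CLI. A minimal sketch, assuming a hypothetical lock named `dnsDeleteLock` on a public IP address named `myPublicIP`:

```azurecli-interactive
# Remove an illustrative CanNotDelete lock from a public IP address
# so that the associated record set can be deleted.
az lock delete \
  --name dnsDeleteLock \
  --resource-group myResourceGroup \
  --resource-name myPublicIP \
  --resource-type Microsoft.Network/publicIPAddresses
```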
+ ## Work with NS and SOA records NS and SOA records that are automatically created are managed differently from other record types.
dns Dns Operations Recordsets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-operations-recordsets.md
This article shows you how to manage DNS records for your DNS zone by using Azur
The examples in this article assume you have already [installed Azure PowerShell, signed in, and created a DNS zone](dns-operations-dnszones.md). ## Introduction
dns Dns Private Resolver Get Started Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-private-resolver-get-started-bicep.md
This quickstart describes how to use Bicep to create Azure DNS Private Resolver. The following figure summarizes the general setup used. Subnet address ranges used in templates are slightly different than those shown in the figure.
dns Dns Private Resolver Get Started Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-private-resolver-get-started-powershell.md
This article walks you through the steps to create your first private DNS zone and record using Azure PowerShell. If you prefer, you can complete this quickstart using [Azure portal](private-dns-getstarted-portal.md). Azure DNS Private Resolver is a new service that enables you to query Azure DNS private zones from an on-premises environment, and vice versa, without deploying VM-based DNS servers. For more information, including benefits, capabilities, and regional availability, see [What is Azure DNS Private Resolver](dns-private-resolver-overview.md).
dns Dns Private Resolver Get Started Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-private-resolver-get-started-template.md
This quickstart describes how to use an Azure Resource Manager template (ARM template) to create Azure DNS Private Resolver. The following figure summarizes the general setup used. Subnet address ranges used in templates are slightly different than those shown in the figure.
dns Dns Protect Private Zones Recordsets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-protect-private-zones-recordsets.md
ms.devlang: azurecli
# How to protect private DNS zones and records Private DNS zones and records are critical resources. Deleting a DNS zone or a single DNS record can result in a service outage. It's important that DNS zones and records are protected against unauthorized or accidental changes.
dns Dns Protect Zones Recordsets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-protect-zones-recordsets.md
ms.devlang: azurecli
# How to protect DNS zones and records DNS zones and records are critical resources. Deleting a DNS zone or a single DNS record can result in a service outage. It's important that DNS zones and records are protected against unauthorized or accidental changes.
dns Dns Reverse Dns For Azure Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-reverse-dns-for-azure-services.md
# Configure reverse DNS for services hosted in Azure This article explains how to configure reverse DNS lookups for services hosted in Azure.
dns Dns Reverse Dns Hosting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-reverse-dns-hosting.md
ms.devlang: azurecli
# Host reverse DNS lookup zones in Azure DNS This article explains how to host reverse DNS lookup zones for your assigned IP ranges with Azure DNS. The IP ranges represented by the reverse lookup zones must be assigned to your organization, typically by your ISP.
dns Dns Web Sites Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-web-sites-custom-domain.md
If you don't have an Azure subscription, create a [free account](https://azure
> [!NOTE] > In this tutorial, `contoso.com` is used as an example domain name. Replace `contoso.com` with your own domain name. ## Sign in to Azure
dns Private Dns Getstarted Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/private-dns-getstarted-cli.md
A DNS zone is used to host the DNS records for a particular domain. To start hos
:::image type="content" source="media/private-dns-portal/private-dns-quickstart-summary.png" alt-text="Summary diagram of the quickstart setup." border="false" lightbox="media/private-dns-portal/private-dns-quickstart-summary.png"::: ## Prerequisites
dns Private Dns Getstarted Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/private-dns-getstarted-powershell.md
This article walks you through the steps to create your first private DNS zone and record using Azure PowerShell. A DNS zone is used to host the DNS records for a particular domain. To start hosting your domain in Azure DNS, you need to create a DNS zone for that domain name. Each DNS record for your domain is then created inside this DNS zone. To publish a private DNS zone to your virtual network, you specify the list of virtual networks that are allowed to resolve records within the zone. These are called *linked* virtual networks. When autoregistration is enabled, Azure DNS also updates the zone records whenever a virtual machine is created, changes its IP address, or is deleted.
If you don't have an Azure subscription, create a [free account](https://azure
If you prefer, you can complete this quickstart using [Azure CLI](private-dns-getstarted-cli.md). ## Create the resource group
dns Dns Cli Create Dns Zone Record https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/scripts/dns-cli-create-dns-zone-record.md
This Azure CLI script example creates a DNS zone and record for a domain name.
[!INCLUDE [sample-cli-install](../../../includes/sample-cli-install.md)] ## Sample script
dns Find Unhealthy Dns Records https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/scripts/find-unhealthy-dns-records.md
The following Azure PowerShell script finds unhealthy DNS records in Azure DNS public zones. ```azurepowershell-interactive <#
event-grid Blob Event Quickstart Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/blob-event-quickstart-bicep.md
Azure Event Grid is an eventing service for the cloud. In this article, you use a Bicep file to create a Blob storage account, subscribe to events for that blob storage, and trigger an event to view the result. Typically, you send events to an endpoint that processes the event data and takes actions. However, to simplify this article, you send the events to a web app that collects and displays the messages. ## Prerequisites
event-grid Blob Event Quickstart Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/blob-event-quickstart-portal.md
In this article, you use the Azure portal to do the following tasks:
1. Trigger an event by uploading a file to the blob storage. 1. View the result in a handler web app. Typically, you send events to an endpoint that processes the event data and takes actions. To keep it simple, you send events to a web app that collects and displays the messages. When you're finished, you see that the event data has been sent to the web app.
event-grid Blob Event Quickstart Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/blob-event-quickstart-template.md
Azure Event Grid is an eventing service for the cloud. In this article, you use an Azure Resource Manager template (ARM template) to create a Blob storage account, subscribe to events for that blob storage, and trigger an event to view the result. Typically, you send events to an endpoint that processes the event data and takes actions. However, to simplify this article, you send the events to a web app that collects and displays the messages. If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
event-grid Custom Event Quickstart Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/custom-event-quickstart-portal.md
In this article, you use the Azure portal to do the following tasks:
## Prerequisites [!INCLUDE [register-provider.md](./includes/register-provider.md)]
event-grid Custom Event Quickstart Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/custom-event-quickstart-powershell.md
When you're finished, you see that the event data has been sent to the web app.
![View results](./media/custom-event-quickstart-powershell/view-result.png) This article requires that you're running the latest version of Azure PowerShell. If you need to install or upgrade, see [Install and configure Azure PowerShell](/powershell/azure/install-azure-powershell).
event-grid Custom Event Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/custom-event-quickstart.md
When you're finished, you see that the event data has been sent to the web app.
:::image type="content" source="./media/custom-event-quickstart/viewer-record-inserted-event.png" alt-text="Screenshot showing the Event Grid Viewer sample with a sample event."::: [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
event-grid Custom Event To Eventhub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/custom-event-to-eventhub.md
Title: 'Quickstart: Send custom events to Event Hubs - Event Grid, Azure CLI'
-description: 'Quickstart: Use Azure Event Grid and Azure CLI to publish a topic, and subscribe to that event. An event hub is used for the endpoint.'
+ Title: 'Quickstart: Send custom events to an event hub - Event Grid, Azure CLI'
+description: Learn how to use Azure Event Grid and the Azure CLI to publish a topic and subscribe to that event, by using an event hub for the endpoint.
Last updated 01/31/2024
-# Quickstart: Route custom events to Azure Event Hubs with Azure CLI and Event Grid
+# Quickstart: Route custom events to an event hub by using Event Grid and the Azure CLI
-[Azure Event Grid](overview.md) is a highly scalable and serverless event broker that you can use to integrate applications using events. Event Grid delivers events to [supported event handlers](event-handlers.md) and Azure Event Hubs is one of them. In this article, you use Azure CLI for the following steps:
+[Azure Event Grid](overview.md) is a highly scalable and serverless event broker that you can use to integrate applications via events. Event Grid delivers events to [supported event handlers](event-handlers.md), and Azure Event Hubs is one of them.
-1. Create an Event Grid custom topic.
-1. Create an Azure Event Hubs subscription for the custom topic.
-1. Send sample events to the custom topic.
-1. Verify that those events are delivered to the event hub.
+In this quickstart, you use the Azure CLI to create an Event Grid custom topic and an Event Hubs subscription for that topic. You then send sample events to the custom topic and verify that those events are delivered to an event hub.
## Create a resource group
-Event Grid topics are Azure resources, and must be placed in an Azure resource group. The resource group is a logical collection into which Azure resources are deployed and managed.
+Event Grid topics are Azure resources, and they must be placed in an Azure resource group. The resource group is a logical collection into which Azure resources are deployed and managed.
-Create a resource group with the [az group create](/cli/azure/group#az-group-create) command. The following example creates a resource group named **gridResourceGroup** in the **westus2** location.
+Create a resource group by using the [az group create](/cli/azure/group#az-group-create) command. The following example creates a resource group named `gridResourceGroup` in the `westus2` location.
-> [!NOTE]
-> Select **Try it** next to the CLI example to launch Cloud Shell in the right pane. Select **Copy** button to copy the command, paste it in the Cloud Shell window, and then press ENTER to run the command.
+Select **Open Cloud Shell** to open Azure Cloud Shell on the right pane. Select the **Copy** button to copy the command, paste it in Cloud Shell, and then select the Enter key to run the command.
```azurecli-interactive az group create --name gridResourceGroup --location westus2
az group create --name gridResourceGroup --location westus2
## Create a custom topic
-An Event Grid topic provides a user-defined endpoint that you post your events to. The following example creates the custom topic in your resource group. Replace `<topic_name>` with a unique name for your custom topic. The Event Grid topic name must be unique because it's represented by a Domain Name System (DNS) entry.
+An Event Grid topic provides a user-defined endpoint that you post your events to. The following example creates the custom topic in your resource group.
-1. Specify a name for the topic.
+Replace `<TOPIC NAME>` with a unique name for your custom topic. The Event Grid topic name must be unique because a Domain Name System (DNS) entry represents it.
+
+1. Specify a name for the topic:
```azurecli-interactive topicname="<TOPIC NAME>"
- ```
-1. Run the following command to create the topic.
+ ```
+
+1. Run the following command to create the topic:
```azurecli-interactive az eventgrid topic create --name $topicname -l westus2 -g gridResourceGroup
An Event Grid topic provides a user-defined endpoint that you post your events t
## Create an event hub
-Before subscribing to the custom topic, let's create the endpoint for the event message. You create an event hub for collecting the events.
+Before you subscribe to the custom topic, create the endpoint for the event message. You create an event hub for collecting the events.
-1. Specify a unique name for the Event Hubs namespace.
+1. Specify a unique name for the Event Hubs namespace:
```azurecli-interactive namespace="<EVENT HUBS NAMESPACE NAME>" ```
-1. Run the following commands to create an Event Hubs namespace and an event hub named `demohub` in that namespace.
+1. Run the following commands to create an Event Hubs namespace and an event hub named `demohub` in that namespace:
```azurecli-interactive hubname=demohub
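   # A minimal sketch of the namespace and event hub creation commands
   # (assuming the variable names used in this quickstart):
   az eventhubs namespace create --name $namespace --resource-group gridResourceGroup --location westus2
   az eventhubs eventhub create --name $hubname --namespace-name $namespace --resource-group gridResourceGroup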
Before subscribing to the custom topic, let's create the endpoint for the event
## Subscribe to a custom topic
-You subscribe to an Event Grid topic to tell Event Grid which events you want to track. The following example subscribes to the custom topic you created, and passes the resource ID of the event hub for the endpoint. The endpoint is in the format:
+You subscribe to an Event Grid topic to tell Event Grid which events you want to track. The following example subscribes to the custom topic that you created, and it passes the resource ID of the event hub for the endpoint. The endpoint is in this format:
`/subscriptions/<AZURE SUBSCRIPTION ID>/resourceGroups/<RESOURCE GROUP NAME>/providers/Microsoft.EventHub/namespaces/<NAMESPACE NAME>/eventhubs/<EVENT HUB NAME>`
-The following script gets the resource ID for the event hub, and subscribes to an Event Grid topic. It sets the endpoint type to `eventhub` and uses the event hub ID for the endpoint.
+The following script gets the resource ID for the event hub and subscribes to an Event Grid topic. It sets the endpoint type to `eventhub` and uses the event hub ID for the endpoint.
```azurecli-interactive hubid=$(az eventhubs eventhub show --name $hubname --namespace-name $namespace --resource-group gridResourceGroup --query id --output tsv)
The account that creates the event subscription must have write access to the ev
## Send an event to your custom topic
-Let's trigger an event to see how Event Grid distributes the message to your endpoint. First, let's get the URL and key for the custom topic.
+Trigger an event to see how Event Grid distributes the message to your endpoint. First, get the URL and key for the custom topic:
```azurecli-interactive endpoint=$(az eventgrid topic show --name $topicname -g gridResourceGroup --query "endpoint" --output tsv) key=$(az eventgrid topic key list --name $topicname -g gridResourceGroup --query "key1" --output tsv) ```
-To simplify this article, you use sample event data to send to the custom topic. Typically, an application or Azure service would send the event data. CURL is a utility that sends HTTP requests. In this article, use CURL to send the event to the custom topic. The following example sends three events to the Event Grid topic:
+For the sake of simplicity in this article, you send sample event data to the custom topic. Typically, an application or an Azure service would send the event data.
+
+The cURL tool sends HTTP requests. In this article, you use cURL to send the event to the custom topic. The following example sends three events to the Event Grid topic:
```azurecli-interactive for i in 1 2 3
do
done ```
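Each iteration of the loop posts a JSON event envelope to the topic endpoint, authenticating with the access key in the `aeg-sas-key` header. A minimal sketch of a single such request, with illustrative field values:

```azurecli-interactive
# One sample event in the Event Grid schema (all values are illustrative).
event='[{"id":"'"$RANDOM"'","eventType":"recordInserted","subject":"myapp/vehicles/motorcycles","eventTime":"'"$(date -u +%Y-%m-%dT%H:%M:%SZ)"'","data":{"make":"Contoso","model":"Monster"},"dataVersion":"1.0"}]'
curl -X POST -H "aeg-sas-key: $key" -d "$event" "$endpoint"
```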
-On the **Overview** page for your Event Hubs namespace in the Azure portal, notice that Event Grid sent those three events to the event hub. You see the same chart on the **Overview** page for the `demohub` Event Hubs instance page.
+In the Azure portal, on the **Overview** page for your Event Hubs namespace, notice that Event Grid sent those three events to the event hub. You see the same chart on the **Overview** page for the `demohub` Event Hubs instance.
-Typically, you create an application that retrieves the events from the event hub. To create an application that gets messages from an event hub, see:
+Typically, you create an application that retrieves event messages from the event hub. For more information, see:
-* [Get started receiving messages with the Event Processor Host in .NET Standard](../event-hubs/event-hubs-dotnet-standard-getstarted-send.md)
-* [Receive events from Azure Event Hubs using Java](../event-hubs/event-hubs-java-get-started-send.md)
-* [Receive events from Event Hubs using Apache Storm](../event-hubs/event-hubs-storm-getstarted-receive.md)
+- [Get started receiving messages with the event processor host in .NET Standard](../event-hubs/event-hubs-dotnet-standard-getstarted-send.md)
+- [Receive events from Azure Event Hubs by using Java](../event-hubs/event-hubs-java-get-started-send.md)
+- [Receive events from Event Hubs by using Apache Storm](../event-hubs/event-hubs-storm-getstarted-receive.md)
## Clean up resources
-If you plan to continue working with this event, don't clean up the resources created in this article. Otherwise, use the following command to delete the resources you created in this article.
+
+If you plan to continue working with this event, don't clean up the resources that you created in this article. Otherwise, use the following command to delete the resources:
```azurecli-interactive az group delete --name gridResourceGroup ```
-## Next steps
+## Related content
Now that you know how to create topics and event subscriptions, learn more about what Event Grid can help you do: - [About Event Grid](overview.md)
-- [Route Blob storage events to a custom web endpoint](../storage/blobs/storage-blob-event-quickstart.md?toc=%2fazure%2fevent-grid%2ftoc.json)
+- [Route Azure Blob Storage events to a custom web endpoint](../storage/blobs/storage-blob-event-quickstart.md?toc=%2fazure%2fevent-grid%2ftoc.json)
- [Monitor virtual machine changes with Azure Event Grid and Logic Apps](monitor-virtual-machine-changes-logic-app.md) - [Stream big data into a data warehouse](event-hubs-integration.md)
-See the following samples to learn about publishing events to and consuming events from Event Grid using different programming languages.
+To learn about publishing events to, and consuming events from, Event Grid by using various programming languages, see the following samples:
- [Azure Event Grid samples for .NET](/samples/azure/azure-sdk-for-net/azure-event-grid-sdk-samples/)
- [Azure Event Grid samples for Java](/samples/azure/azure-sdk-for-java/eventgrid-samples/)
event-grid Custom Event To Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/custom-event-to-function.md
Title: 'Quickstart: Send custom events to Azure Function - Event Grid'
-description: 'Quickstart: Use Azure Event Grid and Azure CLI or portal to publish a topic, and subscribe to that event. An Azure Function is used for the endpoint.'
+ Title: 'Quickstart: Send custom events to an Azure function - Event Grid'
+description: Learn how to use Azure Event Grid and the Azure CLI or portal to publish a topic and subscribe to that event, by using an Azure function for the endpoint.
Last updated 04/24/2024 ms.devlang: azurecli
-# Quickstart: Route custom events to an Azure Function with Event Grid
+# Quickstart: Route custom events to an Azure function by using Event Grid
-[Azure Event Grid](overview.md) is an eventing service for the cloud. Azure Functions is one of the [supported event handlers](event-handlers.md). In this article, you use the Azure portal to create a custom topic, subscribe to the custom topic, and trigger the event to view the result. You send the events to an Azure Function.
+[Azure Event Grid](overview.md) is an event-routing service for the cloud. Azure Functions is one of the [supported event handlers](event-handlers.md).
+In this quickstart, you use the Azure portal to create a custom topic, subscribe to the custom topic, and trigger the event to view the result. You send the events to an Azure function.
-## Create an Azure function with Azure Event Grid trigger using Visual Studio Code
-In this section, you use Visual Studio Code to create an Azure function with an Azure Event Grid trigger.
+
+## Create a function with an Event Grid trigger by using Visual Studio Code
+
+In this section, you use Visual Studio Code to create a function with an Event Grid trigger.
### Prerequisites
-* [Visual Studio Code](https://code.visualstudio.com/) installed on one of the [supported platforms](https://code.visualstudio.com/docs/supporting/requirements#_platforms).
-* [Azure Functions extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions).
+* [Visual Studio Code](https://code.visualstudio.com/) installed on one of the [supported platforms](https://code.visualstudio.com/docs/supporting/requirements#_platforms)
+* [Azure Functions extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions)
+
+### Create a function
+
+1. Open Visual Studio Code.
+
+1. On the left bar, select **Azure**.
+
+1. On the left pane, in the **WORKSPACE** section, select the **Azure Functions** button on the command bar, and then select **Create Function**.
+
+ :::image type="content" source="./media/custom-event-to-function/visual-studio-code-new-function-menu.png" alt-text="Screenshot that shows the Azure tab of Visual Studio Code with the menu command for creating a function.":::
-### Create an Azure function
+1. Select a folder where you want to save the function code.
-1. Launch Visual Studio Code.
-1. On the left bar, select **Azure**.
-1. In the left pane, In the **WORKSPACE** section, select Azure Functions button on the command bar, and then select **Create Function**.
+1. For the **Create new project** command, for **Language**, select **C#**, and then select the Enter key.
- :::image type="content" source="./media/custom-event-to-function/visual-studio-code-new-function-menu.png" alt-text="Screenshot that shows the Azure tab of Visual Studio Code with New function menu selected.":::
-1. Select a folder where you want the Azure function code to be saved.
-1. For the **Create new project** command, for **language**, select **C#**, and press ENTER.
+ :::image type="content" source="./media/custom-event-to-function/select-function-language.png" alt-text="Screenshot that shows the selection of C Sharp as the language for developing an Azure function." lightbox="./media/custom-event-to-function/select-function-language.png":::
+1. For **.NET runtime**, select **.NET 8.0 Isolated LTS**, and then select the Enter key.
- :::image type="content" source="./media/custom-event-to-function/select-function-language.png" alt-text="Screenshot that shows the selection of C# for the language used to develop the Azure function." lightbox="./media/custom-event-to-function/select-function-language.png":::
-1. For **.NET runtime**, select **.NET 8.0 Isolated LTS**, and press ENTER.
-1. For the **template for the function**, select **Azure Event Grid trigger**, and press ENTER.
-1. For **function name**, enter a name for your Azure function, and press ENTER.
-1. Enter a name for the **namespace** for the function, and press ENTER.
-1. Open the project in the current window or a new window or add to a workspace.
-1. Wait for the function to be created. You see the status of the function creation in the bottom-right corner.
-
- :::image type="content" source="./media/custom-event-to-function/function-creation-status.png" alt-text="Screenshot that shows the status of the function creation." lightbox="./media/custom-event-to-function/function-creation-status.png":::
-1. View the code in the YourFunctionName.cs file, specifically the `Run` method. It prints the information using a logger.
+1. For **Template for the function**, select **Azure Event Grid trigger**, and then select the Enter key.
+
+1. For **Function name**, enter a name for your function, and then select the Enter key.
+
+1. For **Namespace**, enter a name for the function's namespace, and then select the Enter key.
+
+1. Open the project in the current window or a new window, or add it to a workspace.
+
+1. Wait for the function to be created. The status of the function creation appears in the lower-right corner.
+
+ :::image type="content" source="./media/custom-event-to-function/function-creation-status.png" alt-text="Screenshot that shows the status of function creation." lightbox="./media/custom-event-to-function/function-creation-status.png":::
+1. View the code in the *YourFunctionName.cs* file, specifically the `Run` method. It prints the information by using a logger.
```csharp
[Function(nameof(MyEventGridTriggerFunc))]
public void Run([EventGridTrigger] CloudEvent cloudEvent)
{
    _logger.LogInformation("Event type: {type}, Event subject: {subject}", cloudEvent.Type, cloudEvent.Subject);
}
```
In this section, you use Visual Studio Code to create an Azure function with an
### Deploy the function to Azure
-1. Select the Azure button on the left bar if it's not already open.
-1. Hover the mouse over your project, and select the **Deploy to Azure** button.
+1. Select the **Azure** button on the left bar if the **Azure** pane isn't already open.
+
+1. Hover over your project and select the **Deploy to Azure** button.
- :::image type="content" source="./media/custom-event-to-function/deploy-to-azure-button.png" alt-text="Screenshot that shows selection of the Deploy to Azure button." lightbox="./media/custom-event-to-function/deploy-to-azure-button.png":::
-1. In the drop-down of the command palette, select **+ Create new function app**, and press ENTER.
-1. Enter a **globally unique name** for the new function app, and press ENTER.
-1. For **runtime stack**, select **.NET 8 Isolated**.
-1. For **location** for your Azure resources, select a region that's close to you.
-1. Now, you see the status of Azure Functions app creation in the **AZURE** tab of the bottom pane. After the function app is created, you see the status of deploying the Azure function you created locally to the Functions app you created.
-1. After the deployment succeeds, expand the **Create Function App succeeded** message and select **Click to view resource**. You see that your Azure function is selected in the **RESOURCES** section on the left pane.
-1. Right-click on your Azure function, and select **Open in Portal**.
+ :::image type="content" source="./media/custom-event-to-function/deploy-to-azure-button.png" alt-text="Screenshot that shows the button for deploying to Azure." lightbox="./media/custom-event-to-function/deploy-to-azure-button.png":::
+1. In the dropdown list of the command palette, select **+ Create new function app**, and then select the Enter key.
- :::image type="content" source="./media/custom-event-to-function/click-to-view-functions-app.png" alt-text="Screenshot that shows the selection of Click to view resource in the AZURE tab in the bottom pane." lightbox="./media/custom-event-to-function/click-to-view-functions-app.png":::
-1. Sign-in to Azure if needed, and you should see the **Function App** page for your Azure function.
-1. Select your **function** in the bottom page as shown in the following image.
+1. For **Name**, enter a globally unique name for the new function app, and then select the Enter key.
+
+1. For **Runtime stack**, select **.NET 8 Isolated**.
+
+1. For **Location** for your Azure resources, select a region that's close to you.
+
+1. The status of function app creation appears on the **AZURE** tab of the bottom pane. After the function app is created, you see the status of deploying the function that you created locally to the function app.
+
+1. After the deployment succeeds, expand the **Create Function App succeeded** message and select **Click to view resource**. Confirm that your function is selected in the **RESOURCES** section on the left pane.
+
+1. Right-click your function, and then select **Open in Portal**.
+
+ :::image type="content" source="./media/custom-event-to-function/click-to-view-functions-app.png" alt-text="Screenshot that shows selections for opening a function in the portal." lightbox="./media/custom-event-to-function/click-to-view-functions-app.png":::
+1. Sign in to Azure if necessary, and confirm that the **Function App** page appears for your function.
+
+1. On the bottom pane, select your function.
:::image type="content" source="./media/custom-event-to-function/select-function.png" alt-text="Screenshot that shows the selection of an Azure function on the Function App page." lightbox="./media/custom-event-to-function/select-function.png":::
-1. Switch to the **Logs** tab and keep this tab or window open so that you can see logged messages when you send an event to an Event Grid later in this tutorial.
+1. Switch to the **Logs** tab. Keep this tab open so that you can see logged messages when you send an event to an Event Grid topic later in this tutorial.
- :::image type="content" source="./media/custom-event-to-function/function-logs-window.png" alt-text="Screenshot that shows Logs tab of an Azure function in the Azure portal." lightbox="./media/custom-event-to-function/function-logs-window.png":::
+ :::image type="content" source="./media/custom-event-to-function/function-logs-window.png" alt-text="Screenshot that shows the Logs tab for a function in the Azure portal." lightbox="./media/custom-event-to-function/function-logs-window.png":::
## Create a custom topic
-An Event Grid topic provides a user-defined endpoint that you post your events to.
+An Event Grid topic provides a user-defined endpoint that you post your events to.
-1. On a new tab of the web browser window, sign in to [Azure portal](https://portal.azure.com/).
-2. In the search bar at the topic, search for **Event Grid Topics**, and select **Event Grid Topics**.
+1. On a new tab of the web browser window, sign in to the [Azure portal](https://portal.azure.com/).
- :::image type="content" source="./media/custom-event-to-function/select-topics.png" alt-text="Image showing the selection of Event Grid topics." lightbox="./media/custom-event-to-function/select-topics.png" :::
-3. On the **Event Grid Topics** page, select **+ Create** on the command bar.
+1. In the search bar at the top, search for **Event Grid Topics**, and then select **Event Grid Topics**.
- :::image type="content" source="./media/custom-event-to-function/add-topic-button.png" alt-text="Screenshot showing the Create button to create an Event Grid topic." lightbox="./media/custom-event-to-function/add-topic-button.png":::
-4. On the **Create Topic** page, follow these steps:
- 1. Select your **Azure subscription**.
- 2. Select the same **resource group** from the previous steps.
- 3. Provide a unique **name** for the custom topic. The topic name must be unique because it's represented by a DNS entry. Don't use the name shown in the image. Instead, create your own name - it must be between 3-50 characters and contain only values a-z, A-Z, 0-9, and `-`.
- 4. Select a **location** for the Event Grid topic.
- 5. Select **Review + create**.
-
- :::image type="content" source="./media/custom-event-to-function/create-custom-topic.png" alt-text="Screenshot showing the Create Topic page.":::
- 1. On the **Review + create** page, review settings and select **Create**.
-5. After the custom topic has been created, select **Go to resource** link to see the following Event Grid topic page for the topic you created.
+ :::image type="content" source="./media/custom-event-to-function/select-topics.png" alt-text="Screenshot that shows the selection of Event Grid topics." lightbox="./media/custom-event-to-function/select-topics.png" :::
+1. On the **Topics** page, select **+ Create** on the command bar.
+
+ :::image type="content" source="./media/custom-event-to-function/add-topic-button.png" alt-text="Screenshot that shows the button for creating an Event Grid topic." lightbox="./media/custom-event-to-function/add-topic-button.png":::
+1. On the **Create Topic** pane, follow these steps:
+
+ 1. For **Subscription**, select your Azure subscription.
+ 1. For **Resource group**, select the same resource group from the previous steps.
+ 1. For **Name**, provide a unique name for the custom topic. The topic name must be unique because a Domain Name System (DNS) entry represents it.
+
+ Don't use the name shown in the example image. Instead, create your own name. It must be 3-50 characters and contain only the values a-z, A-Z, 0-9, and a hyphen (`-`).
+ 1. For **Region**, select a location for the Event Grid topic.
+ 1. Select **Review + create**.
+
+ :::image type="content" source="./media/custom-event-to-function/create-custom-topic.png" alt-text="Screenshot that shows the pane for creating a topic.":::
+ 1. On the **Review + create** tab, review settings and then select **Create**.
+1. After the custom topic is created, select the **Go to resource** link to open the **Event Grid Topic** page for that topic.
- :::image type="content" source="./media/custom-event-to-function/topic-home-page.png" lightbox="./media/custom-event-to-function/topic-home-page.png" alt-text="Image showing the home page for your Event Grid custom topic.":::
+ :::image type="content" source="./media/custom-event-to-function/topic-home-page.png" lightbox="./media/custom-event-to-function/topic-home-page.png" alt-text="Screenshot that shows the page for an Event Grid custom topic.":::
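If you prefer scripting to the portal steps above, a custom topic can also be created with the Azure CLI. The following is an illustrative sketch rather than part of the original quickstart; `<topic-name>`, `<resource-group>`, and `<region>` are placeholders for your own values:

```azurecli
# Sketch: create an Event Grid custom topic from the CLI (placeholder values)
az eventgrid topic create --name <topic-name> --resource-group <resource-group> --location <region>
```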
-## Subscribe to custom topic
+## Subscribe to a custom topic
You subscribe to an Event Grid topic to tell Event Grid which events you want to track, and where to send the events.
-1. Now, on the **Event Grid Topic** page for your custom topic, select **+ Event Subscription** on the toolbar.
+1. On the **Event Grid Topic** page for your custom topic, select **+ Event Subscription** on the toolbar.
- :::image type="content" source="./media/custom-event-to-function/new-event-subscription.png" alt-text="Image showing the selection of Add Event Subscription on the toolbar." lightbox="./media/custom-event-to-function/new-event-subscription.png":::
-2. On the **Create Event Subscription** page, follow these steps:
- 1. Enter a **name** for the event subscription.
+ :::image type="content" source="./media/custom-event-to-function/new-event-subscription.png" alt-text="Screenshot that shows the button for adding an event subscription on the toolbar." lightbox="./media/custom-event-to-function/new-event-subscription.png":::
+1. On the **Create Event Subscription** pane, follow these steps:
+
+ 1. For **Name**, enter a name for the event subscription.
1. For **Event Schema**, select **Cloud Event Schema v1.0**.
- 1. Select **Azure Function** for the **Endpoint type**.
- 1. Choose **Configure an endpoint**.
-
- :::image type="content" source="./media/custom-event-to-function/provide-subscription-values.png" alt-text="Image showing event subscription values.":::
- 5. On the **Select Azure Function** page, follow these steps:
- 1. Select the **Azure Subscription** that has the Azure function.
- 1. Select the **resource group** that has the function.
- 1. Select your Azure **Functions app**.
- 1. Select the Azure **function** in the Functions app.
+ 1. For **Endpoint Type**, select **Azure Function**.
+ 1. Select **Configure an endpoint**.
+
+ :::image type="content" source="./media/custom-event-to-function/provide-subscription-values.png" alt-text="Screenshot that shows event subscription values.":::
+ 1. On the **Select Azure Function** pane, follow these steps:
+ 1. For **Subscription**, select the Azure subscription that has the function.
+ 1. For **Resource group**, select the resource group that has the function.
+ 1. For **Function app**, select your function app.
+ 1. For **Function**, select the function in the function app.
1. Select **Confirm Selection**.
- :::image type="content" source="./media/custom-event-to-function/provide-endpoint.png" alt-text="Image showing the Select Azure Function page showing the selection of function you created earlier.":::
- 6. This step is optional, but recommended for production scenarios. On the **Create Event Subscription** page, switch to the **Advanced Features** tab, and set values for **Max events per batch** and **Preferred batch size in kilobytes**.
-
- Batching can give you high-throughput. For **Max events per batch**, set the maximum number of events that a subscription will include in a batch. Preferred batch size sets the preferred upper bound of batch size in kilo bytes, but can be exceeded if a single event is larger than this threshold.
-
- :::image type="content" source="./media/custom-event-to-function/enable-batching.png" alt-text="Image showing batching settings for an event subscription.":::
- 6. On the **Create Event Subscription** page, select **Create**.
+ :::image type="content" source="./media/custom-event-to-function/provide-endpoint.png" alt-text="Screenshot that shows the pane for selecting a previously created Azure function.":::
+ 1. This step is optional, but we recommend it for production scenarios. On the **Create Event Subscription** pane, go to the **Additional Features** tab and set values for **Max events per batch** and **Preferred batch size in kilobytes**.
+
+      Batching can give you high throughput. For **Max events per batch**, set the maximum number of events that a subscription will include in a batch. **Preferred batch size in kilobytes** sets the preferred upper bound of batch size, but it can be exceeded if a single event is larger than this threshold. An Azure CLI sketch of these batching settings appears after these steps.
+
+ :::image type="content" source="./media/custom-event-to-function/enable-batching.png" alt-text="Screenshot that shows batching settings for an event subscription.":::
+ 1. On the **Create Event Subscription** pane, select **Create**.
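As promised in the optional batching step above, here's an Azure CLI sketch of the same settings. It isn't part of the original quickstart; the subscription name and resource IDs are placeholders:

```azurecli
# Sketch: create the event subscription with batching enabled (placeholder IDs)
az eventgrid event-subscription create \
  --name <subscription-name> \
  --source-resource-id <topic-resource-id> \
  --endpoint-type azurefunction \
  --endpoint <function-resource-id> \
  --max-events-per-batch 10 \
  --preferred-batch-size-in-kilobytes 64
```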
## Send an event to your topic
-Now, let's trigger an event to see how Event Grid distributes the message to your endpoint. Use either Azure CLI or PowerShell to send a test event to your custom topic. Typically, an application or Azure service would send the event data.
+Now, trigger an event to see how Event Grid distributes the message to your endpoint. Use either the Azure CLI or Azure PowerShell to send a test event to your custom topic. Typically, an application or an Azure service would send the event data.
+
+The first example uses the Azure CLI. It gets the URL and key for the custom topic, and it creates sample event data. Use your custom topic name for `topicname`.
-The first example uses Azure CLI. It gets the URL and key for the custom topic, and sample event data. Use your custom topic name for `<topic name>`. It creates sample event data. The `data` element of the JSON is the payload of your event. Any well-formed JSON can go in this field. You can also use the subject field for advanced routing and filtering. CURL is a utility that sends HTTP requests.
+The `data` element of the JSON is the payload of your event. Any well-formed JSON can go in this field. You can also use the subject field for advanced routing and filtering.
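To illustrate subject-based routing (a sketch that isn't in the original article; the names and IDs are placeholders), an event subscription can filter on a subject prefix with the Azure CLI:

```azurecli
# Sketch: deliver only events whose subject starts with "myapp/vehicles"
az eventgrid event-subscription create \
  --name <subscription-name> \
  --source-resource-id <topic-resource-id> \
  --endpoint <endpoint-url> \
  --subject-begins-with "myapp/vehicles"
```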
+The cURL tool sends HTTP requests. In this article, you use cURL to send the event to the custom topic.
### Azure CLI
-1. In the Azure portal, select **Cloud Shell**. If you are in the PowerShell mode, select **Switch to Bash**.
- :::image type="content" source="./media/custom-event-quickstart-portal/cloud-shell-bash.png" alt-text="Image showing Cloud Shell - Bash window":::
-1. Set the `topicname` and `resourcegroupname` variables that are used in the commands.
+1. In the Azure portal, select **Cloud Shell**. If you're in Azure PowerShell mode, select **Switch to Bash**.
- Replace `TOPICNAME` with the name of your Event Grid topic.
+ :::image type="content" source="./media/custom-event-quickstart-portal/cloud-shell-bash.png" alt-text="Screenshot that shows the Bash window in Azure Cloud Shell.":::
+1. Set the `topicname` and `resourcegroupname` variables that are used in the commands.
+
+ Replace `TOPICNAME` with the name of your Event Grid topic.
```azurecli
topicname="TOPICNAME"
```
- Replace `RESOURCEGROUPNAME` with the name of the Azure resource group that contains the Event Grid topic.
+ Replace `RESOURCEGROUPNAME` with the name of the Azure resource group that contains the Event Grid topic.
```azurecli
resourcegroupname="RESOURCEGROUPNAME"
```
-1. Run the following command to get the **endpoint** for the topic: After you copy and paste the command, update the **topic name** and **resource group name** before you run the command.
+
+1. Use the following command to get the endpoint for the topic. After you copy and paste the command, update the topic name and resource group name before you run it.
```azurecli
endpoint=$(az eventgrid topic show --name $topicname -g $resourcegroupname --query "endpoint" --output tsv)
```
-2. Run the following command to get the **key** for the custom topic: After you copy and paste the command, update the **topic name** and **resource group** name before you run the command.
+
+1. Use the following command to get the key for the custom topic. After you copy and paste the command, update the topic name and resource group name before you run it.
```azurecli
key=$(az eventgrid topic key list --name $topicname -g $resourcegroupname --query "key1" --output tsv)
```
-3. Copy the following statement with the event definition, and press **ENTER**.
+
+1. Copy the following statement with the event definition, and then select the Enter key.
```json
event='[ {"id": "'"$RANDOM"'", "eventType": "recordInserted", "subject": "myapp/vehicles/motorcycles", "eventTime": "'`date +%Y-%m-%dT%H:%M:%S%z`'", "data":{ "make": "Ducati", "model": "Monster"},"dataVersion": "1.0"} ]'
```
-4. Run the following **Curl** command to post the event:
+
+1. Run the following cURL command to post the event:
```
curl -X POST -H "aeg-sas-key: $key" -d "$event" $endpoint
```
-5. Confirm that you see the message from the Azure function in the **Logs** tab of your Azure function in the Azure portal.
- :::image type="content" source="./media/custom-event-quickstart-portal/function-log-output.png" alt-text="Screenshot that shows the Logs tab of an Azure function." lightbox="./media/custom-event-quickstart-portal/function-log-output.png":::
+1. Confirm that the message from the function appears on the **Logs** tab for your function in the Azure portal.
+
+ :::image type="content" source="./media/custom-event-quickstart-portal/function-log-output.png" alt-text="Screenshot that shows the Logs tab for an Azure function." lightbox="./media/custom-event-quickstart-portal/function-log-output.png":::
### Azure PowerShell
-The second example uses PowerShell to perform similar steps.
-1. In the Azure portal, select **Cloud Shell** (alternatively go to `https://shell.azure.com/`). Select **Switch to PowerShell** in the top-left corner of the Cloud Shell window. See the sample **Cloud Shell** window image in the Azure CLI section.
-2. Set the following variables. After you copy and paste each command, update the **topic name** and **resource group name** before you run the command:
+The second example uses Azure PowerShell to perform similar steps.
+
+1. In the Azure portal, select **Cloud Shell** (or go to the [Azure Cloud Shell page](https://shell.azure.com/)). In the upper-left corner of the Cloud Shell window, select **Switch to PowerShell**.
+
+1. Set the following variables. After you copy and paste each command, update the topic name and resource group name before you run it.
```powershell
$resourceGroupName = "RESOURCEGROUPNAME"
```
The second example uses PowerShell to perform similar steps.
```powershell
$topicName = "TOPICNAME"
```
-3. Run the following commands to get the **endpoint** and the **keys** for the topic:
+
+1. Run the following commands to get the endpoint and the keys for the topic:
```powershell
$endpoint = (Get-AzEventGridTopic -ResourceGroupName $resourceGroupName -Name $topicName).Endpoint
$keys = Get-AzEventGridTopicKey -ResourceGroupName $resourceGroupName -Name $topicName
```
-4. Prepare the event. Copy and run the statements in the Cloud Shell window.
+
+1. Prepare the event. Copy and run these statements in the Cloud Shell window:
```powershell
$eventID = Get-Random 99999

#Date format should be SortableDateTimePattern (ISO 8601)
$eventDate = Get-Date -Format s

-   #Construct body using Hashtable
+   #Construct the body by using a hash table
$htbody = @{
    id= $eventID
    eventType="recordInserted"
    subject="myapp/vehicles/motorcycles"
    eventTime= $eventDate
    data= @{
        make="Ducati"
        model="Monster"
    }
    dataVersion="1.0"
}

-   #Use ConvertTo-Json to convert event body from Hashtable to JSON Object
-   #Append square brackets to the converted JSON payload since they are expected in the event's JSON payload syntax
+   #Use ConvertTo-Json to convert the event body from a hash table to a JSON object
+   #Append square brackets to the converted JSON payload because they're expected in the event's JSON payload syntax
$body = "["+(ConvertTo-Json $htbody)+"]"
```
-5. Use the **Invoke-WebRequest** cmdlet to send the event.
+
+1. Use the `Invoke-WebRequest` cmdlet to send the event:
```powershell
Invoke-WebRequest -Uri $endpoint -Method POST -Body $body -Headers @{"aeg-sas-key" = $keys.Key1}
```
-5. Confirm that you see the message from the Azure function in the **Logs** tab of your Azure function in the Azure portal.
- :::image type="content" source="./media/custom-event-quickstart-portal/function-log-output.png" alt-text="Screenshot that shows the Logs tab of an Azure function." lightbox="./media/custom-event-quickstart-portal/function-log-output.png":::
+1. Confirm that the message from the function appears on the **Logs** tab for your function in the Azure portal.
+
+ :::image type="content" source="./media/custom-event-quickstart-portal/function-log-output.png" alt-text="Screenshot that shows the Logs tab for a function." lightbox="./media/custom-event-quickstart-portal/function-log-output.png":::
-### Verify that function received the event
-You've triggered the event, and Event Grid sent the message to the endpoint you configured when subscribing.
+### Verify that the function received the event
-1. On the **Monitor** page for your Azure function, you see an invocation.
+You triggered the event, and Event Grid sent the message to the endpoint that you configured when subscribing. Now you can check whether the function received it.
- :::image type="content" source="./media/custom-event-to-function/monitor-page-invocations.png" alt-text="Screenshot showing the Invocations tab of the Monitor page.":::
-2. Select the invocation to see the details.
+1. On the **Monitor** page for your function, find an invocation.
- :::image type="content" source="./media/custom-event-to-function/invocation-details-page.png" alt-text="Screenshot showing the Invocation details.":::
-3. You can also use the **Logs** tab in the right pane to see the logged messages when you post events to the topic's endpoint.
+ :::image type="content" source="./media/custom-event-to-function/monitor-page-invocations.png" alt-text="Screenshot that shows the Invocations tab of the Monitor page.":::
+1. Select the invocation to display the details.
- :::image type="content" source="./media/custom-event-to-function/successful-function.png" lightbox="./media/custom-event-to-function/successful-function.png" alt-text="Image showing the Monitor view of the Azure function with a log.":::
+ :::image type="content" source="./media/custom-event-to-function/invocation-details-page.png" alt-text="Screenshot that shows invocation details.":::
+
+ You can also use the **Logs** tab on the right pane to see the logged messages when you post events to the topic's endpoint.
+
+ :::image type="content" source="./media/custom-event-to-function/successful-function.png" lightbox="./media/custom-event-to-function/successful-function.png" alt-text="Screenshot that shows the Monitor view of a function with a log.":::
## Clean up resources
-If you plan to continue working with this event, don't clean up the resources created in this article. Otherwise, delete the resources you created in this article.
-1. Select **Resource Groups** on the left menu. If you don't see it on the left menu, select **All Services** on the left menu, and select **Resource Groups**.
-2. Select the resource group to launch the **Resource Group** page.
-3. Select **Delete resource group** on the toolbar.
-4. Confirm deletion by entering the name of the resource group, and select **Delete**.
+If you plan to continue working with this event, don't clean up the resources that you created in this article. Otherwise, delete the resources that you created in this article.
+
+1. On the left menu, select **Resource groups**.
+
+ ![Screenshot that shows the page for resource groups](./media/custom-event-to-function/delete-resource-groups.png)
+
+ An alternative is to select **All Services** on the left menu, and then select **Resource groups**.
+1. Select the resource group to open the pane for its details.
+
+1. On the toolbar, select **Delete resource group**.
- ![Resource groups](./media/custom-event-to-function/delete-resource-groups.png)
+1. Confirm the deletion by entering the name of the resource group, and then select **Delete**.
- The other resource group you see in the image was created and used by the Cloud Shell window. Delete it if you don't plan to use the Cloud Shell window later.
+The Cloud Shell window created and used the other resource group that appears on the **Resource groups** page. Delete this resource group if you don't plan to use the Cloud Shell window later.
-## Next steps
+## Related content
Now that you know how to create topics and event subscriptions, learn more about what Event Grid can help you do:

-- [About Event Grid](overview.md)
-- [Route Blob storage events to a custom web endpoint](../storage/blobs/storage-blob-event-quickstart.md?toc=%2fazure%2fevent-grid%2ftoc.json)
-- [Monitor virtual machine changes with Azure Event Grid and Logic Apps](monitor-virtual-machine-changes-logic-app.md)
-- [Stream big data into a data warehouse](event-hubs-integration.md)
+* [About Event Grid](overview.md)
+* [Route Azure Blob Storage events to a custom web endpoint](../storage/blobs/storage-blob-event-quickstart.md?toc=%2fazure%2fevent-grid%2ftoc.json)
+* [Monitor virtual machine changes with Azure Event Grid and Logic Apps](monitor-virtual-machine-changes-logic-app.md)
+* [Stream big data into a data warehouse](event-hubs-integration.md)
-See the following samples to learn about publishing events to and consuming events from Event Grid using different programming languages.
+To learn about publishing events to, and consuming events from, Event Grid by using various programming languages, see the following samples:
-- [Azure Event Grid samples for .NET](/samples/azure/azure-sdk-for-net/azure-event-grid-sdk-samples/)
-- [Azure Event Grid samples for Java](/samples/azure/azure-sdk-for-java/eventgrid-samples/)
-- [Azure Event Grid samples for Python](/samples/azure/azure-sdk-for-python/eventgrid-samples/)
-- [Azure Event Grid samples for JavaScript](/samples/azure/azure-sdk-for-js/eventgrid-javascript/)
-- [Azure Event Grid samples for TypeScript](/samples/azure/azure-sdk-for-js/eventgrid-typescript/)
+* [Azure Event Grid samples for .NET](/samples/azure/azure-sdk-for-net/azure-event-grid-sdk-samples/)
+* [Azure Event Grid samples for Java](/samples/azure/azure-sdk-for-java/eventgrid-samples/)
+* [Azure Event Grid samples for Python](/samples/azure/azure-sdk-for-python/eventgrid-samples/)
+* [Azure Event Grid samples for JavaScript](/samples/azure/azure-sdk-for-js/eventgrid-javascript/)
+* [Azure Event Grid samples for TypeScript](/samples/azure/azure-sdk-for-js/eventgrid-typescript/)
event-grid Custom Event To Queue Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/custom-event-to-queue-storage.md
Title: 'Quickstart: Send custom events to storage queue - Event Grid, Azure CLI'
-description: 'Quickstart: Use Azure Event Grid and Azure CLI to publish a topic, and subscribe to that event. A storage queue is used for the endpoint.'
+ Title: 'Quickstart: Send custom events to a queue - Event Grid, Azure CLI'
+description: Learn how to use Azure Event Grid and the Azure CLI to publish a topic and subscribe to that event, by using a queue for the endpoint.
Last updated 01/31/2024
-# Quickstart: Route custom events to Azure Queue storage via Event Grid using Azure CLI
+# Quickstart: Route custom events to a queue by using Event Grid and the Azure CLI
-[Azure Event Grid](overview.md) is a highly scalable and serverless event broker that you can use to integrate applications using events. Event Grid delivers events to [supported event handlers](event-handlers.md) and Azure Queue storage is one of them. In this article, you use Azure CLI for the following steps:
+[Azure Event Grid](overview.md) is a highly scalable and serverless event broker that you can use to integrate applications via events. Event Grid delivers events to [supported event handlers](event-handlers.md), and Azure Queue storage is one of them.
-1. Create an Event Grid custom topic.
-1. Create an Azure Queue subscription for the custom topic.
-1. Send sample events to the custom topic.
-1. Verify that those events are delivered to Azure Queue storage.
+In this quickstart, you use the Azure CLI to create an Event Grid custom topic and a Queue Storage subscription for that topic. You then send sample events to the custom topic and verify that those events are delivered to a queue.
## Create a resource group
-Event Grid topics are Azure resources, and must be placed in an Azure resource group. The resource group is a logical collection into which Azure resources are deployed and managed.
+Event Grid topics are Azure resources, and they must be placed in an Azure resource group. The resource group is a logical collection into which Azure resources are deployed and managed.
-Create a resource group with the [az group create](/cli/azure/group#az-group-create) command. The following example creates a resource group named **gridResourceGroup** in the **westus2** location.
+Create a resource group by using the [az group create](/cli/azure/group#az-group-create) command. The following example creates a resource group named `gridResourceGroup` in the `westus2` location.
-> [!NOTE]
-> Select **Try it** next to the CLI example to launch Cloud Shell in the right pane. Select **Copy** button to copy the command, paste it in the Cloud Shell window, and then press ENTER to run the command.
+Select **Open Cloud Shell** to open Azure Cloud Shell on the right pane. Select the **Copy** button to copy the command, paste it in Cloud Shell, and then select the Enter key to run the command.
```azurecli-interactive
az group create --name gridResourceGroup --location westus2
```
az group create --name gridResourceGroup --location westus2
## Create a custom topic
-An Event Grid topic provides a user-defined endpoint that you post your events to. The following example creates the custom topic in your resource group. Replace `<topic_name>` with a unique name for your custom topic. The Event Grid topic name must be unique because it's represented by a Domain Name System (DNS) entry.
+An Event Grid topic provides a user-defined endpoint that you post your events to. The following example creates the custom topic in your resource group.
-1. Specify a name for the topic.
+Replace `<TOPIC NAME>` with a unique name for your custom topic. The Event Grid topic name must be unique because a Domain Name System (DNS) entry represents it.
+
+1. Specify a name for the topic:
```azurecli-interactive
topicname="<TOPIC NAME>"
- ```
-1. Run the following command to create the topic.
+ ```
+
+1. Run the following command to create the topic:
```azurecli-interactive
az eventgrid topic create --name $topicname -l westus2 -g gridResourceGroup
```
-## Create Queue storage
+## Create a queue
-Before subscribing to the custom topic, let's create the endpoint for the event message. You create a Queue storage for collecting the events.
+Before you subscribe to the custom topic, create the endpoint for the event message. You create a queue for collecting the events.
-1. Specify a unique name for the Azure Storage account.
+1. Specify a unique name for the Azure storage account:
```azurecli-interactive
storagename="<STORAGE ACCOUNT NAME>"
```
-1. Run the following commands to create an Azure Storage account and a queue (named `eventqueue`) in the storage.
+
+1. Run the following commands to create a storage account and a queue (named `eventqueue`) in the storage:
```azurecli-interactive
queuename="eventqueue"

az storage account create -n $storagename -g gridResourceGroup -l westus2 --sku Standard_LRS
az storage queue create --name $queuename --account-name $storagename
```
Before subscribing to the custom topic, let's create the endpoint for the event
## Subscribe to a custom topic
-The following example subscribes to the custom topic you created, and passes the resource ID of the Queue storage for the endpoint. With Azure CLI, you pass the Queue storage ID as the endpoint. The endpoint is in the format:
+The following example subscribes to the custom topic that you created, and it passes the resource ID of the queue for the endpoint. With the Azure CLI, you pass the queue ID as the endpoint. The endpoint is in this format:
`/subscriptions/<AZURE SUBSCRIPTION ID>/resourcegroups/<RESOURCE GROUP NAME>/providers/Microsoft.Storage/storageAccounts/<STORAGE ACCOUNT NAME>/queueservices/default/queues/<QUEUE NAME>`
-The following script gets the resource ID of the storage account for the queue. It constructs the ID for the queue storage, and subscribes to an Event Grid topic. It sets the endpoint type to `storagequeue` and uses the queue ID for the endpoint.
+The following script gets the resource ID of the storage account for the queue. It constructs the queue ID and subscribes to an Event Grid topic. It sets the endpoint type to `storagequeue` and uses the queue ID for the endpoint.
-
-> [!IMPORTANT]
-> Replace expiration date placeholder (`<yyyy-mm-dd>`) with an actual value. For example: `2022-11-17` before running the command.
+Before you run the command, replace the placeholder for the [expiration date](concepts.md#event-subscription-expiration) (`<yyyy-mm-dd>`) with an actual value for the year, month, and day.
```azurecli-interactive
storageid=$(az storage account show --name $storagename --resource-group gridResourceGroup --query id --output tsv)
queueid="$storageid/queueservices/default/queues/$queuename"

topicid=$(az eventgrid topic show --name $topicname -g gridResourceGroup --query id --output tsv)

az eventgrid event-subscription create \
  --source-resource-id $topicid \
  --name <event_subscription_name> \
  --endpoint-type storagequeue \
  --endpoint $queueid \
  --expiration-date "<yyyy-mm-dd>"
```
-The account that creates the event subscription must have write access to the queue storage. Notice that an [expiration date](concepts.md#event-subscription-expiration) is set for the subscription.
+The account that creates the event subscription must have write access to the queue. Notice that an expiration date is set for the subscription.
-If you use the REST API to create the subscription, you pass the ID of the storage account and the name of the queue as a separate parameter.
+If you use the REST API to create the subscription, you pass the ID of the storage account and the name of the queue as a separate parameter:
```json "destination": {
If you use the REST API to create the subscription, you pass the ID of the stora
## Send an event to your custom topic
-Let's trigger an event to see how Event Grid distributes the message to your endpoint. First, let's get the URL and key for the custom topic.
+Trigger an event to see how Event Grid distributes the message to your endpoint. First, get the URL and key for the custom topic:
```azurecli-interactive
endpoint=$(az eventgrid topic show --name $topicname -g gridResourceGroup --query "endpoint" --output tsv)
key=$(az eventgrid topic key list --name $topicname -g gridResourceGroup --query "key1" --output tsv)
```
-To simplify this article, you use sample event data to send to the custom topic. Typically, an application or Azure service would send the event data. CURL is a utility that sends HTTP requests. In this article, you use CURL to send the event to the custom topic. The following example sends three events to the Event Grid topic:
+For the sake of simplicity in this article, you send sample event data to the custom topic. Typically, an application or an Azure service would send the event data.
+
+The cURL tool sends HTTP requests. In this article, you use cURL to send the event to the custom topic. The following example sends three events to the Event Grid topic:
```azurecli-interactive
for i in 1 2 3
do
   event='[ {"id": "'"$RANDOM"'", "eventType": "recordInserted", "subject": "myapp/vehicles/motorcycles", "eventTime": "'`date +%Y-%m-%dT%H:%M:%S%z`'", "data":{ "make": "Ducati", "model": "Monster"},"dataVersion": "1.0"} ]'
   curl -X POST -H "aeg-sas-key: $key" -d "$event" $endpoint
done
```
-Navigate to the Queue storage in the portal, and notice that Event Grid sent those three events to the queue.
+Go to the queue in the Azure portal, and notice that Event Grid sent those three events to the queue.
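If you'd rather verify delivery from the command line, the following optional sketch (not part of the original quickstart) peeks at the queued messages without dequeuing them. Depending on your setup, you might also need to pass credentials such as an account key:

```azurecli
# Sketch: peek at up to five delivered messages in the queue
az storage message peek --queue-name $queuename --account-name $storagename --num-messages 5
```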
## Clean up resources
-If you plan to continue working with this event, don't clean up the resources created in this article. Otherwise, use the following command to delete the resources you created in this article.
+
+If you plan to continue working with this event, don't clean up the resources that you created in this article. Otherwise, use the following command to delete the resources:
```azurecli-interactive
az group delete --name gridResourceGroup
```
-## Next steps
+## Related content
Now that you know how to create topics and event subscriptions, learn more about what Event Grid can help you do:

- [About Event Grid](overview.md)
-- [Route Blob storage events to a custom web endpoint](../storage/blobs/storage-blob-event-quickstart.md?toc=%2fazure%2fevent-grid%2ftoc.json)
+- [Route Azure Blob Storage events to a custom web endpoint](../storage/blobs/storage-blob-event-quickstart.md?toc=%2fazure%2fevent-grid%2ftoc.json)
- [Monitor virtual machine changes with Azure Event Grid and Logic Apps](monitor-virtual-machine-changes-logic-app.md)
- [Stream big data into a data warehouse](event-hubs-integration.md)
-See the following samples to learn about publishing events to and consuming events from Event Grid using different programming languages.
+To learn about publishing events to, and consuming events from, Event Grid by using various programming languages, see the following samples:
- [Azure Event Grid samples for .NET](/samples/azure/azure-sdk-for-net/azure-event-grid-sdk-samples/)
- [Azure Event Grid samples for Java](/samples/azure/azure-sdk-for-java/eventgrid-samples/)
event-grid Monitor Virtual Machine Changes Logic App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/monitor-virtual-machine-changes-logic-app.md
Last updated 06/10/2022
# Tutorial: Monitor virtual machine changes by using Azure Event Grid and Azure Logic Apps You can monitor and respond to specific events that happen in Azure resources or external resources by using Azure Event Grid and Azure Logic Apps. You can create an automated [Consumption logic app workflow](../logic-apps/logic-apps-overview.md) with minimal code using Azure Logic Apps. You can have these resources publish events to [Azure Event Grid](../event-grid/overview.md). In turn, Azure Event Grid pushes those events to subscribers that have queues, webhooks, or [event hubs](../event-hubs/event-hubs-about.md) as endpoints. As a subscriber, your workflow waits for these events to arrive in Azure Event Grid before running the steps to process the events.
event-grid Publish Deliver Events With Namespace Topics Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/publish-deliver-events-with-namespace-topics-portal.md
The article provides step-by-step instructions to publish events to Azure Event
To be specific, you use Azure portal and Curl to publish events to a namespace topic in Event Grid and push those events from an event subscription to an Event Hubs handler destination. For more information about the push delivery model, see [Push delivery overview](push-delivery-overview.md). ## Create an Event Grid namespace
event-grid Publish Deliver Events With Namespace Topics Webhook Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/publish-deliver-events-with-namespace-topics-webhook-portal.md
The article provides step-by-step instructions to publish events to Azure Event
> [!NOTE] > Azure Event Grid namespaces currently supports Shared Access Signatures (SAS) token and access keys authentication. ## Create an Event Grid namespace
event-grid Publish Deliver Events With Namespace Topics Webhook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/publish-deliver-events-with-namespace-topics-webhook.md
The article provides step-by-step instructions to publish events to Azure Event
> The Azure [CLI Event Grid extension](/cli/azure/eventgrid) doesn't yet support namespaces and any of the resources it contains. We will use [Azure CLI resource](/cli/azure/resource) to create Event Grid resources. ## Prerequisites
event-grid Publish Deliver Events With Namespace Topics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/publish-deliver-events-with-namespace-topics.md
The article provides step-by-step instructions to publish events to Azure Event
> [!NOTE] > The Azure [CLI Event Grid extension](/cli/azure/eventgrid) doesn't yet support namespaces and any of the resources it contains. We will use [Azure CLI resource](/cli/azure/resource) to create Event Grid resources. ## Prerequisites
event-grid Publish Events Namespace Topics Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/publish-events-namespace-topics-portal.md
Then, you use Curl to do the following tasks to test the setup.
1. Receive the event from the subscription. 1. Acknowledge the event in the subscription. ## Create a namespace
event-grid Publish Events Using Namespace Topics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/publish-events-using-namespace-topics.md
Last updated 02/20/2024
This article provides a quick introduction to pull delivery using the ``curl`` bash shell command to publish, receive, and acknowledge events. Event Grid resources are created using CLI commands. This article is suitable for a quick test of the pull delivery functionality. For sample code using the data plane SDKs, see the [.NET](event-grid-dotnet-get-started-pull-delivery.md) or the Java samples. For Java, we provide the sample code in two articles: [publish events](publish-events-to-namespace-topics-java.md) and [receive events](receive-events-from-namespace-topics-java.md) quickstarts. For more information about the pull delivery model, see the [concepts](concepts-event-grid-namespaces.md) and [pull delivery overview](pull-delivery-overview.md) articles. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
event-grid Publish Iot Hub Events To Logic Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/publish-iot-hub-events-to-logic-apps.md
Azure Event Grid enables you to react to events in IoT Hub by triggering actions
This article walks through a sample configuration that uses IoT Hub and Event Grid. At the end, you have an Azure logic app set up to send a notification email every time a device connects or disconnects to your IoT hub. Event Grid can be used to get timely notification about critical devices disconnecting. Metrics and Diagnostics can take several minutes (such as 20 minutes or more) to show up in logs / alerts. Longer processing times might be unacceptable for critical infrastructure. ## Prerequisites
event-grid Query Event Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/query-event-subscriptions.md
This article describes how to list the Event Grid subscriptions in your Azure subscription. When querying your existing Event Grid subscriptions, it's important to understand the different types of subscriptions. You provide different parameters based on the type of subscription you want to get. ## Resource groups and Azure subscriptions
event-grid Cli Subscribe Custom Topic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/scripts/cli-subscribe-custom-topic.md
This article provides a sample Azure CLI script that shows how to create a custom topic and send an event to the custom topic using Azure CLI. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)] ## Sample script ### Run the script
This article provides a sample Azure CLI script that shows how to create a custo
## Clean up resources

```azurecli
az group delete --name $resourceGroup
```
event-grid Powershell Azure Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/scripts/powershell-azure-subscription.md
Last updated 09/15/2021
This script creates an Event Grid subscription to the events for an Azure subscription. ## Sample script - stable [!code-powershell[main](../../../powershell_scripts/event-grid/subscribe-to-azure-subscription/subscribe-to-azure-subscription.ps1 "Subscribe to Azure subscription")]
event-grid Powershell Blob https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/scripts/powershell-blob.md
Last updated 09/15/2021
This script creates an Event Grid subscription to the events for a Blob storage account. ## Sample script - stable
event-grid Powershell Create Custom Topic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/scripts/powershell-create-custom-topic.md
Last updated 09/15/2021
This script creates an Event Grid custom topic. ## Sample script
event-grid Powershell Resource Group Filter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/scripts/powershell-resource-group-filter.md
Last updated 09/15/2021
This script creates an Event Grid subscription to the events for a resource group. It uses a filter to get only events for a specified resource in the resource group. ## Sample script - stable [!code-powershell[main](../../../powershell_scripts/event-grid/filter-events/filter-events.ps1 "Filter events")]
event-grid Powershell Resource Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/scripts/powershell-resource-group.md
Last updated 09/15/2021
This script creates an Event Grid subscription to the events for a resource group. The preview sample script requires the Event Grid module. To install, run `Install-Module -Name AzureRM.EventGrid -AllowPrerelease -Force -Repository PSGallery` ## Sample script - stable [!code-powershell[main](../../../powershell_scripts/event-grid/subscribe-to-resource-group/subscribe-to-resource-group.ps1 "Subscribe to resource group")]
event-grid Powershell Subscribe Custom Topic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/scripts/powershell-subscribe-custom-topic.md
Last updated 09/15/2021
This script creates an Event Grid subscription to the events for a custom topic. The preview sample script requires the Event Grid module. To install, run `Install-Module -Name AzureRM.EventGrid -AllowPrerelease -Force -Repository PSGallery` ## Sample script - stable [!code-powershell[main](../../../powershell_scripts/event-grid/subscribe-to-custom-topic/subscribe-to-custom-topic.ps1 "Subscribe to custom topic")]
event-hubs Authenticate Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/authenticate-application.md
Title: Authenticate an application to access Azure Event Hubs resources
+ Title: Authenticate an application to access resources
description: This article provides information about authenticating an application with Microsoft Entra ID to access Azure Event Hubs resources- Previously updated : 02/08/2023+ Last updated : 06/26/2024
+#customer intent: As a developer, I want to know how to authenticate an application with Azure Event Hubs using Microsoft Entra ID.
# Authenticate an application with Microsoft Entra ID to access Event Hubs resources
-Microsoft Azure provides integrated access control management for resources and applications based on Microsoft Entra ID. A key advantage of using Microsoft Entra ID with Azure Event Hubs is that you don't need to store your credentials in the code anymore. Instead, you can request an OAuth 2.0 access token from the Microsoft identity platform. The resource name to request a token is `https://eventhubs.azure.net/`, and it's the same for all clouds/tenants (For Kafka clients, the resource to request a token is `https://<namespace>.servicebus.windows.net`). Microsoft Entra authenticates the security principal (a user, group, or service principal) running the application. If the authentication succeeds, Microsoft Entra ID returns an access token to the application, and the application can then use the access token to authorize request to Azure Event Hubs resources.
+Microsoft Azure provides integrated access control management for resources and applications based on Microsoft Entra ID. A key advantage of using Microsoft Entra ID with Azure Event Hubs is that you don't need to store your credentials in the code anymore. Instead, you can request an OAuth 2.0 access token from the Microsoft identity platform. The resource name to request a token is `https://eventhubs.azure.net/`, and it's the same for all clouds/tenants (For Kafka clients, the resource to request a token is `https://<namespace>.servicebus.windows.net`). Microsoft Entra authenticates the security principal (a user, group, service principal, or managed identity) running the application. If the authentication succeeds, Microsoft Entra ID returns an access token to the application, and the application can then use the access token to authorize request to Azure Event Hubs resources.
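As a quick illustration (not part of the original article), you can request such a token for the signed-in principal with the Azure CLI, using the resource name mentioned above:

```azurecli
# Sketch: request an OAuth 2.0 access token for Event Hubs
az account get-access-token --resource https://eventhubs.azure.net/
```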
When a role is assigned to a Microsoft Entra security principal, Azure grants access to those resources for that security principal. Access can be scoped to the level of the subscription, the resource group, the Event Hubs namespace, or any resource under it. You can assign roles to a user, a group, an application service principal, or a [managed identity for Azure resources](../active-directory/managed-identities-azure-resources/overview.md).
When a role is assigned to a Microsoft Entra security principal, Azure grants ac
Azure provides the following Azure built-in roles for authorizing access to Event Hubs data using Microsoft Entra ID and OAuth:

- [Azure Event Hubs Data Owner](../role-based-access-control/built-in-roles.md#azure-event-hubs-data-owner): Use this role to give complete access to Event Hubs resources.
-- [Azure Event Hubs Data Sender](../role-based-access-control/built-in-roles.md#azure-event-hubs-data-sender): Use this role to give access to Event Hubs resources.
-- [Azure Event Hubs Data Receiver](../role-based-access-control/built-in-roles.md#azure-event-hubs-data-receiver): Use this role to give receiving access to Event Hubs resources.
+- [Azure Event Hubs Data Sender](../role-based-access-control/built-in-roles.md#azure-event-hubs-data-sender): A security principal assigned to this role can send events to a specific event hub or all event hubs in a namespace.
+- [Azure Event Hubs Data Receiver](../role-based-access-control/built-in-roles.md#azure-event-hubs-data-receiver): A security principal assigned to this role can receive events from a specific event hub or all event hubs in a namespace.
For Schema Registry built-in roles, see [Schema Registry roles](schema-registry-concepts.md#azure-role-based-access-control).
The following sections show you how to configure your native application or web
For an overview of the OAuth 2.0 code grant flow, see [Authorize access to Microsoft Entra web applications using the OAuth 2.0 code grant flow](../active-directory/develop/v2-oauth2-auth-code-flow.md).
-<a name='register-your-application-with-an-azure-ad-tenant'></a>
### Register your application with a Microsoft Entra tenant The first step in using Microsoft Entra ID to authorize access to Event Hubs resources is registering your client application with a Microsoft Entra tenant from the [Azure portal](https://portal.azure.com/). Follow the steps in the [Quickstart: Register an application with the Microsoft identity platform](../active-directory/develop/quickstart-register-app.md) to register an application in Microsoft Entra ID that represents your application trying to access Event Hubs resources.
-When you register your client application, you supply information about the application to AD. Microsoft Entra ID then provides a client ID (also called an application ID) that you can use to associate your application with Microsoft Entra runtime. To learn more about the client ID, see [Application and service principal objects in Microsoft Entra ID](../active-directory/develop/app-objects-and-service-principals.md).
+When you register your client application, you supply information about the application. Microsoft Entra ID then provides a client ID (also called an application ID) that you can use to associate your application with the Microsoft Entra runtime. To learn more about the client ID, see [Application and service principal objects in Microsoft Entra ID](../active-directory/develop/app-objects-and-service-principals.md).
> [!Note] > If you register your application as a native application, you can specify any valid URI for the Redirect URI. For native applications, this value does not have to be a real URL. For web applications, the redirect URI must be a valid URI, because it specifies the URL to which tokens are provided.
-After you've registered your application, you'll see the **Application (client) ID** under **Settings**:
+After you register your application, you see the **Application (client) ID** under **Settings**:
### Create a client secret
The application needs a client secret to prove its identity when requesting a to
## Assign Azure roles using the Azure portal Assign one of the [Event Hubs roles](#built-in-roles-for-azure-event-hubs) to the application's service principal at the desired scope (Event Hubs namespace, resource group, subscription). For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.yml).
-Once you define the role and its scope, you can test this behavior with samples [in this GitHub location](https://github.com/Azure/azure-event-hubs/tree/master/samples/DotNet/Microsoft.Azure.EventHubs/Rbac). To learn more on managing access to Azure resources using Azure RBAC and the Azure portal, see [this article](..//role-based-access-control/role-assignments-portal.yml).
+Once you define the role and its scope, you can test this behavior with samples [in this GitHub location](https://github.com/Azure/azure-event-hubs/tree/master/samples/DotNet/Microsoft.Azure.EventHubs/Rbac). To learn more about managing access to Azure resources using Azure role-based access control (RBAC) and the Azure portal, see [this article](../role-based-access-control/role-assignments-portal.yml).
### Client libraries for token acquisition
-Once you've registered your application and granted it permissions to send/receive data in Azure Event Hubs, you can add code to your application to authenticate a security principal and acquire OAuth 2.0 token. To authenticate and acquire the token, you can use either one of the [Microsoft identity platform authentication libraries](../active-directory/develop/reference-v2-libraries.md) or another open-source library that supports OpenID or Connect 1.0. Your application can then use the access token to authorize a request against Azure Event Hubs.
+After you register your application and grant it permissions to send/receive data in Azure Event Hubs, you can add code to your application to authenticate a security principal and acquire an OAuth 2.0 token. To authenticate and acquire the token, you can use one of the [Microsoft identity platform authentication libraries](../active-directory/develop/reference-v2-libraries.md) or another open-source library that supports OpenID Connect 1.0. Your application can then use the access token to authorize a request against Azure Event Hubs.
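As a sketch of that flow, the following example (assuming the `Azure.Identity` and `Azure.Messaging.EventHubs` packages; all IDs and names are placeholders) lets the client acquire tokens with the app registration's client secret and send an event:

```csharp
using Azure.Identity;
using Azure.Messaging.EventHubs;
using Azure.Messaging.EventHubs.Producer;

// Placeholders: substitute the values from your app registration.
var credential = new ClientSecretCredential(
    tenantId: "<tenant-id>",
    clientId: "<application-client-id>",
    clientSecret: "<client-secret>");

// The client acquires and refreshes tokens for you when given a credential.
await using var producer = new EventHubProducerClient(
    "<your-namespace>.servicebus.windows.net", "<event-hub-name>", credential);

using EventDataBatch batch = await producer.CreateBatchAsync();
batch.TryAdd(new EventData(BinaryData.FromString("Hello, Event Hubs!")));
await producer.SendAsync(batch);
```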
For scenarios where acquiring tokens is supported, see the [Scenarios](https://aka.ms/msal-net-scenarios) section of the [Microsoft Authentication Library (MSAL) for .NET](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet) GitHub repository.
For scenarios where acquiring tokens is supported, see the [Scenarios](https://a
- [RBAC sample using the legacy Java com.microsoft.azure.eventhubs package](https://github.com/Azure/azure-event-hubs/tree/master/samples/Jav). We recommend that you migrate this sample to use the new package (`com.azure.messaging.eventhubs`). To learn more about using the new package in general, see samples [here](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/eventhubs/azure-messaging-eventhubs/src/samples/java/com/azure/messaging/eventhubs).
-## Next steps
+## Related content
- To learn more about Azure RBAC, see [What is Azure role-based access control (Azure RBAC)?](../role-based-access-control/overview.md) - To learn how to assign and manage Azure role assignments with Azure PowerShell, Azure CLI, or the REST API, see these articles: - [Add or remove Azure role assignments using Azure PowerShell](../role-based-access-control/role-assignments-powershell.md)
event-hubs Authenticate Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/authenticate-managed-identity.md
Title: Authentication a managed identity with Microsoft Entra ID
+ Title: Authenticate using managed identity
description: This article provides information about authenticating a managed identity with Microsoft Entra ID to access Azure Event Hubs resources- Previously updated : 02/08/2023+ Last updated : 06/26/2024
+#customer intent: As a developer, I want to know how to authenticate to an Azure event hub using a managed identity.
# Authenticate a managed identity with Microsoft Entra ID to access Event Hubs Resources Azure Event Hubs supports Microsoft Entra authentication with [managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md). Managed identities for Azure resources can authorize access to Event Hubs resources using Microsoft Entra credentials from applications running in Azure Virtual Machines (VMs), Function apps, Virtual Machine Scale Sets, and other services. By using managed identities for Azure resources together with Microsoft Entra authentication, you can avoid storing credentials with your applications that run in the cloud. This article shows how to authorize access to an event hub by using a managed identity from an Azure VM. ## Enable managed identities on a VM
-Before you use managed identities for Azure resources to access Event Hubs resources from your VM, you must first enable managed identities for Azure Resources on the VM. To learn how to enable managed identities for Azure resources, see one of these articles:
--- [Azure portal](../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md)-- [Azure PowerShell](../active-directory/managed-identities-azure-resources/qs-configure-powershell-windows-vm.md)-- [Azure CLI](../active-directory/managed-identities-azure-resources/qs-configure-cli-windows-vm.md)-- [Azure Resource Manager template](../active-directory/managed-identities-azure-resources/qs-configure-template-windows-vm.md)-- [Azure Resource Manager client libraries](../active-directory/managed-identities-azure-resources/qs-configure-sdk-windows-vm.md)-
-<a name='grant-permissions-to-a-managed-identity-in-azure-ad'></a>
+Before you use managed identities for Azure resources to access Event Hubs resources from your VM, you must first enable managed identities for Azure resources on the VM. To learn how to enable managed identities for Azure resources, see [Configure managed identities on Azure VMs](../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md).
## Grant permissions to a managed identity in Microsoft Entra ID
-To authorize a request to Event Hubs service from a managed identity in your application, first configure Azure role-based access control (Azure RBAC) settings for that managed identity. Azure Event Hubs defines Azure roles that encompass permissions for sending and reading from Event Hubs. When the Azure role is assigned to a managed identity, the managed identity is granted access to Event Hubs data at the appropriate scope. For more information about assigning Azure roles, see [Authenticate with Microsoft Entra ID for access to Event Hubs resources](authorize-access-azure-active-directory.md).
+To authorize a request to the Event Hubs service from a managed identity in your application, first configure Azure role-based access control (RBAC) settings for that managed identity. Azure Event Hubs defines Azure roles that encompass permissions for sending events to and receiving events from Event Hubs. When an Azure role is assigned to a managed identity, the managed identity is granted access to Event Hubs data at the appropriate scope. For more information about assigning Azure roles, see [Authenticate with Microsoft Entra ID for access to Event Hubs resources](authorize-access-azure-active-directory.md).
-## Use Event Hubs with managed identities
-To use Event Hubs with managed identities, assign an Event Hubs RBAC role at the appropriate scope to the identity. The procedure in this section uses a simple application that runs under a managed identity and accesses Event Hubs resources.
+## Sample application
+The procedure in this section uses a simple application that runs under a managed identity and accesses Event Hubs resources.
Here we're using a sample web application hosted in [Azure App Service](https://azure.microsoft.com/services/app-service/). For step-by-step instructions for creating a web application, see [Create an ASP.NET Core web app in Azure](../app-service/quickstart-dotnetcore.md).
Assign one of the [Event Hubs roles](authorize-access-azure-active-directory.md#
4. Assign this identity to the **Event Hubs Data Owner** role at the namespace level or event hub level. 5. Run the web application, enter the namespace name and event hub name, a message, and select **Send**. To receive the event, select **Receive**.
-#### [Azure.Messaging.EventHubs (latest)](#tab/latest)
-You can now launch your web application and point your browser to the sample aspx page. You can find the sample web application that sends and receives data from Event Hubs resources in the [GitHub repo](https://github.com/Azure/azure-event-hubs/tree/master/samples/DotNet/Azure.Messaging.EventHubs/ManagedIdentityWebApp).
+You can find the sample web application that sends and receives data from Event Hubs resources in the [GitHub repo](https://github.com/Azure/azure-event-hubs/tree/master/samples/DotNet/Azure.Messaging.EventHubs/ManagedIdentityWebApp).
Install the latest package from [NuGet](https://www.nuget.org/packages/Azure.Messaging.EventHubs/), and start sending events to Event Hubs using **EventHubProducerClient** and receiving events using **EventHubConsumerClient**.
protected async void btnReceive_Click(object sender, EventArgs e)
} ```
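Outside the web app, a minimal receive sketch with a managed identity could look like the following; it assumes the `Azure.Identity` and `Azure.Messaging.EventHubs` packages, and the namespace and event hub names are placeholders:

```csharp
using Azure.Identity;
using Azure.Messaging.EventHubs.Consumer;

// Sketch: read events using the app's managed identity; names are placeholders.
await using var consumer = new EventHubConsumerClient(
    EventHubConsumerClient.DefaultConsumerGroupName,
    "<your-namespace>.servicebus.windows.net",
    "<event-hub-name>",
    new DefaultAzureCredential());

// ReadEventsAsync iterates indefinitely; stop after the first event here.
await foreach (PartitionEvent partitionEvent in consumer.ReadEventsAsync())
{
    Console.WriteLine(partitionEvent.Data.EventBody.ToString());
    break;
}
```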
-#### [Microsoft.Azure.EventHubs (legacy)](#tab/old)
-You can now launch your web application and point your browser to the sample aspx page. You can find the sample web application that sends and receives data from Event Hubs resources in the [GitHub repo](https://github.com/Azure/azure-event-hubs/tree/master/samples/DotNet/Microsoft.Azure.EventHubs/Rbac/ManagedIdentityWebApp).
-
-Install the latest package from [NuGet](https://www.nuget.org/packages/Microsoft.Azure.EventHubs/), and start sending to and receiving data from Event hubs using the EventHubClient as shown in the following code:
-
-```csharp
-var ehClient = EventHubClient.CreateWithManagedIdentity(new Uri($"sb://{EventHubNamespace}/"), EventHubName);
-```
- ## Event Hubs for Kafka You can use Apache Kafka applications to send messages to and receive messages from Azure Event Hubs using managed identity OAuth. See the following sample on GitHub: [Event Hubs for Kafka - send and receive messages using managed identity OAuth](https://github.com/Azure/azure-event-hubs-for-kafka/tree/master/tutorials/oauth/java/managedidentity).
You can use Apache Kafka applications to send messages to and receive messages f
- To learn how to use the Apache Kafka protocol to send events to and receive events from an event hub using a managed identity, see [Event Hubs for Kafka sample to send and receive messages using a managed identity](https://github.com/Azure/azure-event-hubs-for-kafka/tree/master/tutorials/oauth/java/managedidentity).
-.
-## Next steps
+## Related content
- See the following article to learn about managed identities for Azure resources: [What are managed identities for Azure resources?](../active-directory/managed-identities-azure-resources/overview.md) - See the following related articles: - [Authenticate requests to Azure Event Hubs from an application using Microsoft Entra ID](authenticate-application.md)
event-hubs Authenticate Shared Access Signature https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/authenticate-shared-access-signature.md
Title: Authenticate access to Azure Event Hubs with shared access signatures description: This article shows you how to authenticate access to Event Hubs resources using shared access signatures. Previously updated : 03/13/2023 Last updated : 06/25/2024 ms.devlang: csharp # ms.devlang: csharp, java, javascript, php
Shared access signature (SAS) gives you granular control over the type of access
This article covers authenticating the access to Event Hubs resources using SAS. To learn about **authorizing** access to Event Hubs resources using SAS, see [this article](authorize-access-shared-access-signature.md). > [!NOTE]
-> Microsoft recommends that you use Microsoft Entra credentials when possible as a security best practice, rather than using the shared access signatures, which can be more easily compromised. While you can continue to use shared access signatures (SAS) to grant fine-grained access to your Event Hubs resources, Microsoft Entra ID offers similar capabilities without the need to manage SAS tokens or worry about revoking a compromised SAS.
+> We recommend that you use Microsoft Entra credentials when possible as a security best practice, rather than shared access signatures, which can be more easily compromised. While you can continue to use shared access signatures (SAS) to grant fine-grained access to your Event Hubs resources, Microsoft Entra ID offers similar capabilities without the need to manage SAS tokens or worry about revoking a compromised SAS.
> > For more information about Microsoft Entra integration in Azure Event Hubs, see [Authorize access to Event Hubs using Microsoft Entra ID](authorize-access-azure-active-directory.md). ## Configuring for SAS authentication
-You can configure a SAS rule on an Event Hubs namespace, or an entity (event hub instance or Kafka Topic in an event hub). Configuring a SAS rule on a consumer group is currently not supported, but you can use rules configured on a namespace or entity to secure access to consumer group.
+You can configure a SAS rule on an Event Hubs namespace, or an entity (event hub or Kafka topic). Configuring a SAS rule on a consumer group is currently not supported, but you can use rules configured on a namespace or entity to secure access to consumer groups. The following image shows how the authorization rules apply on sample entities.
-The following image shows how the authorization rules apply on sample entities.
-
-![Configure authorization rule](./media/authenticate-shared-access-signature/configure-sas-authorization-rule.png)
+![Diagram that shows event hubs with listen, send, and manage rules.](./media/authenticate-shared-access-signature/configure-sas-authorization-rule.png)
In this example, the sample Event Hubs namespace (ExampleNamespace) has two entities: eh1 and Kafka topic1. The authorization rules are defined both at the entity level and at the namespace level.
-The manageRuleNS, sendRuleNS, and listenRuleNS authorization rules apply to both eh1 and t1. The listenRule-eh and sendRule-eh authorization rules apply only to eh1 and sendRuleT authorization rule applies only to topic1.
+The manageRuleNS, sendRuleNS, and listenRuleNS authorization rules apply to both eh1 and topic1. The listenRule-eh and sendRule-eh authorization rules apply only to eh1, and the sendRuleT authorization rule applies only to topic1.
-When you use sendRuleNS authorization rule, client applications can send to both eh1 and topic1. When sendRuleT authorization rule is used, it enforces granular access to topic1 only and hence client applications using this rule for access now can't send to eh1, but only to topic1.
+When you use the sendRuleNS authorization rule, client applications can send to both eh1 and topic1. The sendRuleT authorization rule enforces granular access to topic1 only, so client applications that use it can send only to topic1, not to eh1.
## Generate a Shared Access Signature token Any client that has access to the name of an authorization rule and one of its signing keys can generate a SAS token. The token is generated by crafting a string in the following format:
Any client that has access to the name of an authorization rule and one of its
- `sr` – URI of the resource being accessed. - `sig` – Signature.
-The signature-string is the SHA-256 hash computed over the resource URI (scope as described in the previous section) and the string representation of the token expiry instant, separated by CRLF. The hash computation looks similar to the following pseudo code and returns a 256-bit/32-byte hash value.
+The signature-string is the HMAC-SHA256 hash computed, using the rule's signing key, over the resource URI (scope as described in the previous section) and the string representation of the token expiry instant, separated by carriage return and line feed (CRLF). The hash computation looks similar to the following pseudo code and returns a 256-bit/32-byte hash value.
``` HMAC-SHA256(<signing key>, 'https://<yournamespace>.servicebus.windows.net/'+'\n'+ 1438205742)
For example, to define authorization rules scoped down to only sending/publishin
> [!NOTE]
-> Although it's not recommended, it is possible to equip devices with tokens that grant access to an event hub or a namespace. Any device that holds this token can send messages directly to that event hub. Furthermore, the device cannot be blocklisted from sending to that event hub.
+> Although we don't recommend it, it's possible to equip devices with tokens that grant access to an event hub or a namespace. Any device that holds this token can send messages directly to that event hub. Furthermore, the device cannot be blocklisted from sending to that event hub.
>
-> It's always recommended to give specific and granular scopes.
+> We recommend that you give specific and granular scopes.
> [!IMPORTANT] > Once the tokens have been created, each client is provisioned with its own unique token.
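For illustration, here's a minimal C# sketch of the token generation described above; the `CreateSasToken` helper name and the one-hour lifetime are illustrative choices:

```csharp
using System;
using System.Globalization;
using System.Net;
using System.Security.Cryptography;
using System.Text;

// Illustrative helper: build a SAS token for a namespace or entity URI.
static string CreateSasToken(string resourceUri, string keyName, string key)
{
    // Expiry as Unix epoch seconds; one hour is an arbitrary example lifetime.
    long expiry = DateTimeOffset.UtcNow.AddHours(1).ToUnixTimeSeconds();

    // Sign "<url-encoded-uri>\n<expiry>" with HMAC-SHA256 keyed on the rule's key.
    string stringToSign = WebUtility.UrlEncode(resourceUri) + "\n" + expiry;
    using var hmac = new HMACSHA256(Encoding.UTF8.GetBytes(key));
    string signature = Convert.ToBase64String(
        hmac.ComputeHash(Encoding.UTF8.GetBytes(stringToSign)));

    return string.Format(CultureInfo.InvariantCulture,
        "SharedAccessSignature sr={0}&sig={1}&se={2}&skn={3}",
        WebUtility.UrlEncode(resourceUri), WebUtility.UrlEncode(signature),
        expiry, keyName);
}

// Example usage with placeholder values.
Console.WriteLine(CreateSasToken(
    "https://<yournamespace>.servicebus.windows.net/",
    "RootManageSharedAccessKey", "<signing-key>"));
```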
For example, to define authorization rules scoped down to only sending/publishin
## Authenticating Event Hubs consumers with SAS
-To authenticate back-end applications that consume from the data generated by Event Hubs producers, Event Hubs token authentication requires its clients to either have the **manage** rights or the **listen** privileges assigned to its Event Hubs namespace or event hub instance or topic. Data is consumed from Event Hubs using consumer groups. While SAS policy gives you granular scope, this scope is defined only at the entity level and not at the consumer level. It means that the privileges defined at the namespace level or the event hub instance or topic level will be applied to the consumer groups of that entity.
+To authenticate back-end applications that consume the data generated by Event Hubs producers, Event Hubs token authentication requires its clients to have either the **manage** right or the **listen** privilege assigned to its Event Hubs namespace or event hub instance or topic. Data is consumed from Event Hubs using consumer groups. While SAS policy gives you granular scope, this scope is defined only at the entity level and not at the consumer level. It means that the privileges defined at the namespace level or the event hub or topic level are applied to the consumer groups of that entity.
-## Disabling Local/SAS Key authentication
+## Disable local/SAS key authentication
For certain organizational security requirements, you might want to disable local/SAS key authentication completely and rely on Microsoft Entra ID based authentication, which is the recommended way to connect with Azure Event Hubs. You can disable local/SAS key authentication at the Event Hubs namespace level using the Azure portal or an Azure Resource Manager template.
-### Disabling Local/SAS Key authentication via the portal
+### Disable local/SAS key authentication via the portal
You can disable local/SAS key authentication for a given Event Hubs namespace using the Azure portal.
-As shown in the following image, in the namespace overview section, select **Local Authentication**.
+1. Navigate to your Event Hubs namespace in the Azure portal.
+1. On the **Overview** page, select **Enabled** for **Local Authentication** as shown in the following image.
-![Namespace overview for disabling local auth](./media/authenticate-shared-access-signature/disable-local-auth-overview.png)
+ :::image type="content" source="./media/authenticate-shared-access-signature/disable-local-auth-overview.png" alt-text="Screenshot that shows the Local Authentication selected." lightbox="./media/authenticate-shared-access-signature/disable-local-auth-overview.png":::
+1. On the **Local Authentication** popup, select **Disabled**, and select **OK**.
-And then select **Disabled** option and select **Ok** as shown in the following image.
-![Disabling local auth](./media/authenticate-shared-access-signature/disabling-local-auth.png)
+ ![Screenshot that shows the Local Authentication popup with the Disabled option selected.](./media/authenticate-shared-access-signature/disabling-local-auth.png)
-### Disabling Local/SAS Key authentication using a template
+### Disable local/SAS key authentication using a template
You can disable local authentication for a given Event Hubs namespace by setting the `disableLocalAuth` property to `true` as shown in the following Azure Resource Manager template (ARM template). ```json
You can disable local authentication for a given Event Hubs namespace by setting
See the following articles: - [Authorize using SAS](authenticate-shared-access-signature.md)-- [Authorize using Azure role-based access control (Azure RBAC)](authorize-access-azure-active-directory.md)
+- [Authorize using Azure role-based access control (RBAC)](authorize-access-azure-active-directory.md)
- [Learn more about Event Hubs](event-hubs-about.md) See the following related articles: - [Authenticate requests to Azure Event Hubs from an application using Microsoft Entra ID](authenticate-application.md)-- [Authenticate a managed identity with Microsoft Entra ID to access Event Hubs Resources](authenticate-managed-identity.md)
+- [Authenticate a managed identity with Microsoft Entra ID for accessing Event Hubs Resources](authenticate-managed-identity.md)
- [Authorize access to Event Hubs resources using Microsoft Entra ID](authorize-access-azure-active-directory.md) - [Authorize access to Event Hubs resources using Shared Access Signatures](authorize-access-shared-access-signature.md)
event-hubs Authorize Access Azure Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/authorize-access-azure-active-directory.md
Title: Authorize access with Microsoft Entra ID description: This article provides information on authorizing access to Event Hubs resources using Microsoft Entra ID. - Previously updated : 12/11/2023+ Last updated : 06/26/2024
+#customer intent: As an Azure Event Hubs user, I want to know how to authorize requests to event hubs using Microsoft Entra ID.
# Authorize access to Event Hubs resources using Microsoft Entra ID
-Azure Event Hubs supports using Microsoft Entra ID to authorize requests to Event Hubs resources. With Microsoft Entra ID, you can use Azure role-based access control (Azure RBAC) to grant permissions to a security principal, which can be a user, or an application service principal. To learn more about roles and role assignments, see [Understanding the different roles](../role-based-access-control/overview.md).
+Azure Event Hubs supports using Microsoft Entra ID to authorize requests to Event Hubs resources. With Microsoft Entra ID, you can use Azure role-based access control (RBAC) to grant permissions to a security principal, which can be a user or an application service principal. To learn more about roles and role assignments, see [Understanding the different roles](../role-based-access-control/overview.md).
## Overview When a security principal (a user, or an application) attempts to access an Event Hubs resource, the request must be authorized. With Microsoft Entra ID, access to a resource is a two-step process.
For more information about how built-in roles are defined, see [Understand role
- [Event Hubs for Kafka - OAuth samples](https://github.com/Azure/azure-event-hubs-for-kafka/tree/master/tutorials/oauth).
-## Next steps
+## Related content
- Learn how to assign an Azure built-in role to a security principal, see [Authenticate access to Event Hubs resources using Microsoft Entra ID](authenticate-application.md). - Learn [how to create custom roles with Azure RBAC](https://github.com/Azure/azure-event-hubs/tree/master/samples/DotNet/Microsoft.Azure.EventHubs/Rbac/CustomRole). - Learn [how to use Microsoft Entra ID with EH](https://github.com/Azure/azure-event-hubs/tree/master/samples/DotNet/Microsoft.Azure.EventHubs/Rbac/AzureEventHubsSDK)
event-hubs Authorize Access Event Hubs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/authorize-access-event-hubs.md
Title: Authorize access to Azure Event Hubs description: This article provides information about different options for authorizing access to Azure Event Hubs resources. - Previously updated : 03/13/2023+ Last updated : 06/26/2024
+#customer intent: As an Azure Event Hubs user, I want to know how to authorize requests to event hubs.
# Authorize access to Azure Event Hubs
-Every time you publish or consume events from an event hub, your client is trying to access Event Hubs resources. Every request to a secure resource must be authorized so that the service can ensure that the client has the required permissions to publish or consume the data.
+Every time you publish events to or consume events from an event hub, your client is trying to access Event Hubs resources. Every request to a secure resource must be authorized so that the service can ensure that the client has the required permissions to publish or consume the data.
Azure Event Hubs offers the following options for authorizing access to secure resources:
Azure Event Hubs offers the following options for authorizing access to secure r
> [!NOTE] > This article applies to both Event Hubs and [Apache Kafka](azure-event-hubs-kafka-overview.md) scenarios.
-<a name='azure-active-directory'></a>
## Microsoft Entra ID
-Microsoft Entra integration with Event Hubs resources provides Azure role-based access control (Azure RBAC) for fine-grained control over a client's access to resources. You can use Azure RBAC to grant permissions to security principal, which may be a user, a group, or an application service principal. Microsoft Entra authenticates the security principal and returns an OAuth 2.0 token. The token can be used to authorize a request to access an Event Hubs resource.
+Microsoft Entra integration with Event Hubs resources provides Azure role-based access control (RBAC) for fine-grained control over a client's access to resources. You can use Azure RBAC to grant permissions to a security principal, which can be a user, a group, or an application service principal. Microsoft Entra authenticates the security principal and returns an OAuth 2.0 token. The token can be used to authorize a request to access an Event Hubs resource.
For more information about authenticating with Microsoft Entra ID, see the following articles:
event-hubs Authorize Access Shared Access Signature https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/authorize-access-shared-access-signature.md
Title: Authorize access with a shared access signature in Azure Event Hubs
+ Title: Authorize access with shared access signatures
description: This article provides information about authorizing access to Azure Event Hubs resources by using Shared Access Signatures (SAS).- Previously updated : 03/13/2023+ Last updated : 06/26/2024
+#customer intent: As an Azure Event Hubs user, I want to know how to authorize requests to event hubs using Shared Access Signatures (SAS).
# Authorizing access to Event Hubs resources using Shared Access Signatures
SAS is a claim-based authorization mechanism using simple tokens. When you use S
## Shared access authorization policies Each Event Hubs namespace and each Event Hubs entity (an event hub or a Kafka topic) has a shared access authorization policy made up of rules. The policy at the namespace level applies to all entities inside the namespace, irrespective of their individual policy configuration.+ For each authorization policy rule, you decide on three pieces of information: name, scope, and rights. The name is a unique name in that scope. The scope is the URI of the resource in question. For an Event Hubs namespace, the scope is the fully qualified domain name (FQDN), such as `https://<yournamespace>.servicebus.windows.net/`. The rights provided by the policy rule can be a combination of:+ - **Send** – Gives the right to send messages to the entity - **Listen** – Gives the right to listen or receive messages from the entity - **Manage** – Gives the right to manage the topology of the namespace, including creation and deletion of entities. The **Manage** right includes the **Send** and **Listen** rights. A namespace or entity policy can hold up to 12 shared access authorization rules, providing room for the three sets of rules, each covering the basic rights, and the combination of Send and Listen. This limit underlines that the SAS policy store isn't intended to be a user or service account store. If your application needs to grant access to Event Hubs resources based on user or service identities, it should implement a security token service that issues SAS tokens after an authentication and access check.
-An authorization rule is assigned a **primary key** and a **secondary key**. These keys are cryptographically strong keys. Don't lose them or leak them. They'll always be available in the Azure portal. You can use either of the generated keys, and you can regenerate them at any time. If you regenerate or change a key in the policy, all previously issued tokens based on that key become instantly invalid. However, ongoing connections created based on such tokens will continue to work until the token expires.
+An authorization rule is assigned a **primary key** and a **secondary key**. These keys are cryptographically strong keys. Don't lose them or leak them. They'll always be available in the Azure portal. You can use either of the generated keys, and you can regenerate them at any time. If you regenerate or change a key in the policy, all previously issued tokens based on that key become instantly invalid. However, ongoing connections created based on such tokens continue to work until the token expires.
-When you create an Event Hubs namespace, a policy rule named **RootManageSharedAccessKey** is automatically created for the namespace. This policy has **manage** permissions for the entire namespace. It's recommended that you treat this rule like an administrative root account and don't use it in your application. You can create additional policy rules in the **Configure** tab for the namespace in the portal, via PowerShell or Azure CLI.
+When you create an Event Hubs namespace, a policy rule named **RootManageSharedAccessKey** is automatically created for the namespace. This policy has **manage** permissions for the entire namespace. It's recommended that you treat this rule like an administrative root account and don't use it in your application. You can create more policy rules in the **Configure** tab for the namespace in the portal, via PowerShell or Azure CLI.
## Best practices when using SAS When you use shared access signatures in your applications, you need to be aware of two potential risks: - If a SAS is leaked, it can be used by anyone who obtains it, which can potentially compromise your Event Hubs resources.-- If a SAS provided to a client application expires and the application is unable to retrieve a new SAS from your service, then the application's functionality may be hindered.
+- If a SAS provided to a client application expires and the application is unable to retrieve a new SAS from your service, then the application's functionality might be hindered.
The following recommendations for using shared access signatures can help mitigate these risks: -- **Have clients automatically renew the SAS if necessary**: Clients should renew the SAS well before expiration, to allow time for retries if the service providing the SAS is unavailable. If your SAS is meant to be used for a small number of immediate, short-lived operations that are expected to be completed within the expiration period, then it may be unnecessary as the SAS isn't expected to be renewed. However, if you have client that is routinely making requests via SAS, then the possibility of expiration comes into play. The key consideration is to balance the need for the SAS to be short-lived (as previously stated) with the need to ensure that client is requesting renewal early enough (to avoid disruption due to the SAS expiring prior to a successful renewal).-- **Be careful with the SAS start time**: If you set the start time for SAS to **now**, then due to clock skew (differences in current time according to different machines), failures may be observed intermittently for the first few minutes. In general, set the start time to be at least 15 minutes in the past. Or, don't set it at all, which will make it valid immediately in all cases. The same generally applies to the expiry time as well. Remember that you may observe up to 15 minutes of clock skew in either direction on any request.
+- **Have clients automatically renew the SAS if necessary**: Clients should renew the SAS well before expiration, to allow time for retries if the service providing the SAS is unavailable. If your SAS is meant to be used for a small number of immediate, short-lived operations that are expected to be completed within the expiration period, then it might be unnecessary as the SAS isn't expected to be renewed. However, if you have a client that is routinely making requests via SAS, then the possibility of expiration comes into play. The key consideration is to balance the need for the SAS to be short-lived (as previously stated) with the need to ensure that the client is requesting renewal early enough (to avoid disruption due to the SAS expiring before a successful renewal).
+- **Be careful with the SAS start time**: If you set the start time for SAS to **now**, then due to clock skew (differences in current time according to different machines), failures might be observed intermittently for the first few minutes. In general, set the start time to be at least 15 minutes in the past. Or, don't set it at all, which makes it valid immediately in all cases. The same generally applies to the expiry time as well. Remember that you might observe up to 15 minutes of clock skew in either direction on any request.
- **Be specific with the resource to be accessed**: A security best practice is to provide a user with the minimum required privileges. If a user only needs read access to a single entity, then grant them read access to that single entity, and not read/write/delete access to all entities. It also helps lessen the damage if a SAS is compromised because the SAS has less power in the hands of an attacker.-- **Don't always use SAS**: Sometimes the risks associated with a particular operation against your Event Hubs outweigh the benefits of SAS. For such operations, create a middle-tier service that writes to your Event Hubs after business rule validation, authentication, and auditing.
+- **Don't always use SAS**: Sometimes the risks associated with a particular operation against your Event Hubs outweigh the benefits of SAS. For such operations, create a middle-tier service that writes to your event hubs after business rule validation, authentication, and auditing.
- **Always use HTTPS**: Always use HTTPS to create or distribute a SAS. If a SAS is passed over HTTP and intercepted, an attacker performing a man-in-the-middle attack is able to read the SAS and then use it just as the intended user could have, potentially compromising sensitive data or allowing for data corruption by the malicious user. ## Conclusion Shared access signatures are useful for providing limited permissions to Event Hubs resources to your clients. They're a vital part of the security model for any application using Azure Event Hubs. If you follow the best practices listed in this article, you can use SAS to provide greater flexibility of access to your resources, without compromising the security of your application.
-## Next steps
+## Related content
See the following related articles:
+- [Authenticate requests to Azure Event Hubs using Shared Access Signatures](authenticate-shared-access-signature.md)
- [Authenticate requests to Azure Event Hubs from an application using Microsoft Entra ID](authenticate-application.md) - [Authenticate a managed identity with Microsoft Entra ID to access Event Hubs Resources](authenticate-managed-identity.md)-- [Authenticate requests to Azure Event Hubs using Shared Access Signatures](authenticate-shared-access-signature.md)-- [Authorize access to Event Hubs resources using Microsoft Entra ID](authorize-access-azure-active-directory.md)
event-hubs Event Hubs Bicep Namespace Event Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-bicep-namespace-event-hub.md
Last updated 03/22/2022
Azure Event Hubs is a Big Data streaming platform and event ingestion service, capable of receiving and processing millions of events per second. Event Hubs can process and store events, data, or telemetry produced by distributed software and devices. Data sent to an event hub can be transformed and stored using any real-time analytics provider or batching/storage adapters. For a detailed overview of Event Hubs, see [Event Hubs overview](event-hubs-about.md) and [Event Hubs features](event-hubs-features.md). In this quickstart, you create an event hub by using [Bicep](../azure-resource-manager/bicep/overview.md). You deploy a Bicep file to create a namespace of type [Event Hubs](./event-hubs-about.md), with one event hub. ## Prerequisites
event-hubs Event Hubs Capture Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-capture-overview.md
The capture feature is included in the premium tier so there's no extra charge f
Capture doesn't consume egress quota as it is billed separately. ## Integration with Event Grid
-You can create an Azure Event Grid subscription with an Event Hubs namespace as its source. The following tutorial shows you how to create an Event Grid subscription with an event hub as a source and an Azure Functions app as a sink: [Process and migrate captured Event Hubs data to an Azure Synapse Analytics using Event Grid and Azure Functions](store-captured-data-data-warehouse.md).
+You can create an Azure Event Grid subscription with an Event Hubs namespace as its source. The following tutorial shows you how to create an Event Grid subscription with an event hub as a source and an Azure Functions app as a sink: [Process and migrate captured Event Hubs data to an Azure Synapse Analytics using Event Grid and Azure Functions](../event-grid/event-hubs-integration.md).
## Explore captured files To learn how to explore captured Avro files, see [Explore captured Avro files](explore-captured-avro-files.md).
event-hubs Event Hubs Quickstart Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-quickstart-cli.md
In this quickstart, you create an event hub using Azure CLI. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
event-hubs Event Hubs Quickstart Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-quickstart-powershell.md
In this quickstart, you create an event hub using Azure PowerShell.
## Prerequisites An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). If you're using PowerShell locally, you must run the latest version of PowerShell to complete this quickstart. If you need to install or upgrade, see [Install and Configure Azure PowerShell](/powershell/azure/install-azure-powershell).
event-hubs Event Hubs Resource Manager Namespace Event Hub Enable Capture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-resource-manager-namespace-event-hub-enable-capture.md
Creates a namespace of type `Microsoft.EventHub/Namespaces`, with one event hub,
## PowerShell Deploy your template to enable Event Hubs Capture into Azure Storage:
event-hubs Geo Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/geo-replication.md
Title: 'Azure Event Hubs geo-replication' description: 'This article describes the Azure Event Hubs geo-replication feature' ++ Last updated 06/10/2024 # Geo-replication (Public Preview)
-There are two features that provide Geo-disaster recovery in Azure Event Hubs. There's ***Geo-disaster recovery*** (Metadata DR) that just provides replication of metadata and then a second feature, in public preview, ***Geo-replication*** that provides replication of both metadata and the data itself. Neither geo-disaster recovery feature should be confused with Availability Zones. Both geographic recovery features provide resilience between Azure regions such as East US and West US. Availability Zone support provides resilience within a specific geographic region, such as East US. For more details on Availability Zones, read the documentation here: [Event Hubs Availability Zone support](./event-hubs-availability-and-consistency.md).
+There are two features that provide geo-disaster recovery in Azure Event Hubs.
- > [!IMPORTANT]
+- ***Geo-disaster recovery*** (Metadata DR), which provides replication of **metadata only**.
+- ***Geo-replication*** (public preview), which provides replication of **both metadata and the data**.
+
+These features shouldn't be confused with Availability Zones. Both geographic recovery features provide resilience between Azure regions such as East US and West US. Availability Zone support provides resilience within a specific geographic region, such as East US. For more information on Availability Zones, see [Event Hubs Availability Zone support](./event-hubs-availability-and-consistency.md).
+
+> [!IMPORTANT]
> - This feature is currently in public preview, and as such shouldn't be used in production scenarios.
-> - The below regions are currently supported in the public preview.
+> - The following regions are currently supported in the public preview.
> > | US | Europe | > |||
There are two features that provide Geo-disaster recovery in Azure Event Hubs. T
> | | Spain Central | > | | Norway East |
-**High level feature differences**
+## Metadata disaster recovery vs. Geo-replication of metadata and data
The Metadata DR feature replicates configuration information for a namespace from a primary namespace to a secondary namespace. It supports a one-time failover to the secondary region. During a customer-initiated failover, the alias name for the namespace is repointed to the secondary namespace and then the pairing is broken. No data other than configuration information is replicated, nor are permission assignments replicated.
-The Geo-replication feature replicates configuration information and all of the data from a primary namespace to one, or more secondary namespaces. When a failover is performed, the selected secondary becomes the primary and the previous primary becomes a secondary. Users can perform a failover back to the original primary when desired.
+The newer Geo-replication feature replicates configuration information and all of the data from a primary namespace to one or more secondary namespaces. When a failover is performed, the selected secondary becomes the primary and the previous primary becomes a secondary. Users can perform a failover back to the original primary when desired.
-This rest of this document is focused on the Geo-replication feature. For details on the metadata DR feature, read [Event Hubs Geo-disater recovery for metadata](./event-hubs-geo-dr.md).
+The rest of this article focuses on the Geo-replication feature. For details on the metadata DR feature, see [Event Hubs Geo-disaster recovery for metadata](./event-hubs-geo-dr.md).
## Geo-replication
-The public preview of the Geo-replication feature is supported for namespaces in Event Hubs self-serve scaling Dedicated clusters. You can use the feature with new, or existing namespaces in dedicated self-serve clusters. The following features aren't supported with Geo-replication:
-- Customer Managed Keys (CMK)-- Managed Identity for Capture-- VNet features (service endpoints, or private endpoints)
+The public preview of the Geo-replication feature is supported for namespaces in Event Hubs self-serve scaling dedicated clusters. You can use the feature with new or existing namespaces in dedicated self-serve clusters. The following features aren't supported with Geo-replication:
+
+- Customer-managed keys (CMK)
+- Managed identity for capture
+- Virtual network features (service endpoints, or private endpoints)
- Large messages support (now in public preview) - Kafka Transactions (now in public preview)
-Some of the key aspects of Geo Data Replication public preview are:
-- Primary-secondary replication model ΓÇô Geo-replication is built on primary-secondary replication model, where at a given time thereΓÇÖs only one Primary namespace that serves event producers and event consumers. -- Event Hubs performs fully managed byte-to-byte replication of metadata, event data and consumer offset across secondaries with the configured consistency levels. -- Stable namespace FQDN ΓÇô The FQDN does not need to change when promotion is performed.
+Some of the key aspects of the Geo-replication public preview are:
+
+- Primary-secondary replication model – Geo-replication is built on a primary-secondary replication model, where at a given time there's only one primary namespace that serves event producers and event consumers.
+- Event Hubs performs fully managed byte-to-byte replication of metadata, event data, and consumer offset across secondaries with the configured consistency levels.
+- Stable namespace fully qualified domain name (FQDN) – The FQDN doesn't need to change when promotion is performed.
- Replication consistency - There are two replication consistency settings: synchronous and asynchronous. - User-managed promotion of a secondary to being the new primary. Changing a secondary to being a new primary is done in two ways:
-The Geo-replication feature replicates all data and metadata from the primary region to the selected secondary regions. The namespace FQDN always points to the primary region.
- :::image type="content" source="./media/geo-replication/a-as-primary.png" alt-text="Diagram showing when region A is primary, B is secondary.":::
+- **Planned**: a promotion of the secondary to primary where traffic isn't processed until the new primary catches up with all of the data held by the former primary instance.
+- **Forced**: a failover where the secondary becomes the primary as fast as possible. The Geo-replication feature replicates all data and metadata from the primary region to the selected secondary regions. The namespace FQDN always points to the primary region.
+
-When a customer initiates a promotion of a secondary, the FQDN points to the region selected to be the new primary. The old primary then becomes a secondary. You can promote your secondary to be the new primary for reasons other than a failover. Those reasons can include application upgrades, failover testing, or any number of other things. In those situations, it's common to switch back when those activities are completed.
+When you initiate a promotion of a secondary, the FQDN points to the region selected to be the new primary. The old primary then becomes a secondary. You can promote your secondary to be the new primary for reasons other than a failover. Those reasons can include application upgrades, failover testing, or any number of other things. In those situations, it's common to switch back when those activities are completed.
:::image type="content" source="./media/geo-replication/b-as-primary.png" alt-text="Diagram showing when B is made the primary, that A becomes the new secondary.":::
-Secondary regions are added, or removed at the customer's discretion.
-There are some current limitations worth noting:
+Secondary regions are added or removed at the customer's discretion. There are some current limitations worth noting:
+ - There's no ability to support read-only views on secondary regions. - There's no automatic promotion/failover capability. All promotions are customer initiated. - Secondary regions must be different from the primary region. You can't select another dedicated cluster in the same region.
With asynchronous replication enabled, all messages are committed in the primary
**Synchronous replication**
-When synchronous replication is enabled, published events are replicated to the secondary, which must confirm the message before it's committed in the primary. With synchronous replication, your application publishes at the rate it takes to publish, replicate, acknowledge and commit. It also means that your application is tied to the availability of both regions. If the secondary region goes down, messages can't be acknowledged or committed.
+When synchronous replication is enabled, published events are replicated to the secondary, which must confirm the message before it's committed in the primary. With synchronous replication, your application publishes at the rate it takes to publish, replicate, acknowledge, and commit. It also means that your application is tied to the availability of both regions. If the secondary region goes down, messages can't be acknowledged or committed.
**Replication consistency comparison** With synchronous replication:+ - Latency is longer due to the distributed commit. - Availability is tied to the availability of two regions. If one region goes down, your namespace is unavailable. - Received data always resides in at least two regions (only two regions supported in the initial public preview). Synchronous replication provides the greatest assurance that your data is safe. With synchronous replication, when an event is committed, it's committed in all of the regions configured for Geo-replication. When synchronous replication is enabled though, your application availability can be reduced because it depends on the availability of both regions. + Enabling asynchronous replication doesn't impact latency very much, and service availability isn't impacted by the loss of a secondary region. Asynchronous replication doesn't have the absolute guarantee that all regions have the data before it's committed, like synchronous replication does. You can also set the amount of time that your secondary can be out of sync before incoming traffic is throttled. The setting can be from 5 minutes to 1,440 minutes, which is one day. If you're looking to use regions with a large distance between them, then asynchronous replication is likely the best option for you.
-Replication consistency configuration can change after Geo-replication configuration. You can go from synchronous to asynchronous, or from asynchronous to synchronous. If you go from synchronous to asynchronous, your latency, and application availability improves. If you go from asynchronous to synchronous, your secondary becomes configured as synchronous after lag reaches zero. If you're running with a continual lag for whatever reason, then you may need to pause your publishers in order for lag to reach zero and your mode to be able to switch to synchronous.
+Replication consistency configuration can change after Geo-replication configuration. You can go from synchronous to asynchronous, or from asynchronous to synchronous. If you go from synchronous to asynchronous, your latency and application availability improve. If you go from asynchronous to synchronous, your secondary becomes configured as synchronous after lag reaches zero. If you're running with a continual lag for whatever reason, then you might need to pause your publishers in order for lag to reach zero and your mode to be able to switch to synchronous.
The general reasons to have synchronous replication enabled are tied to the importance of the data, specific business needs, or compliance reasons. If your primary goal is application availability rather than data assurance, then asynchronous consistency is likely the better choice.
To enable the Geo-replication feature, you need to use a primary and secondary r
The Geo-replication feature depends on being able to replicate published events from the primary to the secondary region. If the secondary region is on another continent, it has a major impact on replication lag from the primary to the secondary region. If using Geo-replication for availability and reliability reasons, you're best off with secondary regions being at least on the same continent where possible. To get a better understanding of the latency induced by geographic distance, you can learn more from [Azure network round-trip latency statistics | Microsoft Learn](../networking/azure-network-latency.md). ## Geo-replication management
-The Geo-replication feature enables customers to configure a secondary region to replicate configuration and data to. Customers can:
-- Configure Geo-replication- Secondary regions can be configured on any existing namespace in a self-serve dedicated cluster in a region with the Geo-replication feature set enabled. It can also be configured during namespace creation on the same dedicated clusters. To select a secondary region, you must have a dedicated cluster available in that secondary region and the secondary region also must have the Geo-replication feature set enabled for that region.-- Configure the replication consistency - Synchronous and asynchronous replication is set when Geo-replication is configured but can also be switched afterwards. With asynchronous consistency, you can configure the amount of time that a secondary region is allowed to lag.-- Trigger promotion/failover - All promotions, or failovers are customer initiated. During promotion you can choose to make it Forced from the start, or even change your mind after a promotion has started and make it forced.-- Remove a secondary - If at any time you want to remove the geo-pairing between primary and secondary regions, you can do so and the data in the secondary region will be deleted.
+The Geo-replication feature enables you to configure a secondary region to replicate configuration and data to. You can:
+
+- **Configure Geo-replication** - Secondary regions can be configured on any existing namespace in a self-serve dedicated cluster in a region with the Geo-replication feature set enabled. It can also be configured during namespace creation on the same dedicated clusters. To select a secondary region, you must have a dedicated cluster available in that secondary region, and the secondary region must also have the Geo-replication feature set enabled.
+- **Configure the replication consistency** - Synchronous and asynchronous replication is set when Geo-replication is configured but can also be switched afterwards. With asynchronous consistency, you can configure the amount of time that a secondary region is allowed to lag.
+- **Trigger promotion/failover** - All promotions or failovers are customer initiated. During promotion, you can choose to make it forced from the start, or change your mind after a promotion starts and make it forced.
+- **Remove a secondary** - If at any time you want to remove the geo-pairing between primary and secondary regions, you can do so and the data in the secondary region will be deleted.
## Monitoring data replication

Users can monitor the progress of the replication job by monitoring the replication lag metric in Application Metrics logs.

- Enable Application Metrics logs in your Event Hubs namespace by following [Monitoring Azure Event Hubs - Azure Event Hubs | Microsoft Learn](./monitor-event-hubs.md).
- Once Application Metrics logs are enabled, you need to produce and consume data from the namespace for a few minutes before you start to see the logs.
-- To view Application Metrics logs, navigate to the Monitoring section of Event Hubs and click on the 'Logs' blade. You can use the following query to find the replication lag (in seconds) between the primary and secondary namespaces.
-```
- AzureDiagnostics
- | where TimeGenerated > ago(1h)
- | where Category == "ApplicationMetricsLogs"
- | where ActivityName_s == "ReplicationLag
-```
-- The column count_d indicates the replication lag in seconds between the primary and secondary region.
+- To view Application Metrics logs, navigate to the **Monitoring** section of the Event Hubs page, and select **Logs** on the left menu. You can use the following query to find the replication lag (in seconds) between the primary and secondary namespaces.
+
+ ```kusto
+ AzureDiagnostics
+ | where TimeGenerated > ago(1h)
+ | where Category == "ApplicationMetricsLogs"
  | where ActivityName_s == "ReplicationLag"
+ ```
+- The column `count_d` indicates the replication lag in seconds between the primary and secondary region.
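
If you prefer to retrieve the same metric programmatically, the query above can also be run with the Azure Monitor Query client library. The following Python sketch is a minimal example, assuming your diagnostic logs flow to a Log Analytics workspace; the workspace ID and the 60-second threshold are illustrative placeholders, not values from this article.

```python
# pip install azure-identity azure-monitor-query
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient, LogsQueryStatus

WORKSPACE_ID = "<log-analytics-workspace-id>"  # placeholder: your workspace ID
LAG_THRESHOLD_SECONDS = 60                     # illustrative threshold only

# Same query as above; the count_d column carries the lag in seconds.
QUERY = """
AzureDiagnostics
| where Category == "ApplicationMetricsLogs"
| where ActivityName_s == "ReplicationLag"
| project TimeGenerated, count_d
| order by TimeGenerated desc
"""

client = LogsQueryClient(DefaultAzureCredential())
response = client.query_workspace(WORKSPACE_ID, QUERY, timespan=timedelta(hours=1))

if response.status == LogsQueryStatus.SUCCESS:
    for table in response.tables:
        for row in table.rows:
            lag = row["count_d"]
            flag = "  <-- above threshold" if lag > LAG_THRESHOLD_SECONDS else ""
            print(f"{row['TimeGenerated']}: lag = {lag} s{flag}")
```

A check like this could feed an alerting script, though for production scenarios an Azure Monitor alert rule on the same query is the more typical choice.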
## Publishing Data

Event publishing applications can publish data to geo-replicated namespaces via the stable namespace FQDN of the geo-replicated namespace. The event publishing approach is the same as in the non-Geo DR case, and no changes to client applications are required.
-Event publishing may not be available during the following circumstances:
-- During Failover grace period, the existing primary region rejects any new events that are published to Event Hubs.
-- When replication lag between primary and secondary regions reaches the max replication lag duration, the publisher ingress workload may get throttled.
+Event publishing might not be available in the following circumstances:
+
+- During the failover grace period, the existing primary region rejects any new events that are published to the event hub.
+- When replication lag between primary and secondary regions reaches the max replication lag duration, the publisher ingress workload might get throttled.
Publisher applications can't directly access any namespaces in the secondary regions.
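
For illustration, here's a minimal Python publisher sketch using the azure-eventhub package. The namespace FQDN and event hub name are hypothetical placeholders; the point it shows is that the client targets the stable namespace FQDN, and that publish errors during a failover grace period or lag-based throttling should be treated as retryable.

```python
# pip install azure-identity azure-eventhub
from azure.identity import DefaultAzureCredential
from azure.eventhub import EventHubProducerClient, EventData
from azure.eventhub.exceptions import EventHubError

# Placeholders: the stable namespace FQDN doesn't change when a secondary is promoted.
FQDN = "<your-namespace>.servicebus.windows.net"
EVENT_HUB = "<your-event-hub>"

producer = EventHubProducerClient(
    fully_qualified_namespace=FQDN,
    eventhub_name=EVENT_HUB,
    credential=DefaultAzureCredential(),
)

try:
    with producer:
        batch = producer.create_batch()
        batch.add(EventData("sample telemetry payload"))
        producer.send_batch(batch)
except EventHubError as err:
    # Sends can be rejected during the failover grace period, or throttled when
    # replication lag reaches the configured maximum; back off and retry.
    print(f"Publish failed; retry with backoff: {err}")
```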
-**Consuming Data**
+## Consuming Data

Event consuming applications can consume data using the stable namespace FQDN of a geo-replicated namespace. Consumer operations aren't supported from when the failover is initiated until it's completed.

### Checkpointing/Offset Management
Event consuming applications can consume data using the stable namespace FQDN of a geo-replicated namespace. The consumer operations aren't supported, from when the failover is initiated until it's completed. ### Checkpointing/Offset Management
Event consuming applications can continue to maintain offset management as they normally would.
**Kafka**
-Offset are committed to Event Hubs directly and offsets are replicated across regions. Therefore, consumers can start consuming from where it left off in the primary region.
+Offsets are committed to Event Hubs directly and are replicated across regions. Therefore, consumers can start consuming from where they left off in the primary region.
**Event Hubs SDK/AMQP**

Clients that use the Event Hubs SDK need to upgrade to the April 2024 version of the SDK. The latest version of the Event Hubs SDK supports failover with an update to the checkpoint. The checkpoint is managed by users with a checkpoint store, such as Azure Blob storage or a custom storage solution. If there's a failover, the checkpoint store must be available from the secondary region so that clients can retrieve checkpoint data and avoid loss of messages.
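
As an illustration of the checkpoint-store pattern, here's a minimal Python consumer sketch using azure-eventhub with the Blob checkpoint store package. Connection strings and names are hypothetical placeholders, and the sketch assumes the storage account holding the checkpoints is reachable from the secondary region (for example, a geo-redundant storage account).

```python
# pip install azure-eventhub azure-eventhub-checkpointstoreblob
from azure.eventhub import EventHubConsumerClient
from azure.eventhub.extensions.checkpointstoreblob import BlobCheckpointStore

# Placeholders; keep the checkpoint container reachable from the secondary region.
checkpoint_store = BlobCheckpointStore.from_connection_string(
    "<storage-connection-string>", container_name="checkpoints"
)

client = EventHubConsumerClient.from_connection_string(
    "<event-hubs-connection-string>",  # resolves via the stable namespace FQDN
    consumer_group="$Default",
    eventhub_name="<your-event-hub>",
    checkpoint_store=checkpoint_store,
)

def on_event(partition_context, event):
    print(f"Partition {partition_context.partition_id}: {event.body_as_str()}")
    # Record progress so processing resumes from here after a restart or failover.
    partition_context.update_checkpoint(event)

with client:
    client.receive(on_event=on_event, starting_position="-1")  # "-1": from start
```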
## Pricing

-Event Hubs dedicated clusters are priced independently of geo-replication. Use of geo-replication with Event Hubs dedicated requires you to have at least two dedicated clusters in separate regions. The dedicated clusters used as secondary instances for geo-replication can be used for other workloads.
-There is a charge for geo-replication based on the published bandwidth * the number of secondary regions. The geo-replication charge is waived in early public preview.
+Event Hubs dedicated clusters are priced independently of geo-replication. Use of geo-replication with Event Hubs dedicated requires you to have at least two dedicated clusters in separate regions. The dedicated clusters used as secondary instances for geo-replication can be used for other workloads. There's a charge for geo-replication based on the published bandwidth * the number of secondary regions. The geo-replication charge is waived in early public preview.
+
+## Related content
+To learn how to use the Geo-replication feature, see [Use Geo-replication](use-geo-replication.md).
event-hubs Process Data Azure Stream Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/process-data-azure-stream-analytics.md
Title: Process data from Event Hubs Azure using Stream Analytics | Microsoft Docs
+ Title: Process data using Stream Analytics
description: This article shows you how to process data from your Azure event hub using an Azure Stream Analytics job. Previously updated : 05/22/2023 Last updated : 06/26/2024
+#customer intent: As a developer, I want to know how to process event data in an event hub using an Azure Stream Analytics job.
Here are the key benefits of Azure Event Hubs and Azure Stream Analytics integration:
> [!IMPORTANT]
> - If you aren't a member of [owner](../role-based-access-control/built-in-roles.md#owner) or [contributor](../role-based-access-control/built-in-roles.md#contributor) roles at the Azure subscription level, you must be a member of the [Stream Analytics Query Tester](../role-based-access-control/built-in-roles.md#stream-analytics-query-tester) role at the Azure subscription level to successfully complete the steps in this section. This role allows you to perform testing queries without creating a Stream Analytics job first. For instructions on assigning a role to a user, see [Assign AD roles to users](../active-directory/roles/manage-roles-portal.md).
> - If your event hub allows only private access via private endpoints, you must join the Stream Analytics job to the same network so that the job can access events in the event hub.

1. Sign in to the [Azure portal](https://portal.azure.com).
1. Navigate to your **Event Hubs namespace**, and then navigate to the **event hub** that has the incoming data.
-1. Select **Process Data** on the event hub page or select **Process data** on the left menu.
-
- :::image type="content" source="./media/process-data-azure-stream-analytics/process-data-tile.png" alt-text="Screenshot showing the Process data page for the event hub." lightbox="./media/process-data-azure-stream-analytics/process-data-tile.png":::
-1. Select **Start** on the **Enable real-time insights from events** tile.
+1. On the left navigation menu, expand **Features**, select **Process data**, and then select **Start** on the **Enable real time insights from events** tile.
:::image type="content" source="./media/process-data-azure-stream-analytics/process-data-page-explore-stream-analytics.png" alt-text="Screenshot showing the Process data page with Enable real time insights from events tile selected." lightbox="./media/process-data-azure-stream-analytics/process-data-page-explore-stream-analytics.png":::
-1. You see a query page with values already set for the following fields:
+1. You see a query page with values already set for the following fields. If you see a popup window about a consumer group and a policy being created for you, select **OK**. You immediately see a snapshot of the latest incoming data in this tab.
    1. Your **event hub** as an input for the query.
    1. A sample **SQL query** with a SELECT statement.
    1. An **output** alias to refer to your query test results.
- :::image type="content" source="./media/process-data-azure-stream-analytics/query-editor.png" alt-text="Screenshot showing the Query editor for your Stream Analytics query." lightbox="./media/process-data-azure-stream-analytics/query-editor.png":::
-
- > [!NOTE]
- > When you use this feature for the first time, this page asks for your permission to create a consumer group and a policy for your event hub to preview incoming data.
-1. Select **Create** in the **Input preview** pane as shown in the preceding image.
-1. You immediately see a snapshot of the latest incoming data in this tab.
+ :::image type="content" source="./media/process-data-azure-stream-analytics/query-editor.png" alt-text="Screenshot showing the Query editor for your Stream Analytics query." lightbox="./media/process-data-azure-stream-analytics/query-editor.png":::
+ - The serialization type in your data is automatically detected (JSON/CSV). You can also manually change it to JSON/CSV/AVRO.
 - You can preview incoming data in the table format or the raw format.
 - If the data shown isn't current, select **Refresh** to see the latest events.
- Here's an example of data in the **table format**:
-
- :::image type="content" source="./media/process-data-azure-stream-analytics/snapshot-results.png" alt-text="Screenshot of the Input preview window in the result pane of the Process data page in a table format." lightbox="./media/process-data-azure-stream-analytics/snapshot-results.png":::
-
- Here's an example of data in the **raw format**:
+ - In the preceding image, the results are shown in the table format. To see the raw data, select **Raw**.
   :::image type="content" source="./media/process-data-azure-stream-analytics/snapshot-results-raw-format.png" alt-text="Screenshot of the Input preview window in the result pane of the Process data page in the raw format." lightbox="./media/process-data-azure-stream-analytics/snapshot-results-raw-format.png":::
1. Select **Test query** to see the snapshot of test results of your query in the **Test results** tab. You can also download the results.

   :::image type="content" source="./media/process-data-azure-stream-analytics/test-results.png" alt-text="Screenshot of the Input preview window in the result pane with test results." lightbox="./media/process-data-azure-stream-analytics/test-results.png":::
-1. Write your own query to transform the data. See [Stream Analytics Query Language reference](/stream-analytics-query/stream-analytics-query-language-reference).
-1. Once you've tested the query and you want to move it in to production, select **Create Stream Analytics job**.
+
+ Write your own query to transform the data. See [Stream Analytics Query Language reference](/stream-analytics-query/stream-analytics-query-language-reference).
+1. Once you've tested the query and you want to move it into production, select **Create Stream Analytics job**.
:::image type="content" source="./media/process-data-azure-stream-analytics/create-job-link.png" alt-text="Screenshot of the Query page with the Create Stream Analytics job link selected."::: 1. On the **New Stream Analytics job** page, follow these steps:
Your Azure Stream Analytics job defaults to three streaming units (SUs). To adjust the number of streaming units, select **Scale** on the left menu of the **Stream Analytics job** page.
:::image type="content" source="./media/process-data-azure-stream-analytics/scale.png" alt-text="Screenshots showing the Scale page for a Stream Analytics job." lightbox="./media/process-data-azure-stream-analytics/scale.png":::
-## Next steps
+## Related content
To learn more about Stream Analytics queries, see [Stream Analytics Query Language](/stream-analytics-query/built-in-functions-azure-stream-analytics).
event-hubs Use Geo Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/use-geo-replication.md
Title: 'How to use Azure Event Hubs geo-replication' description: 'This article describes how to use the Azure Event Hubs geo-replication feature' Last updated 06/10/2024

# How to use Geo-replication (Public Preview)

This tutorial shows you how to use Geo-replication with your Event Hubs Dedicated namespace. To learn more about this feature, read the Geo-replication article. In this article, you learn how to:

- Enable Geo-replication on a new namespace.
- Enable Geo-replication on an existing namespace.
- Perform a planned promotion or failover.
To use the Geo-replication feature, you need to have at least one Dedicated Event Hubs cluster.
You can enable Geo-replication during or after namespace creation. To enable Geo-replication on a namespace during namespace creation:
-1. Click on 'Namespace' to create a new Event Hubs namespace in an Event Hubs cluster in a region with Geo-replication enabled. Provide a name for the namespace and check the Enable Geo-replication box.
+1. Navigate to the **Event Hubs Cluster** page for your Event Hubs cluster.
+1. On the left menu, expand **Entities**, and select **Cluster Namespaces**.
+1. To create an Event Hubs namespace in an Event Hubs cluster in a region with Geo-replication enabled, on the **Cluster Namespaces** page, on the toolbar, select **+ Namespace**. Provide a name for the namespace, and select **Enable Geo-replication**.
- :::image type="content" source="./media/use-geo-replication/namespace-create.png" alt-text="Screenshot of dedicated namespace create UI with geo-replication UI.":::
-
-2. Click on Add secondary region and select a secondary region and a corresponding Event Hubs Dedicated cluster running in that region.
+ :::image type="content" source="./media/use-geo-replication/namespace-create.png" alt-text="Screenshot of dedicated namespace create UI with geo-replication UI.":::
+2. Select **Add secondary region**, and select a secondary region and a corresponding Event Hubs dedicated cluster running in that region.
- :::image type="content" source="./media/use-geo-replication/region-selection.png" alt-text="Screenshot of secondary region and cluster selection in namespace create UI.":::
+ :::image type="content" source="./media/use-geo-replication/region-selection.png" alt-text="Screenshot of secondary region and cluster selection in namespace create UI.":::
-3. Select asynchronous or synchronous replication mode as the replication consistency mode. If selecting asynchronous consistency, enter the allowable amount of time the secondary region can lag behind the primary region in minutes.
+3. Select asynchronous or synchronous **replication mode** as the replication consistency mode. If you select asynchronous consistency, enter the allowable amount of time the secondary region can lag behind the primary region in minutes.
- :::image type="content" source="./media/use-geo-replication/create-replication-consistency.png" alt-text="Screenshot of replication consistency UI in dedicated namespace create UI.":::
-
-4. Then click on 'Create' to create the Geo Replicated Event Hubs namespace. The deployment takes a couple of minutes to complete.
-5. Once the namespace is created, you can navigate to it and click on "Geo-replication" tab to see your Geo-replication configuration.
+ :::image type="content" source="./media/use-geo-replication/create-replication-consistency.png" alt-text="Screenshot of replication consistency UI in dedicated namespace create UI.":::
+4. Then, select **Create** to create the Geo-replicated Event Hubs namespace. The deployment takes a couple of minutes to complete.
+5. Once the namespace is created, you can navigate to it and select **Geo-replication** on the left menu to see your Geo-replication configuration.
- :::image type="content" source="./media/use-geo-replication/geo-replication.png" alt-text="Screenshot of geo-replication UI that shows configuration and allows various actions.":::
+ :::image type="content" source="./media/use-geo-replication/geo-replication.png" alt-text="Screenshot of geo-replication UI that shows configuration and allows various actions.":::
## Enable Geo-replication on an existing namespace
-1. Go into the namespace in the portal and click on "Geo-replication".
-2. Click on Add secondary region and select a secondary region and the corresponding Event Hubs Dedicated clusters running in that region.
-3. Select asynchronous or synchronous replication mode as the replication consistency mode. If selecting asynchronous consistency, enter the allowable amount of time the secondary region can lag behind the primary region in minutes.
- :::image type="content" source="./media/use-geo-replication/geo-replication-consistency.png" alt-text="Screenshot of replication consistency UI in geo-replication UI.":::
+1. Navigate to your Event Hubs namespace in the Azure portal, and select **Geo-replication** on the left menu.
+2. Select **Add secondary region**, and select a secondary region and the corresponding Event Hubs Dedicated clusters running in that region.
+3. Select asynchronous or synchronous replication mode as the replication consistency mode. If you select asynchronous consistency, enter the allowable amount of time the secondary region can lag behind the primary region in minutes.
+
+ :::image type="content" source="./media/use-geo-replication/geo-replication-consistency.png" alt-text="Screenshot of replication consistency UI in geo-replication UI.":::
-After a secondary region is added, all of the data held in the primary namespace is replicated to the secondary. Complete replication can take a while depending on various factors with the main one being how much data is in your primary namespace. Users can observe replication progress by monitoring the lag to the secondary region.
+ After a secondary region is added, all of the data held in the primary namespace is replicated to the secondary. Complete replication can take a while, depending on various factors; the main one is how much data is in your primary namespace. You can observe replication progress by monitoring the lag to the secondary region.
## Promote secondary

You can promote your configured secondary region to be the primary region. When you promote a secondary region to primary, the current primary region becomes the secondary region. A promotion can be planned or forced. Planned promotions ensure both regions are caught up before accepting new traffic. Forced promotions take effect as quickly as possible and don't wait for the regions to be caught up. To initiate a promotion of your secondary region to primary, select the failover icon.
- :::image type="content" source="./media/use-geo-replication/promotion-a.png" alt-text="Screenshot of the promotion UI selection in the geo-replication UI.":::
-When in the promotion flow, you can select planned or forced. You can also choose to select forced after starting a planned promotion. Enter the word "promote" in the prompt to be able to start the promotion.
+When in the promotion flow, you can select planned or forced. You can also choose to select forced after starting a planned promotion. Enter the word **promote** in the prompt to be able to start the promotion.
- :::image type="content" source="./media/use-geo-replication/promotion.png" alt-text="Screenshot of the promotion UI in where you can select planned or forced.":::
-If doing a planned promotion, then once the promotion process is initiated, the new primary rejects any new events until failover is completed. The promotion process repoints the fully qualified domain name(FQDN) for your namespace to the selected region, complete data replication between the two regions and configure the new primary region to be active. Promotion does not require any changes to clients, and that they continue to work after the promotion event.
+If you do a planned promotion, then once the promotion process is initiated, the new primary rejects any new events until failover is completed. The promotion process repoints the fully qualified domain name (FQDN) for your namespace to the selected region, completes data replication between the two regions, and configures the new primary region to be active. Promotion doesn't require any changes to clients; they continue to work after the promotion event.
If your primary region goes down completely, you can still perform a forced promotion.

## Remove a secondary
-To remove a Geo-replication pairing with a secondary, go into "Geo-replication", select the secondary region, and then select remove. At the prompt, enter the word "delete" and then you can delete the secondary.
+To remove a Geo-replication pairing with a secondary, select **Geo-replication** on the left menu, select the secondary region, and then select **Remove**. At the prompt, enter the word **delete**, and then you can delete the secondary.
- :::image type="content" source="./media/use-geo-replication/remove-secondary.png" alt-text="Screenshot of the remove secondary function in the geo-replication UI.":::
When a secondary region is removed, all of the data that it held is also removed. If you wish to re-enable Geo-replication with that region and cluster, it has to replicate the primary region data all over again.
+## Related content
+For conceptual information about the Geo-replication feature, see [Azure Event Hubs geo-replication](geo-replication.md).
expressroute About Fastpath https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/about-fastpath.md
ExpressRoute virtual network gateway is designed to exchange network routes and route network traffic.
### Circuits
-FastPath is available on all ExpressRoute circuits. Support for virtual network peering and UDR over FastPath is now generally available. Limited general availability (GA) support for Private Endpoint/Private Link connectivity is only available for connections associated to ExpressRoute Direct circuits.
+FastPath is available on all ExpressRoute circuits. Support for virtual network peering and UDR over FastPath is now generally available, but only for connections associated with ExpressRoute Direct circuits. Limited general availability (GA) support for Private Endpoint/Private Link connectivity is available only for connections associated with ExpressRoute Direct circuits.
### Gateways
While FastPath supports most configurations, it doesn't support the following features:
> [!NOTE]
> * ExpressRoute Direct has a cumulative limit at the port level.
> * Traffic flows through the ExpressRoute gateway when these IP limits are reached.
+> * You can configure alerts through Azure Monitor to notify you when the [number of FastPath routes](expressroute-monitoring-metrics-alerts.md#fastpath-routes-count-at-circuit-level) is nearing the threshold limit.
## Limited General Availability (GA)

FastPath support for Private Endpoint/Private Link connectivity is available for limited scenarios for 100/10-Gbps ExpressRoute Direct connections. Virtual Network Peering and UDR support are available globally across all Azure regions. Private Endpoint/Private Link connectivity is available in the following Azure regions:

- Australia East
- East Asia
For more information about supported scenarios and to enroll in the limited GA o
## Next steps

-- To enable FastPath, see [Configure ExpressRoute FastPath](expressroute-howto-linkvnet-arm.md#configure-expressroute-fastpath).
+- To enable FastPath, see how to configure ExpressRoute FastPath using the [Azure portal](expressroute-howto-linkvnet-portal-resource-manager.md#configure-expressroute-fastpath) or [Azure PowerShell](expressroute-howto-linkvnet-arm.md#configure-expressroute-fastpath).
expressroute Expressroute Howto Add Ipv6 Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-add-ipv6-cli.md
This article describes how to add IPv6 support to connect via ExpressRoute to yo
* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
* Install the latest version of the CLI commands (2.0 or later). For information about installing the CLI commands, see [Install the Azure CLI](/cli/azure/install-azure-cli) and [Get Started with Azure CLI](/cli/azure/get-started-with-azure-cli).

## Add IPv6 Private Peering to your ExpressRoute circuit
expressroute Expressroute Howto Circuit Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-circuit-arm.md
This quickstart shows you how to create an ExpressRoute circuit using PowerShell
* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
* Azure PowerShell installed locally or Azure Cloud Shell

## <a name="create"></a>Create and provision an ExpressRoute circuit
expressroute Expressroute Howto Routing Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-routing-arm.md
In this tutorial, you learn how to:
* [Workflows](expressroute-workflows.md)
* You must have an active ExpressRoute circuit. Follow the instructions to [Create an ExpressRoute circuit](expressroute-howto-circuit-arm.md) and have the circuit enabled by your connectivity provider before you continue. The ExpressRoute circuit must be in a provisioned and enabled state for you to run the cmdlets in this article.

## <a name="msft"></a>Microsoft peering
expressroute Expressroute Migration Classic Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-migration-classic-resource-manager.md
This article explains how to migrate ExpressRoute-associated virtual networks fr
## Before you begin

* Verify that you have the latest versions of the Azure PowerShell modules. For more information, see [How to install and configure Azure PowerShell](/powershell/azure/). To install the PowerShell classic deployment model module (which is needed for the classic deployment model), see [Installing the Azure PowerShell classic deployment model Module](/powershell/azure/servicemanagement/install-azure-ps).
* Make sure that you review the [prerequisites](expressroute-prerequisites.md), [routing requirements](expressroute-routing.md), and [workflows](expressroute-workflows.md) before you begin configuration.
expressroute How To Configure Connection Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/how-to-configure-connection-monitor.md
This article helps you configure a Connection Monitor extension to monitor ExpressRoute. Connection Monitor is a cloud-based network monitoring solution that monitors connectivity between Azure cloud deployments and on-premises locations (branch offices, and so on). Connection Monitor is part of Azure Monitor logs. The extension also lets you monitor network connectivity for your private and Microsoft peering connections. When you configure Connection Monitor for ExpressRoute, you can detect network issues and identify and eliminate them. With Connection Monitor for ExpressRoute, you can:
expressroute How To Npm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/how-to-npm.md
This article helps you configure a Network Performance Monitor extension to moni
> [!IMPORTANT]
> Starting 1 July 2021, you will not be able to add new tests in an existing workspace or enable a new workspace in Network Performance Monitor. You will also not be able to add new connection monitors in Connection Monitor (classic). You can continue to use the tests and connection monitors created prior to 1 July 2021. To minimize service disruption to your current workloads, [migrate your tests from Network Performance Monitor](../network-watcher/migrate-to-connection-monitor-from-network-performance-monitor.md) or [migrate from Connection Monitor (classic)](../network-watcher/migrate-to-connection-monitor-from-connection-monitor-classic.md) to the new Connection Monitor in Azure Network Watcher before February 29, 2024.

You can:
expressroute How To Routefilter Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/how-to-routefilter-cli.md
To successfully connect to services through Microsoft peering, you must complete
* [Create an ExpressRoute circuit](howto-circuit-cli.md) and have the circuit enabled by your connectivity provider before you continue. The ExpressRoute circuit must be in a provisioned and enabled state.
* [Create Microsoft peering](howto-routing-cli.md) if you manage the BGP session directly. Or, have your connectivity provider provision Microsoft peering for your circuit.

If you choose to install and use the CLI locally, this tutorial requires Azure CLI version 2.0.28 or later. To find the version, run `az --version`. If you need to install or upgrade, see [Install the Azure CLI](/cli/azure/install-azure-cli).
expressroute How To Routefilter Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/how-to-routefilter-powershell.md
To attach route filters with Microsoft 365 services, you must have authorization
- [Create an ExpressRoute circuit](expressroute-howto-circuit-arm.md) and have the circuit enabled by your connectivity provider before you continue. The ExpressRoute circuit must be in a provisioned and enabled state.
- [Create Microsoft peering](expressroute-circuit-peerings.md) if you manage the BGP session directly. Or, have your connectivity provider provision Microsoft peering for your circuit.

### Sign in to your Azure account and select your subscription
expressroute Howto Circuit Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/howto-circuit-cli.md
This quickstart describes how to create an Azure ExpressRoute circuit by using t
* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
* Install the latest version of the CLI commands (2.0 or later). For information about installing the CLI commands, see [Install the Azure CLI](/cli/azure/install-azure-cli) and [Get Started with Azure CLI](/cli/azure/get-started-with-azure-cli).

## <a name="create"></a>Create and provision an ExpressRoute circuit
expressroute Quickstart Create Expressroute Vnet Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/quickstart-create-expressroute-vnet-bicep.md
This quickstart describes how to use Bicep to create an ExpressRoute circuit wit
:::image type="content" source="media/expressroute-howto-circuit-portal-resource-manager/environment-diagram.png" alt-text="Diagram of ExpressRoute circuit deployment environment using bicep." lightbox="media/expressroute-howto-circuit-portal-resource-manager/environment-diagram.png":::

## Prerequisites
expressroute Quickstart Create Expressroute Vnet Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/quickstart-create-expressroute-vnet-template.md
This quickstart describes how to use an Azure Resource Manager template (ARM tem
:::image type="content" source="media/expressroute-howto-circuit-portal-resource-manager/environment-diagram.png" alt-text="Diagram of ExpressRoute circuit deployment environment using ARM template. " lightbox="media/expressroute-howto-circuit-portal-resource-manager/environment-diagram.png"::: If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template opens in the Azure portal.
firewall-manager Create Policy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/create-policy-powershell.md
In this quickstart, you use Azure PowerShell to create an Azure Firewall policy
- If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.

## Sign in to Azure
firewall-manager Quick Firewall Policy Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/quick-firewall-policy-bicep.md
In this quickstart, you use Bicep to create an Azure Firewall and a firewall pol
Also, IP Groups are used in the rules to define the **Source** IP addresses. For information about Azure Firewall Manager, see [What is Azure Firewall Manager?](overview.md).
firewall-manager Quick Firewall Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/quick-firewall-policy.md
In this quickstart, you use an Azure Resource Manager template (ARM template) to
Also, IP Groups are used in the rules to define the **Source** IP addresses. For information about Azure Firewall Manager, see [What is Azure Firewall Manager?](overview.md).
firewall-manager Quick Secure Virtual Hub Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/quick-secure-virtual-hub-bicep.md
In this quickstart, you use Bicep to secure your virtual hub using Azure Firewall Manager. The deployed firewall has an application rule that allows connections to `www.microsoft.com`. Two Windows Server 2019 virtual machines are deployed to test the firewall. One jump server is used to connect to the workload server. From the workload server, you can only connect to `www.microsoft.com`. For more information about Azure Firewall Manager, see [What is Azure Firewall Manager?](overview.md).
firewall-manager Quick Secure Virtual Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/quick-secure-virtual-hub.md
In this quickstart, you use an Azure Resource Manager template (ARM template) to secure your virtual hub using Azure Firewall Manager. The deployed firewall has an application rule that allows connections to `www.microsoft.com`. Two Windows Server 2019 virtual machines are deployed to test the firewall. One jump server is used to connect to the workload server. From the workload server, you can only connect to `www.microsoft.com`. For more information about Azure Firewall Manager, see [What is Azure Firewall Manager?](overview.md).
firewall Deploy Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/deploy-bicep.md
In this quickstart, you use Bicep to deploy an Azure Firewall in three Availability Zones. The Bicep file creates a test network environment with a firewall. The network has one virtual network (VNet) with three subnets: *AzureFirewallSubnet*, *ServersSubnet*, and *JumpboxSubnet*. The *ServersSubnet* and *JumpboxSubnet* subnets each have a single, two-core Windows Server virtual machine.
firewall Deploy Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/deploy-cli.md
In this article, you learn how to:
If you prefer, you can complete this procedure using the [Azure portal](tutorial-firewall-deploy-portal.md) or [Azure PowerShell](deploy-ps.md).

[!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
az vm create \
  --admin-username azureadmin
```

## Deploy the firewall
firewall Deploy Ps Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/deploy-ps-policy.md
For this article, you create a simplified single VNet with three subnets for eas
For more information about Azure Bastion, see [What is Azure Bastion?](../bastion/bastion-overview.md) > [!IMPORTANT]
-> [!INCLUDE [Pricing](../../includes/bastion-pricing.md)]
+> [!INCLUDE [Pricing](~/reusable-content/ce-skilling/azure/includes/bastion-pricing.md)]
:::image type="content" source="media/deploy-ps/tutorial-network.png" alt-text="Diagram that shows a firewall network infrastructure." lightbox="media/deploy-ps/tutorial-network.png":::
firewall Deploy Ps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/deploy-ps.md
For this article, you create a simplified single VNet with three subnets for eas
For more information about Azure Bastion, see [What is Azure Bastion?](../bastion/bastion-overview.md) > [!IMPORTANT]
-> [!INCLUDE [Pricing](../../includes/bastion-pricing.md)]
+> [!INCLUDE [Pricing](~/reusable-content/ce-skilling/azure/includes/bastion-pricing.md)]
:::image type="content" source="media/deploy-ps/tutorial-network.png" alt-text="Diagram that shows a firewall network infrastructure." lightbox="media/deploy-ps/tutorial-network.png":::
$VirtualMachine = Set-AzVMSourceImage -VM $VirtualMachine -PublisherName 'Micros
New-AzVM -ResourceGroupName Test-FW-RG -Location "East US" -VM $VirtualMachine -Verbose
```

## Deploy the firewall
firewall Deploy Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/deploy-template.md
In this quickstart, you use an Azure Resource Manager template (ARM template) to deploy an Azure Firewall in three Availability Zones. The template creates a test network environment with a firewall. The network has one virtual network (VNet) with three subnets: *AzureFirewallSubnet*, *ServersSubnet*, and *JumpboxSubnet*. The *ServersSubnet* and *JumpboxSubnet* subnets each have a single, two-core Windows Server virtual machine.
firewall Firewall Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/firewall-diagnostics.md
You can monitor Azure Firewall using firewall logs. You can also use activity lo
You can access some of these logs through the portal. Logs can be sent to [Azure Monitor logs](/previous-versions/azure/azure-monitor/insights/azure-networking-analytics), Storage, and Event Hubs and analyzed in Azure Monitor logs or by different tools such as Excel and Power BI.

## Prerequisites
firewall Firewall Sftp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/firewall-sftp.md
In this article, you:
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. This article requires the latest Azure PowerShell modules. Run `Get-Module -ListAvailable Az` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Login-AzAccount` to create a connection with Azure.
firewall Premium Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/premium-deploy.md
You'll use a template to deploy a test environment that has a central VNet (10.0
- a firewall subnet (10.0.100.0/24)

> [!IMPORTANT]
-> [!INCLUDE [Pricing](../../includes/bastion-pricing.md)]
+> [!INCLUDE [Pricing](~/reusable-content/ce-skilling/azure/includes/bastion-pricing.md)]
A single central VNet is used in this test environment for simplicity. For production purposes, a [hub and spoke topology](/azure/architecture/reference-architectures/hybrid-networking/hub-spoke) with peered VNets is more common.
firewall Quick Create Ipgroup Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/quick-create-ipgroup-bicep.md
In this quickstart, you use a Bicep file to deploy an Azure Firewall with sample IP Groups used in a network rule and application rule. An IP Group is a top-level resource that allows you to define and group IP addresses, ranges, and subnets into a single object. IP Group is useful for managing IP addresses in Azure Firewall rules. You can either manually enter IP addresses or import them from a file.

## Prerequisites
firewall Quick Create Ipgroup Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/quick-create-ipgroup-template.md
In this quickstart, you use an Azure Resource Manager template (ARM template) to deploy an Azure Firewall with sample IP Groups used in a network rule and application rule. An IP Group is a top-level resource that allows you to define and group IP addresses, ranges, and subnets into a single object. This is useful for managing IP addresses in Azure Firewall rules. You can either manually enter IP addresses or import them from a file. If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
firewall Quick Create Multiple Ip Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/quick-create-multiple-ip-bicep.md
In this quickstart, you use a Bicep file to deploy an Azure Firewall with multip
:::image type="content" source="media/quick-create-multiple-ip-bicep/azure-firewall-multiple-ip.png" alt-text="Diagram showing the network configuration for this quickstart." lightbox="media/quick-create-multiple-ip-bicep/azure-firewall-multiple-ip.png"::: For more information about Azure Firewall with multiple public IP addresses, see [Deploy an Azure Firewall with multiple public IP addresses using Azure PowerShell](deploy-multi-public-ip-powershell.md).
firewall Quick Create Multiple Ip Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/quick-create-multiple-ip-template.md
In this quickstart, you use an Azure Resource Manager template (ARM template) to
:::image type="content" source="media/quick-create-multiple-ip-bicep/azure-firewall-multiple-ip.png" alt-text="Diagram showing the network configuration for this quickstart." lightbox="media/quick-create-multiple-ip-bicep/azure-firewall-multiple-ip.png"::: For more information about Azure Firewall with multiple public IP addresses, see [Deploy an Azure Firewall with multiple public IP addresses using Azure PowerShell](deploy-multi-public-ip-powershell.md).
firewall Tutorial Firewall Deploy Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/tutorial-firewall-deploy-portal.md
Now create the workload virtual machine, and place it in the **Workload-SN** sub
1. Review the settings on the summary page, and then select **Create**.
1. After the deployment is complete, select **Go to resource** and note the **Srv-Work** private IP address that you'll need to use later.

## Examine the firewall
firewall Tutorial Firewall Dnat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/tutorial-firewall-dnat.md
Review the summary, and then select **Create**. This takes a few minutes to complete.
After deployment finishes, note the private IP address for the virtual machine. It's used later when you configure the firewall. Select the virtual machine name. Select **Overview**, and under **Networking**, note the private IP address.

## Deploy the firewall
firewall Tutorial Hybrid Portal Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/tutorial-hybrid-portal-policy.md
This is a virtual machine that you use to connect using Remote Desktop to the pu
10. For **Boot diagnostics**, select **Disable**.
10. Select **Review+Create**, review the settings on the summary page, and then select **Create**.

## Test the firewall
firewall Tutorial Hybrid Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/tutorial-hybrid-portal.md
Create a virtual machine that you use to connect via remote access to the public
1. For **Boot diagnostics**, select **Disable**.
1. Select **Review+Create**, review the settings on the summary page, and then select **Create**.

## Test the firewall
firewall Tutorial Hybrid Ps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/tutorial-hybrid-ps.md
For this article, you create three virtual networks:
If you want to use the Azure portal instead to complete the procedures in this article, see [Deploy and configure Azure Firewall in a hybrid network by using the Azure portal](tutorial-hybrid-portal.md). ## Prerequisites
New-AzVm `
  -Size "Standard_DS2"
```

## Test the firewall
frontdoor Create Front Door Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/create-front-door-bicep.md
This quickstart describes how to use Bicep to create an Azure Front Door Standar
[!INCLUDE [ddos-waf-recommendation](../../includes/ddos-waf-recommendation.md)] ## Prerequisites
frontdoor Create Front Door Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/create-front-door-cli.md
In this quickstart, you learn how to create an Azure Front Door Standard/Premium
[!INCLUDE [ddos-waf-recommendation](../../includes/ddos-waf-recommendation.md)] [!INCLUDE [azure-cli-prepare-your-environment](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
frontdoor Create Front Door Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/create-front-door-powershell.md
In this quickstart, you'll learn how to create an Azure Front Door Standard/Prem
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
- Azure PowerShell installed locally or Azure Cloud Shell

## Create resource group
frontdoor Create Front Door Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/create-front-door-template.md
This quickstart describes how to use an Azure Resource Manager template (ARM Template) to create an Azure Front Door Standard/Premium with a Web App as origin. If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
frontdoor Front Door Custom Domain Https https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-custom-domain-https.md
In this tutorial, you learn how to:
> - Disable the HTTPS protocol on your custom domain ## Prerequisites
frontdoor Front Door Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-custom-domain.md
This article shows how to add a custom domain to your Front Door. When you use A
After you create a Front Door profile, the default frontend host is a subdomain of `azurefd.net`. This name is included in the URL for delivering Front Door content to your backend by default. For example, `https://contoso-frontend.azurefd.net`. For your convenience, Azure Front Door provides the option to associate a custom domain with the endpoint. With this capability, you can deliver your content with your own URL instead of the Front Door default domain name, such as `https://www.contoso.com/photo.png`.

> [!NOTE]
> Front Door does **not** support custom domains with [punycode](https://en.wikipedia.org/wiki/Punycode) characters.
frontdoor Front Door Waf https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-waf.md
In this tutorial, you learn how to:
> - Associate a WAF policy with Front Door. > - Configure a custom domain. ## Prerequisites
frontdoor Quickstart Create Front Door Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/quickstart-create-front-door-bicep.md
This quickstart describes how to use Bicep to create a Front Door to set up high availability for a web endpoint. ## Prerequisites
frontdoor Quickstart Create Front Door Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/quickstart-create-front-door-cli.md
The Front Door directs web traffic to specific resources in a backend pool. You
az extension add --name front-door
```

If you choose to install and use the CLI locally, this quickstart requires Azure CLI version 2.0.28 or later. To find the version, run `az --version`. If you need to install or upgrade, see [Install the Azure CLI](/cli/azure/install-azure-cli).
frontdoor Quickstart Create Front Door Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/quickstart-create-front-door-powershell.md
The Front Door directs web traffic to specific resources in a backend pool. You
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
- Azure PowerShell installed locally or Azure Cloud Shell

## Create resource group
frontdoor Quickstart Create Front Door Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/quickstart-create-front-door-template.md
This quickstart describes how to use an Azure Resource Manager template (ARM Template) to create a Front Door to set up high availability for a web endpoint. If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
frontdoor Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/scripts/custom-domain.md
This Azure CLI script example deploys a custom domain name and TLS certificate o
> [!IMPORTANT]
> This script requires that an Azure DNS public zone already exists for the domain name. For a tutorial, see [Host your domain in Azure DNS](../../dns/dns-delegate-domain-azure-dns.md).

[!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]

## Sample script

### Getting started
AZURE_DNS_ZONE_NAME=www.contoso.com AZURE_DNS_ZONE_RESOURCE_GROUP=contoso-rg ./d
## Clean up resources

```azurecli
az group delete --name $resourceGroup
```
frontdoor Front Door Add Rules Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/front-door-add-rules-cli.md
In this tutorial, you'll learn how to:
> - Create a rule and add it to the rule set. > - Add actions or conditions to your rules. [!INCLUDE [azure-cli-prepare-your-environment](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
frontdoor How To Cache Purge Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/how-to-cache-purge-powershell.md
Best practice is to make sure your users always obtain the latest copy of your a
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
- Azure PowerShell installed locally or Azure Cloud Shell
- Review [Caching with Azure Front Door](../front-door-caching.md) to understand how caching works.
- Have a functioning Azure Front Door profile. See [Create a Front Door - PowerShell](../create-front-door-powershell.md) to learn how to create one.
governance Create Blueprint Azurecli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/blueprints/create-blueprint-azurecli.md
In this tutorial, you learn to use Azure Blueprints to do some of the common tas
- If you've not used Azure Blueprints before, register the resource provider through the Azure CLI with `az provider register --namespace Microsoft.Blueprint`. ## Add the blueprint extension
governance Create Blueprint Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/blueprints/create-blueprint-powershell.md
In this tutorial, you learn to use Azure Blueprints to do some of the common tas
- If you've not used Azure Blueprints before, register the resource provider through Azure PowerShell with `Register-AzResourceProvider -ProviderNamespace Microsoft.Blueprint`. ## Create a blueprint
governance Create Blueprint Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/blueprints/create-blueprint-rest-api.md
In this tutorial, you learn to use Azure Blueprints to do some of the common tas
- Register the `Microsoft.Blueprint` resource provider. For directions, see [Resource providers and types](../../azure-resource-manager/management/resource-providers-and-types.md). ## Get started with REST API
governance Create Management Group Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/management-groups/create-management-group-azure-cli.md
directory. You receive a notification when the process is complete. For more inf
start using management groups, we allow the creation of the initial management groups at the root level.

### Create in the Azure CLI
governance Create Management Group Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/management-groups/create-management-group-dotnet.md
directory. You receive a notification when the process is complete. For more inf
start using management groups, we allow the creation of the initial management groups at the root level.

## Application setup
governance Create Management Group Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/management-groups/create-management-group-go.md
directory. You receive a notification when the process is complete. For more inf
start using management groups, we allow the creation of the initial management groups at the root level.

## Add the management group package
governance Create Management Group Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/management-groups/create-management-group-javascript.md
directory. You receive a notification when the process is complete. For more inf
start using management groups, we allow the creation of the initial management groups at the root level.

## Application setup
governance Create Management Group Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/management-groups/create-management-group-powershell.md
directory. You receive a notification when the process is complete. For more inf
start using management groups, we allow the creation of the initial management groups at the root level.

### Create in Azure PowerShell
governance Create Management Group Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/management-groups/create-management-group-python.md
directory. You receive a notification when the process is complete. For more inf
start using management groups, we allow the creation of the initial management groups at the root level.

## Add the Resource Graph library
governance Create Management Group Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/management-groups/create-management-group-rest-api.md
directory. You receive a notification when the process is complete. For more inf
start using management groups, we allow the creation of the initial management groups at the root level.

### Create in REST API
governance Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/management-groups/manage.md
Management groups give you enterprise-grade management at a large scale no matte
subscriptions you might have. To learn more about management groups, see [Organize your resources with Azure management groups](./overview.md).

> [!IMPORTANT]
> Azure Resource Manager user tokens and management group cache lasts for 30 minutes before they are
governance Assign Policy Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/assign-policy-bicep.md
In this quickstart, you use a Bicep file to create a policy assignment that validates resource compliance with an Azure policy. The policy is assigned to a resource group and audits virtual machines that don't use managed disks. After you create the policy assignment, you identify non-compliant virtual machines.

## Prerequisites
governance Assign Policy Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/assign-policy-template.md
In this quickstart, you use an Azure Resource Manager template (ARM template) to create a policy assignment that validates resource compliance with an Azure policy. The policy is assigned to a resource group and audits virtual machines that don't use managed disks. After you create the policy assignment, you identify non-compliant virtual machines. If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template opens in the Azure portal.
governance Route State Change Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/tutorials/route-state-change-events.md
send the events to a web app that collects and displays the messages.
`az --version`. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). ## Create a resource group
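If you don't already have a resource group for the tutorial's resources, a minimal sketch of creating one (the name and location are illustrative placeholders):

```azurecli
az group create --name <resourceGroupName> --location westus2
```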
governance First Query Azurecli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/first-query-azurecli.md
Title: "Quickstart: Run Resource Graph query using Azure CLI"
-description: In this quickstart, you run an Azure Resource Graph query using the extension for Azure CLI.
Previously updated : 04/22/2024
+description: In this quickstart, you run a Resource Graph query using Azure CLI and the resource-graph extension.
Last updated : 06/26/2024 # Quickstart: Run Resource Graph query using Azure CLI
-This quickstart describes how to run an Azure Resource Graph query using the extension for Azure CLI. The article also shows how to order (sort) and limit the query's results. You can run a query for resources in your tenant, management groups, or subscriptions. When you're finished, you can remove the extension.
+This quickstart describes how to run an Azure Resource Graph query using the Azure CLI and the Resource Graph extension. The article also shows how to order (sort) and limit the query's results. You can run a query for resources in your tenant, management groups, or subscriptions. When you finish, you can remove the extension.
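For example, once the extension is installed, a query that sorts and limits its results might look like the following sketch (the KQL is illustrative):

```azurecli
az graph query -q "Resources | project name, type | order by name asc | limit 5"
```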
## Prerequisites - If you don't have an Azure account, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. - [Azure CLI](/cli/azure/install-azure-cli) must be version 2.22.0 or higher for the Resource Graph extension.-- [Visual Studio Code](https://code.visualstudio.com/).
+- A Bash shell environment where you can run Azure CLI commands. For example, Git Bash in a [Visual Studio Code](https://code.visualstudio.com/) terminal session.
## Connect to Azure
az account set --subscription <subscriptionID>
## Install the extension
-To enable Azure CLI to query resources using Azure Resource Graph, the Resource Graph extension must be installed. You can manually install the extension with the following steps. Otherwise, the first time you run a query with `az graph` you're prompted to install the extension.
+To enable Azure CLI to query resources using Azure Resource Graph, the Resource Graph extension must be installed. The first time you run a query with `az graph` a prompt is displayed to install the extension. Otherwise, use the following steps to do a manual installation.
1. List the available extensions and versions:
governance First Query Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/first-query-rest-api.md
Resource Graph query.
If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account before you begin. ## Getting started with REST API
governance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/overview.md
In this documentation, you review each feature in detail.
> and Azure Policy's [Change history](../policy/how-to/determine-non-compliance.md#change-history-preview) > _visual diff_. It's designed to help customers manage large-scale environments. ## How Resource Graph complements Azure Resource Manager
governance Shared Query Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/shared-query-azure-cli.md
Title: "Quickstart: Create a shared query with Azure CLI"
-description: In this quickstart, you follow the steps to enable the Resource Graph extension for Azure CLI and create a shared query.
Previously updated : 08/17/2021
+ Title: "Quickstart: Create Resource Graph shared query using Azure CLI"
+description: In this quickstart, you create an Azure Resource Graph shared query using Azure CLI and the resource-graph extension.
Last updated : 06/26/2024
-# Quickstart: Create a Resource Graph shared query using Azure CLI
-The first step to using Azure Resource Graph with [Azure CLI](/cli/azure/) is to check that the
-extension is installed. This quickstart walks you through the process of adding the extension to
-your Azure CLI installation. You can use the extension with Azure CLI installed locally or through
-the [Azure Cloud Shell](https://shell.azure.com).
+# Quickstart: Create Resource Graph shared query using Azure CLI
-At the end of this process, you'll have added the extension to your Azure CLI installation of choice
-and create a Resource Graph shared query.
+This quickstart describes how to create an Azure Resource Graph shared query with Azure CLI and the Resource Graph extension. The [az graph shared-query](/cli/azure/graph/shared-query) commands are an _experimental_ feature of [az graph query](/cli/azure/graph#az-graph-query).
+
+A shared query can be run from Azure CLI with the _experimental_ feature's commands, or you can run the shared query from the Azure portal. A shared query is an Azure Resource Manager object that you can grant permission to or run in Azure Resource Graph Explorer. When you finish, you can remove the Resource Graph extension.
## Prerequisites
-If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account
-before you begin.
+- If you don't have an Azure account, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+- [Azure CLI](/cli/azure/install-azure-cli) must be version 2.22.0 or higher for the Resource Graph extension.
+- A Bash shell environment where you can run Azure CLI commands. For example, Git Bash in a [Visual Studio Code](https://code.visualstudio.com/) terminal session.
+
+## Connect to Azure
-<!-- [!INCLUDE [cloud-shell-try-it.md](../../../includes/cloud-shell-try-it.md)] -->
+From a Visual Studio Code terminal session, connect to Azure. If you have more than one subscription, run the commands to set context to your subscription. Replace `<subscriptionID>` with your Azure subscription ID.
-## Add the Resource Graph extension
+```azurecli
+az login
+
+# Run these commands if you have multiple subscriptions
+az account list --output table
+az account set --subscription <subscriptionID>
+```
-To enable Azure CLI to work with Azure Resource Graph, the extension must be added. This extension
-works wherever Azure CLI can be used, including [bash on Windows 10](/windows/wsl/install-win10),
-[Cloud Shell](https://shell.azure.com) (both standalone and inside the portal), the [Azure CLI
-Docker image](https://hub.docker.com/_/microsoft-azure-cli), or locally installed.
+## Install the extension
-1. Check that the latest Azure CLI is installed (at least **2.8.0**). If it isn't yet installed,
- follow [these instructions](/cli/azure/install-azure-cli-windows).
+To enable Azure CLI to query resources using Azure Resource Graph, the Resource Graph extension must be installed. The first time you run a query with `az graph` a prompt is displayed to install the extension. Otherwise, use the following steps to do a manual installation.
-1. In your Azure CLI environment of choice, use
- [az extension add](/cli/azure/extension#az-extension-add) to import the Resource Graph extension
- with the following command:
+1. List the available extensions and versions:
```azurecli
- # Add the Resource Graph extension to the Azure CLI environment
- az extension add --name resource-graph
+ az extension list-available --output table
```
-1. Validate that the extension has been installed and is the expected version (at least **1.1.0**)
- with [az extension list](/cli/azure/extension#az-extension-list):
+1. Install the extension:
```azurecli
- # Check the extension list (note that you may have other extensions installed)
- az extension list
-
- # Run help for graph query options
- az graph query -h
+ az extension add --name resource-graph
```
-## Create a Resource Graph shared query
+1. Verify the extension was installed:
-With the Azure CLI extension added to your environment of choice, it's time to a Resource Graph
-shared query. The shared query is an Azure Resource Manager object that you can grant permission to
-or run in Azure Resource Graph Explorer. The query summarizes the count of all resources grouped by
-_location_.
+ ```azurecli
+ az extension list --output table
+ ```
-1. Create a resource group with [az group create](/cli/azure/group#az-group-create) to store the
- Azure Resource Graph shared query. This resource group is named `resource-graph-queries` and the
- location is `westus2`.
+1. Display the extension's syntax:
```azurecli
- # Login first with az login if not using Cloud Shell
-
- # Create the resource group
- az group create --name 'resource-graph-queries' --location 'westus2'
+ az graph query --help
```
-1. Create the Azure Resource Graph shared query using the `graph` extension and
- [az graph shared-query create](/cli/azure/graph/shared-query#az-graph-shared-query-create)
- command:
+ For more information about Azure CLI extensions, go to [Use and manage extensions with the Azure CLI](/cli/azure/azure-cli-extensions-overview).
+
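If the extension is already installed, you can update it to the latest version with the core Azure CLI extension commands:

```azurecli
az extension update --name resource-graph
```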
+## Create a shared query
+
+Create a resource group and a shared query that summarizes the count of all resources grouped by location.
+
+1. Create a resource group to store the Azure Resource Graph shared query.
```azurecli
- # Create the Azure Resource Graph shared query
- az graph shared-query create --name 'Summarize resources by location' \
- --description 'This shared query summarizes resources by location for a pinnable map graphic.' \
- --graph-query 'Resources | summarize count() by location' \
- --resource-group 'resource-graph-queries'
+ az group create --name "demoSharedQuery" --location westus2
```
-1. List the shared queries in the new resource group. The
- [az graph shared-query list](/cli/azure/graph/shared-query#az-graph-shared-query-list)
- command returns an array of values.
+1. Create the shared query.
```azurecli
- # List all the Azure Resource Graph shared queries in a resource group
- az graph shared-query list --resource-group 'resource-graph-queries'
+ az graph shared-query create --name "Summarize resources by location" \
+ --description "This shared query summarizes resources by location for a pinnable map graphic." \
+ --graph-query "Resources | summarize count() by location" \
+ --resource-group demoSharedQuery
```
-1. To get just a single shared query result, use the
- [az graph shared-query show](/cli/azure/graph/shared-query#az-graph-shared-query-show)
- command.
+1. List all shared queries in the resource group.
```azurecli
- # Show a specific Azure Resource Graph shared query
- az graph shared-query show --resource-group 'resource-graph-queries' \
- --name 'Summarize resources by location'
+ az graph shared-query list --resource-group demoSharedQuery
```
-1. Run the shared query in Azure CLI with the `{{shared-query-uri}}` syntax in an
- [az graph query](/cli/azure/graph#az-graph-query) command.
- First, copy the `id` field from the result of the previous `show` command. Replace
- `shared-query-uri` text in the example with the value from the `id` field, but leave the
- surrounding `{{` and `}}` characters.
+1. Limit the results to a specific shared query.
```azurecli
- # Run a Azure Resource Graph shared query
- az graph query --graph-query "{{shared-query-uri}}"
+ az graph shared-query show --resource-group "demoSharedQuery" \
+ --name "Summarize resources by location"
```
- > [!NOTE]
- > The `{{shared-query-uri}}` syntax is a **Preview** feature.
+## Run the shared query
+
+You can use the Azure CLI experimental feature syntax or the Azure portal to run the shared query.
+
+### Use experimental feature to run shared query
+
+Run the shared query in Azure CLI with the `{{shared-query-uri}}` syntax in an `az graph query` command. Get the resource ID of your shared query and store it in a variable, which is then used to run the shared query.
-Another way to find Resource Graph shared queries is through the Azure portal. In the portal, use
-the search bar to search for "Resource Graph queries". Select the shared query. On the **Overview**
-page, the **Query** tab displays the saved query. The **Edit** button opens it in
-[Resource Graph Explorer](./first-query-portal.md).
+```azurecli
+sharedqueryid=$(az graph shared-query show --resource-group "demoSharedQuery" \
+ --name "Summarize resources by location" \
+ --query id \
+ --output tsv)
+
+az graph query --graph-query "{{$sharedqueryid}}"
+```
+
+You can use the `subscriptions` parameter to limit the results.
+
+```azurecli
+az graph query --graph-query "{{$sharedqueryid}}" --subscriptions 11111111-1111-1111-1111-111111111111
+```
+
+### Run the shared query from portal
+
+You can verify the shared query works using Azure Resource Graph Explorer. To change the scope, use the **Scope** menu on the left side of the page.
+
+1. Sign in to [Azure portal](https://portal.azure.com).
+1. Enter _resource graph_ into the search field at the top of the page.
+1. Select **Resource Graph Explorer**.
+1. Select **Open query**.
+1. Change **Type** to _Shared queries_.
+1. Select the query _Summarize resources by location_.
+1. Select **Run query** and view the output in the **Results** tab.
+
+You can also run the query from your resource group.
+
+1. In the Azure portal, go to the resource group _demoSharedQuery_.
+1. From the **Overview** tab, select the query _Summarize resources by location_.
+1. Select the **Results** tab.
## Clean up resources
-If you wish to remove the Resource Graph shared query, resource group, and extension from your Azure
-CLI environment, you can do so by using the following commands:
+To remove the resource group and shared query:
+
+```azurecli
+az group delete --name demoSharedQuery
+```
-- [az graph shared-query delete](/cli/azure/graph/shared-query#az-graph-shared-query-delete)-- [az group delete](/cli/azure/group#az-group-delete)-- [az extension remove](/cli/azure/extension#az-extension-remove)
+To remove the Resource Graph extension, run the following command:
```azurecli
-# Delete the Azure Resource Graph shared query
-az graph shared-query delete --resource-group 'resource-graph-queries' \
- --name 'Summarize resources by location'
+az extension remove --name resource-graph
+```
-# Remove the resource group
-# WARNING: This command deletes ALL resources you've added to this resource group without prompting for confirmation
-az group delete --resource-group 'resource-graph-queries' --yes
+To sign out of your Azure CLI session:
-# Remove the Azure Resource Graph extension from the Azure CLI environment
-az extension remove -n resource-graph
+```azurecli
+az logout
``` ## Next steps
-In this quickstart, you've added the Resource Graph extension to your Azure CLI environment and
+In this quickstart, you added the Resource Graph extension to your Azure CLI environment and
created a shared query. To learn more about the Resource Graph language, continue to the query language details page. > [!div class="nextstepaction"]
-> [Get more information about the query language](./concepts/query-language.md)
+> [Understanding the Azure Resource Graph query language](./concepts/query-language.md)
governance Shared Query Azure Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/shared-query-azure-powershell.md
This article describes how you can create an Azure Resource Graph shared query u
- If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account before you begin. > [!IMPORTANT] > While the **Az.ResourceGraph** PowerShell module is in preview, you must install it separately
governance Shared Query Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/shared-query-bicep.md
Title: 'Quickstart: Create a shared query with Bicep'
-description: In this quickstart, you use Bicep to create a Resource Graph shared query that counts virtual machines by OS.
-- Previously updated : 05/17/2022
+ Title: "Quickstart: Create Resource Graph shared query using Bicep"
+description: In this quickstart, you use Bicep to create an Azure Resource Graph shared query that counts virtual machines by OS.
Last updated : 06/26/2024
-# Quickstart: Create a shared query using Bicep
-[Azure Resource Graph](../../governance/resource-graph/overview.md) is an Azure service designed to extend Azure Resource Management by providing efficient and performant resource exploration with the ability to query at scale across a given set of subscriptions so you can effectively govern your environment. With Resource Graph queries, you can:
+# Quickstart: Create Resource Graph shared query using Bicep
-- Query resources with complex filtering, grouping, and sorting by resource properties.-- Explore resources iteratively based on governance requirements.-- Assess the impact of applying policies in a vast cloud environment.-- [Query changes made to resource properties](./how-to/get-resource-changes.md) (preview).
+In this quickstart, you use Bicep to create an Azure Resource Graph shared query. Resource Graph queries can be saved as a _private query_ or a _shared query_. A private query is saved to the individual's Azure portal profile and isn't visible to others. A shared query is a Resource Manager object that can be shared with others through permissions and role-based access. A shared query provides common and consistent execution of resource discovery.
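Because a shared query is a Resource Manager object, you can list deployed shared queries like any other resource. A sketch using the generic resource commands:

```azurecli
az resource list --resource-type "Microsoft.ResourceGraph/queries" --output table
```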
-Resource Graph queries can be saved as a _private query_ or a _shared query_. A private query is saved to the individual's Azure portal profile and isn't visible to others. A shared query is a Resource Manager object that can be shared with others through permissions and role-based access. A shared query provides common and consistent execution of resource discovery. This quickstart uses Bicep to create a shared query.
- ## Prerequisites
-If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account before you begin.
+- If you don't have an Azure account, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+- [Azure CLI](/cli/azure/install-azure-cli) or [PowerShell](/powershell/scripting/install/installing-powershell) and [Azure PowerShell](/powershell/azure/install-azure-powershell).
+- [Visual Studio Code](https://code.visualstudio.com/).
-## Review the Bicep file
+## Connect to Azure
-In this quickstart, you create a shared query called _Count VMs by OS_. To try this query in SDK or in portal with Resource Graph Explorer, see [Samples - Count virtual machines by OS type](./samples/starter.md#count-virtual-machines-by-os-type).
+From a Visual Studio Code terminal session, connect to Azure. If you have more than one subscription, run the commands to set context to your subscription. Replace `<subscriptionID>` with your Azure subscription ID.
-The Bicep file used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/resourcegraph-sharedquery-countos/).
+# [Azure CLI](#tab/azure-cli)
+```azurecli
+az login
-The resource defined in the Bicep file is:
+# Run these commands if you have multiple subscriptions
+az account list --output table
+az account set --subscription <subscriptionID>
+```
-- [Microsoft.ResourceGraph/queries](/azure/templates/microsoft.resourcegraph/queries)
+# [Azure PowerShell](#tab/azure-powershell)
-## Deploy the Bicep file
+From a Visual Studio Code terminal session, connect to Azure. If you have more than one subscription, run the commands to set context to your subscription. Replace `<subscriptionID>` with your Azure subscription ID.
-1. Save the Bicep file as **main.bicep** to your local computer.
+```azurepowershell
+Connect-AzAccount
- > [!NOTE]
- > The Bicep file isn't required to be named **main.bicep**. If you save the file with a different name, you must change the name of
- > the template file in the deployment step below.
+# Run these commands if you have multiple subscriptions
+Get-AzSubscription
+Set-AzContext -Subscription <subscriptionID>
+```
-1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
+
- # [CLI](#tab/CLI)
+## Review the Bicep file
- ```azurecli
- az group create --name exampleRG --location eastus
- az deployment group create --resource-group exampleRG --template-file main.bicep
- ```
+In this quickstart, you create a shared query called _Count VMs by OS_. To try this query in an SDK or in the portal with Resource Graph Explorer, see [Samples - Count virtual machines by OS type](/previous-versions/azure/governance/resource-graph/samples/starter#count-virtual-machines-by-os-type).
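For reference, a sketch of running an equivalent count-by-OS query directly with the Resource Graph extension; the KQL is an assumption based on the linked starter sample:

```azurecli
az graph query -q "Resources | where type =~ 'Microsoft.Compute/virtualMachines' | summarize count() by tostring(properties.storageProfile.osDisk.osType)"
```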
- # [PowerShell](#tab/PowerShell)
+The Bicep file used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/resourcegraph-sharedquery-countos/).
+
+1. Open Visual Studio Code and create a new file.
+1. Copy and paste the Bicep file into your new file.
+1. Save the file as _main.bicep_ on your local computer.
+
- ```azurepowershell
- New-AzResourceGroup -Name exampleRG -Location eastus
- New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep
- ```
+The resource defined in the Bicep file is: [Microsoft.ResourceGraph/queries](/azure/templates/microsoft.resourcegraph/queries). To learn how to create Bicep files, go to [Quickstart: Create Bicep files with Visual Studio Code](../../azure-resource-manager/bicep/quickstart-create-bicep-use-visual-studio-code.md).
-
+## Deploy the Bicep file
+
+Create a resource group and deploy the Bicep file with Azure CLI or Azure PowerShell. Make sure you're in the directory where you saved the Bicep file. Otherwise, you need to specify the path to the file.
- When the deployment finishes, you should see a message indicating the deployment succeeded.
+# [Azure CLI](#tab/azure-cli)
-Some other resources:
+```azurecli
+az group create --name exampleRG --location eastus
+az deployment group create --resource-group exampleRG --template-file main.bicep
+```
-- To see the template reference, go to [Azure template reference](/azure/templates/microsoft.resourcegraph/allversions).-- To learn how to create Bicep files, see [Quickstart: Create Bicep files with Visual Studio Code](../../azure-resource-manager/bicep/quickstart-create-bicep-use-visual-studio-code.md).
+# [Azure PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+New-AzResourceGroup -Name exampleRG -Location eastus
+New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile main.bicep
+```
+++
+The deployment outputs messages to your shell. When the deployment is finished, your shell returns to a command prompt.
## Review deployed resources
-Use the Azure portal, Azure CLI, or Azure PowerShell to list the deployed resources in the resource group.
+Use Azure CLI or Azure PowerShell to list the deployed resources in the resource group.
-# [CLI](#tab/CLI)
+# [Azure CLI](#tab/azure-cli)
-```azurecli-interactive
+```azurecli
az resource list --resource-group exampleRG ```
-# [PowerShell](#tab/PowerShell)
+# [Azure PowerShell](#tab/azure-powershell)
-```azurepowershell-interactive
-Get-AzResource -ResourceGroupName exampleRG
+```azurepowershell
+Get-AzResource -ResourceGroupName exampleRG
```
+The output shows the shared query's name, resource group name, and resource ID.
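To display only those fields, you can filter the output with a JMESPath query, as in this sketch (adjust the fields as needed):

```azurecli
az resource list --resource-group exampleRG \
  --query "[].{name:name, resourceGroup:resourceGroup, id:id}" \
  --output table
```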
+
+## Run the shared query
+
+You can verify the shared query works using Azure Resource Graph Explorer. To change the scope, use the **Scope** menu on the left side of the page.
+
+1. Sign in to [Azure portal](https://portal.azure.com).
+1. Enter _resource graph_ into the search field at the top of the page.
+1. Select **Resource Graph Explorer**.
+1. Select **Open query**.
+1. Change **Type** to _Shared queries_.
+1. Select the query _Count VMs by OS_.
+1. Select **Run query** and view the output in the **Results** tab.
+
+You can also run the query from your resource group.
+
+1. In the Azure portal, go to the resource group _exampleRG_.
+1. From the **Overview** tab, select the query _Count VMs by OS_.
+1. Select the **Results** tab.
+ ## Clean up resources
-When you no longer need the resource that you created, delete the resource group using Azure CLI or Azure PowerShell.
+When you no longer need the resource that you created, delete the resource group using Azure CLI or Azure PowerShell. If you signed in to the Azure portal to run the query, be sure to sign out.
-# [CLI](#tab/CLI)
+# [Azure CLI](#tab/azure-cli)
-```azurecli-interactive
+```azurecli
az group delete --name exampleRG ```
-# [PowerShell](#tab/PowerShell)
+To sign out of your Azure CLI session:
+
+```azurecli
+az logout
+```
+
+# [Azure PowerShell](#tab/azure-powershell)
-```azurepowershell-interactive
+```azurepowershell
Remove-AzResourceGroup -Name exampleRG ```
+To sign out of your Azure PowerShell session:
+
+```azurepowershell
+Disconnect-AzAccount
+```
+ ## Next steps
In this quickstart, you created a Resource Graph shared query using Bicep.
To learn more about shared queries, continue to the tutorial for: > [!div class="nextstepaction"]
-> [Manage queries in Azure portal](./tutorials/create-share-query.md)
+> [Tutorial: Create and share an Azure Resource Graph query in the Azure portal](./tutorials/create-share-query.md)
governance Shared Query Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/shared-query-template.md
Title: 'Quickstart: Create a shared query with ARM template'
+ Title: 'Quickstart: Create Resource Graph shared query using ARM template'
description: In this quickstart, you use an Azure Resource Manager template (ARM template) to create a Resource Graph shared query that counts virtual machines by OS. Previously updated : 06/21/2024 Last updated : 06/26/2024
-# Quickstart: Create a shared query by using an ARM template
+# Quickstart: Create Resource Graph shared query using ARM template
-Resource Graph queries can be saved as a _private query_ or a _shared query_. A private query is saved to the individuals portal profile and isn't visible to others. A shared query is a Resource Manager object that can be shared with others through permissions and role-based access. A shared query provides common and consistent execution of resource discovery. This quickstart uses an Azure Resource Manager template (ARM template) to create a shared query.
+In this quickstart, you use an Azure Resource Manager template (ARM template) to create a Resource Graph shared query. Resource Graph queries can be saved as a _private query_ or a _shared query_. A private query is saved to the individual's portal profile and isn't visible to others. A shared query is a Resource Manager object that can be shared with others through permissions and role-based access. A shared query provides common and consistent execution of resource discovery.
If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template opens in the Azure portal.
hdinsight Apache Hadoop Linux Tutorial Get Started Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-linux-tutorial-get-started-bicep.md
Last updated 12/05/2023
In this quickstart, you use Bicep to create an [Apache Hadoop](./apache-hadoop-introduction.md) cluster in Azure HDInsight. Hadoop was the original open-source framework for distributed processing and analysis of big data sets on clusters. The Hadoop ecosystem includes related software and utilities, including Apache Hive, Apache HBase, Spark, Kafka, and many others. Currently HDInsight comes with [seven different cluster types](../hdinsight-overview.md#cluster-types-in-hdinsight). Each cluster type supports a different set of components. All cluster types support Hive. For a list of supported components in HDInsight, see [What's new in the Hadoop cluster versions provided by HDInsight?](../hdinsight-component-versioning.md)
hdinsight Apache Hadoop Linux Tutorial Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-linux-tutorial-get-started.md
Last updated 09/15/2023
In this quickstart, you use an Azure Resource Manager template (ARM template) to create an [Apache Hadoop](./apache-hadoop-introduction.md) cluster in Azure HDInsight. Hadoop was the original open-source framework for distributed processing and analysis of big data sets on clusters. The Hadoop ecosystem includes related software and utilities, including Apache Hive, Apache HBase, Spark, Kafka, and many others. Currently HDInsight comes with [seven different cluster types](../hdinsight-overview.md#cluster-types-in-hdinsight). Each cluster type supports a different set of components. All cluster types support Hive. For a list of supported components in HDInsight, see [What's new in the Hadoop cluster versions provided by HDInsight?](../hdinsight-component-versioning.md)
hdinsight Quickstart Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/quickstart-bicep.md
Last updated 06/15/2024
In this quickstart, you use Bicep to create an [Apache HBase](./apache-hbase-overview.md) cluster in Azure HDInsight. HBase is an open-source, NoSQL database that is built on Apache Hadoop and modeled after [Google BigTable](https://cloud.google.com/bigtable/). ## Prerequisites
hdinsight Quickstart Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/quickstart-resource-manager-template.md
Last updated 01/04/2024
In this quickstart, you use an Azure Resource Manager template (ARM template) to create an [Apache HBase](./apache-hbase-overview.md) cluster in Azure HDInsight. HBase is an open-source, NoSQL database that is built on Apache Hadoop and modeled after [Google BigTable](https://cloud.google.com/bigtable/). If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
hdinsight Hdinsight Administer Use Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-administer-use-powershell.md
If you don't have an Azure subscription, create a [free account](https://azure.m
## Prerequisites The PowerShell [Az Module](/powershell/azure/) installed.
hdinsight Hdinsight Hadoop Create Linux Clusters Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-create-linux-clusters-azure-cli.md
The steps in this document walk through creating an HDInsight 4.0 cluster using t
[!INCLUDE [delete-cluster-warning](includes/hdinsight-delete-cluster-warning.md)] [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
hdinsight Hdinsight Hadoop Create Linux Clusters Azure Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-create-linux-clusters-azure-powershell.md
If you don't have an Azure subscription, create a [free account](https://azure.m
## Prerequisites [Azure PowerShell](/powershell/azure/install-azure-powershell) Az module.
hdinsight Hdinsight Hadoop Oms Log Analytics Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-oms-log-analytics-tutorial.md
Learn how to enable Azure Monitor logs to monitor Hadoop cluster operations in H
[Azure Monitor logs](../azure-monitor/logs/log-query-overview.md) is an Azure Monitor service that monitors your cloud and on-premises environments. Monitoring helps maintain their availability and performance. It collects data generated by resources in your cloud and on-premises environments and from other monitoring tools. The data is used to provide analysis across multiple sources. If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
hdinsight Hdinsight Hadoop Oms Log Analytics Use Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-oms-log-analytics-use-queries.md
Learn some basic scenarios on how to use Azure Monitor logs to monitor Azure HDI
* [Analyze HDInsight cluster metrics](#analyze-hdinsight-cluster-metrics) * [Create event alerts](#create-alerts-for-tracking-events) ## Prerequisites
hdinsight Hdinsight Hadoop Use Data Lake Storage Gen1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-use-data-lake-storage-gen1.md
In this article, you learn how Data Lake Storage Gen1 works with HDInsight clust
> [!NOTE] > Data Lake Storage Gen1 is always accessed through a secure channel, so there is no `adls` filesystem scheme name. You always use `adl`. ## Availability for HDInsight clusters
hdinsight Hdinsight Hadoop Use Data Lake Storage Gen2 Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-use-data-lake-storage-gen2-azure-cli.md
To create an HDInsight cluster that uses Data Lake Storage Gen2 for storage, fol
- Use the embedded Azure Cloud Shell via the "Try It" button, located in the top-right corner of each code block. - [Install the latest version of the Azure CLI](/cli/azure/install-azure-cli) (2.0.13 or later) if you prefer to use a local CLI console. Sign in to Azure by using `az login` with an account associated with the Azure subscription under which you want to deploy the user-assigned managed identity. [!INCLUDE [delete-cluster-warning](includes/hdinsight-delete-cluster-warning.md)]
hdinsight Hdinsight Sdk Dotnet Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-sdk-dotnet-samples.md
This article provides:
You can [activate Visual Studio subscriber benefits](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio): Your Visual Studio subscription gives you credits every month that you can use for paid Azure services. ## Prerequisite
hdinsight Hdinsight Sdk Java Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-sdk-java-samples.md
This article provides:
* Links to samples for cluster creation tasks. * Links to reference content for other management tasks. ## Prerequisites
hdinsight Hdinsight Sdk Python Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-sdk-python-samples.md
This article provides:
* Links to samples for cluster creation tasks. * Links to reference content for other management tasks. ## Prerequisites
hdinsight Quickstart Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/quickstart-bicep.md
Last updated 10/16/2023
In this quickstart, you use Bicep to create an [Interactive Query](./apache-interactive-query-get-started.md) cluster in Azure HDInsight. Interactive Query (also called Apache Hive LLAP, or [Low Latency Analytical Processing](https://cwiki.apache.org/confluence/display/Hive/LLAP)) is an Azure HDInsight [cluster type](../hdinsight-hadoop-provision-linux-clusters.md#cluster-type). ## Prerequisites
hdinsight Quickstart Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/quickstart-resource-manager-template.md
Last updated 01/04/2024
In this quickstart, you use an Azure Resource Manager template (ARM template) to create an [Interactive Query](./apache-interactive-query-get-started.md) cluster in Azure HDInsight. Interactive Query (also called Apache Hive LLAP, or [Low Latency Analytical Processing](https://cwiki.apache.org/confluence/display/Hive/LLAP)) is an Azure HDInsight [cluster type](../hdinsight-hadoop-provision-linux-clusters.md#cluster-type). If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
hdinsight Apache Kafka Connect Vpn Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/apache-kafka-connect-vpn-gateway.md
Learn how to directly connect to Apache Kafka on HDInsight through an Azure Virt
* From resources in an on-premises network. This connection is established by using a VPN device (software or hardware) on your local network. * From a development environment using a VPN software client. ## Architecture and planning
hdinsight Apache Kafka Log Analytics Operations Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/apache-kafka-log-analytics-operations-management.md
Last updated 06/15/2024
Learn how to use Azure Monitor logs to analyze logs generated by Apache Kafka on HDInsight. ## Logs location
hdinsight Apache Kafka Quickstart Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/apache-kafka-quickstart-bicep.md
Last updated 09/15/2023
In this quickstart, you use Bicep to create an [Apache Kafka](./apache-kafka-introduction.md) cluster in Azure HDInsight. Kafka is an open-source, distributed streaming platform. It's often used as a message broker, as it provides functionality similar to a publish-subscribe message queue. The Kafka API can only be accessed by resources inside the same virtual network. In this quickstart, you access the cluster directly using SSH. To connect other services, networks, or virtual machines to Kafka, you must first create a virtual network and then create the resources within the network. For more information, see the [Connect to Apache Kafka using a virtual network](apache-kafka-connect-vpn-gateway.md) document.
hdinsight Apache Kafka Quickstart Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/apache-kafka-quickstart-powershell.md
If you don't have an Azure subscription, create a [free account](https://azure.m
## Prerequisites * The PowerShell [Az Module](/powershell/azure/) installed.
hdinsight Apache Kafka Quickstart Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/apache-kafka-quickstart-resource-manager-template.md
Last updated 09/15/2023
In this quickstart, you use an Azure Resource Manager template (ARM template) to create an [Apache Kafka](./apache-kafka-introduction.md) cluster in Azure HDInsight. Kafka is an open-source, distributed streaming platform. It's often used as a message broker, as it provides functionality similar to a publish-subscribe message queue. The Kafka API can only be accessed by resources inside the same virtual network. In this quickstart, you access the cluster directly using SSH. To connect other services, networks, or virtual machines to Kafka, you must first create a virtual network and then create the resources within the network. For more information, see the [Connect to Apache Kafka using a virtual network](apache-kafka-connect-vpn-gateway.md) document.
hdinsight Apache Spark Create Cluster Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-create-cluster-cli.md
In this quickstart, you learn how to create an Apache Spark cluster in Azure HDI
If you're using multiple clusters together, you can create a virtual network, and if you're using a Spark cluster you can use the Hive Warehouse Connector. For more information, see [Plan a virtual network for Azure HDInsight](../hdinsight-plan-virtual-network-deployment.md) and [Integrate Apache Spark and Apache Hive with the Hive Warehouse Connector](../interactive-query/apache-hive-warehouse-connector.md). [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
hdinsight Apache Spark Jupyter Spark Sql Use Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-jupyter-spark-sql-use-powershell.md
Creating an HDInsight cluster includes creating the following Azure objects and
You use a PowerShell script to create the resources. When you run the PowerShell script, you are prompted to enter the following values:
hdinsight Apache Spark Jupyter Spark Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-jupyter-spark-sql.md
In this quickstart, you use an Azure Resource Manager template (ARM template) to
If you're using multiple clusters together, you'll want to create a virtual network, and if you're using a Spark cluster you'll also want to use the Hive Warehouse Connector. For more information, see [Plan a virtual network for Azure HDInsight](../hdinsight-plan-virtual-network-deployment.md) and [Integrate Apache Spark and Apache Hive with the Hive Warehouse Connector](../interactive-query/apache-hive-warehouse-connector.md). If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
hdinsight Apache Spark Jupyter Spark Use Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-jupyter-spark-use-bicep.md
In this quickstart, you use Bicep to create an [Apache Spark](./apache-spark-ove
If you're using multiple clusters together, you'll want to create a virtual network, and if you're using a Spark cluster you'll also want to use the Hive Warehouse Connector. For more information, see [Plan a virtual network for Azure HDInsight](../hdinsight-plan-virtual-network-deployment.md) and [Integrate Apache Spark and Apache Hive with the Hive Warehouse Connector](../interactive-query/apache-hive-warehouse-connector.md). ## Prerequisites
healthcare-apis Azure Api Fhir Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/azure-api-fhir-resource-manager-template.md
Last updated 09/27/2023
In this quickstart, you'll learn how to use an Azure Resource Manager template (ARM template) to deploy Azure API for Fast Healthcare Interoperability Resources (FHIR®). You can deploy Azure API for FHIR through the Azure portal, PowerShell, or CLI. If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal once you sign in.
healthcare-apis Fhir Paas Cli Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/fhir-paas-cli-quickstart.md
In this quickstart, you'll learn how to deploy Azure API for FHIR in Azure using the Azure CLI. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
healthcare-apis Fhir Paas Powershell Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/fhir-paas-powershell-quickstart.md
In this quickstart, you'll learn how to deploy Azure API for FHIR using PowerShe
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. ## Register the Azure API for FHIR resource provider
healthcare-apis Configure Azure Rbac Using Scripts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/configure-azure-rbac-using-scripts.md
Title: Grant permissions to users and client applications using CLI and REST API - Azure Health Data Services
-description: This article describes how to grant permissions to users and client applications using CLI and REST API.
+ Title: Grant permissions to users and applications by using CLI and REST API in Azure Health Data Services
+description: Learn to configure Azure RBAC roles using CLI and REST API for secure access to Azure Health Data Services. See how to make role assignments with detailed scripts and examples.
Last updated 06/06/2022
-# Configure Azure RBAC role using Azure CLI and REST API
+# Configure Azure RBAC roles by using Azure CLI and REST API
-In this article, you'll learn how to grant permissions to client applications (and users) to access Azure Health Data Services using Azure Command-Line Interface (CLI) and REST API. This step is referred to as "role assignment" or Azure
-[role-based access control (Azure RBAC role)](./../role-based-access-control/role-assignments-cli.md). To further your understanding about the application roles defined for Azure Health Data Services, see [Configure Azure RBAC role](configure-azure-rbac.md).
+In this article, you learn how to grant permissions to client applications and users to access Azure Health Data Services by using the Azure Command-Line Interface (CLI) and REST API. This step is referred to as role assignment or Azure
+[role-based access control (RBAC)](./../role-based-access-control/role-assignments-cli.md). For more information, see [Configure Azure RBAC role](configure-azure-rbac.md).
-You can view and download the [CLI scripts](https://github.com/microsoft/healthcare-apis-samples/blob/main/src/scripts/role-assignment-using-cli.http) and [REST API scripts](https://github.com/microsoft/healthcare-apis-samples/blob/main/src/scripts/role-assignment-using-rest-api.http) from [Azure Health Data Services samples](https://github.com/microsoft/healthcare-apis-samples).
+View and download the [CLI scripts](https://github.com/microsoft/healthcare-apis-samples/blob/main/src/scripts/role-assignment-using-cli.http) and [REST API scripts](https://github.com/microsoft/healthcare-apis-samples/blob/main/src/scripts/role-assignment-using-rest-api.http) from [Azure Health Data Services samples](https://github.com/microsoft/healthcare-apis-samples).
-> [!Note]
+> [!Note]
> To perform the role assignment operation, the user (or the client application) must be granted RBAC permissions. Contact your Azure subscription administrators for assistance. ## Role assignments with CLI
-You can list application roles using role names or GUID IDs. Include the role name in double quotes when there are spaces in it. For more information, see
+You can list application roles by using role names or GUID IDs. Include the role name in double quotes when there are spaces in it. For more information, see
[List Azure role definitions](./../role-based-access-control/role-definitions-list.yml#azure-cli). ```
az role definition list --name 58a3b984-7adf-4c20-983a-32417c86fbc8
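# You can also list a role definition by name; quote the name when it contains spaces
az role definition list --name "FHIR Data Contributor"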
### Azure Health Data Services role assignment
-The role assignments for Azure Health Data Services require the following values.
+The role assignments for Azure Health Data Services require these values:
- Application role name or GUID ID. - Service principal ID for the user or client application.
spid=$(az ad sp show --id $clientid --query objectId --output tsv)
#assign the specified role az role assignment create --assignee-object-id $spid --assignee-principal-type ServicePrincipal --role "$fhirrole" --scope $fhirrolescope ```+ ## Role assignments with REST API Alternatively, you can send a Put request to the role assignment REST API directly. For more information, see [Assign Azure roles using the REST API](./../role-based-access-control/role-assignments-rest.md). >[!Note]
->The REST API scripts in this article are based on the [REST Client](./fhir/using-rest-client.md) extension. You'll need to revise the variables if you are in a different environment.
+>The REST API scripts in this article are based on the [REST Client](./fhir/using-rest-client.md) extension. You need to revise the variables if you are in a different environment.
-The API requires the following values:
+The API requires these values:
- Assignment ID, which is a GUID value that uniquely identifies the transaction. You can use tools such as Visual Studio or a Visual Studio Code extension to get a GUID value. Also, you can use online tools such as [UUID Generator](https://www.uuidgenerator.net/api/guid) to get it.-- API version that is supported by the API.
+- API version supported by the API.
- Scope for Azure Health Data Services to which you grant access permissions. It includes subscription ID, resource group name, and the FHIR or DICOM service instance name.-- Role definition ID for roles such as "FHIR Data Contributor" or "DICOM Data Owner". Use `az role definition list --name "<role name>"` to list the role definition IDs.
+- Role definition ID for roles such as **FHIR Data Contributor** or **DICOM Data Owner**. Use `az role definition list --name "<role name>"` to list the role definition IDs.
- Service principal ID for the user or the client application. - Microsoft Entra access token to the `https://management.azure.com/`, not Azure Health Data Services. You can get the access token by using an existing tool or the Azure CLI command, `az account get-access-token --resource "https://management.azure.com/"` - For Azure Health Data Services, the scope includes workspace name and FHIR/DICOM service instance name.
Accept: application/json
} ```
-For Azure API for FHIR, the scope is defined slightly differently as it supports the FHIR service only, and no workspace name is required.
+For Azure API for FHIR, the scope is defined differently as it supports the FHIR service only, and no workspace name is required.
```rest ### Create a role assignment - Azure API for FHIR
Accept: application/json
## List service instances of Azure Health Data Services
-Optionally, you can get a list of Azure Health Data Services services, or Azure API for FHIR. Note that the API version is based on Azure Health Data Services, not the version for the role assignment REST API.
+Optionally, you can get a list of Azure Health Data Services services, or Azure API for FHIR. The API version is based on Azure Health Data Services, not the version for the role assignment REST API.
For Azure Health Data Services, specify the subscription ID, resource group name, workspace name, FHIR or DICOM services, and the API version.
Accept: application/json
```
-Now that you've granted proper permissions to the client application, you can access Azure Health Data Services in your applications.
+After you grant proper permissions to the client application, you can access Azure Health Data Services in your applications.
## Next steps
-In this article, you learned how to grant permissions to client applications using Azure CLI and REST API. For information on how to access Azure Health Data Services using the REST Client Extension in Visual Studio Code, see
-
->[!div class="nextstepaction"]
->[Access using REST Client](./fhir/using-rest-client.md)
+[Access using REST Client](./fhir/using-rest-client.md)
-FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Configure Azure Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/configure-azure-rbac.md
Title: Configure Azure RBAC role for FHIR service - Azure Health Data Services
-description: This article describes how to configure Azure RBAC role for FHIR.
+ Title: Configure Azure RBAC role for the FHIR service in Azure Health Data Services
+description: Learn how to configure Azure RBAC for the FHIR service in Azure Health Data Services. Assign roles, manage access, and safeguard your data plane.
Last updated 06/06/2022
+# Configure Azure RBAC roles for Azure Health Data Services
-# Configure Azure RBAC role for Azure Health Data Services
+In this article, you learn how to use [Azure role-based access control (RBAC)](../role-based-access-control/index.yml) to assign access to the Azure Health Data Services data plane. Using Azure RBAC roles is the preferred method for assigning data plane access when data plane users are managed in the Microsoft Entra tenant associated with your Azure subscription.
-In this article, you'll learn how to use [Azure role-based access control (Azure RBAC role)](../role-based-access-control/index.yml) to assign access to the Azure Health Data Services data plane. Azure RBAC role is the preferred methods for assigning data plane access when data plane users are managed in the Microsoft Entra tenant associated with your Azure subscription.
-
-You can complete role assignments through the Azure portal. Note that the FHIR service and DICOM service have defined different application roles. Add or remove one or more roles to manage user access controls.
+You can complete role assignments in the Azure portal. The FHIR&reg; service and DICOM&reg; service define application roles differently. Add or remove one or more roles to manage user access controls.
## Assign roles for the FHIR service
-To grant users, service principals, or groups access to the FHIR data plane, select the FHIR service from the Azure portal. Select **Access control (IAM)**, and then select the **Role assignments** tab. Select **+Add**, and then select **Add role assignment**.
-
-If the role assignment option is grayed out, ask your Azure subscription administrator to grant you with the permissions to the subscription or the resource group, for example, “User Access Administrator”. For more information about the Azure built-in roles, see [Azure built-in roles](../role-based-access-control/built-in-roles.md).
+To grant users, service principals, or groups access to the FHIR data plane, go to the FHIR service in the Azure portal. Select **Access control (IAM)**, and then select the **Role assignments** tab. Select **+Add**, and then select **Add role assignment**.
+
+If the role assignment option is grayed out, ask your Azure subscription administrator to grant you permissions to the subscription or the resource group, for example, **User Access Administrator**. For more information, see [Azure built-in roles](../role-based-access-control/built-in-roles.md).
-[ ![Access control role assignment.](fhir/media/rbac/role-assignment.png) ](fhir/media/rbac/role-assignment.png#lightbox)
-In the Role selection, search for one of the built-in roles for the FHIR data plane, for example, “FHIR Data Contributor”. You can choose other roles below.
+In the **Role** selection, search for one of the built-in roles for the FHIR data plane. You can choose from these roles:
* **FHIR Data Reader**: Can read (and search) FHIR data. * **FHIR Data Writer**: Can read, write, and soft delete FHIR data. * **FHIR Data Exporter**: Can read and export ($export operator) data. * **FHIR Data Contributor**: Can perform all data plane operations. * **FHIR Data Converter**: Can use the converter to perform data conversion.
-* **FHIR SMART User**: Role allows to read and write FHIR data according to the SMART IG V1.0.0 specifications.
+* **FHIR SMART User**: Can read and write FHIR data according to the SMART IG V1.0.0 specifications.
-In the **Select** section, type the client application registration name. If the name is found, the application name is listed. Select the application name, and then select **Save**.
+In the **Select** section, type the client application registration name. If the name is found, the application name is listed. Select the application name, and then select **Save**.
If the client application isn’t found, check your application registration to ensure that the name is correct. Also ensure that the client application is created in the same tenant where the FHIR service in Azure Health Data Services (hereafter called the FHIR service) is deployed. -
-[ ![Select role assignment.](fhir/media/rbac/select-role-assignment.png) ](fhir/media/rbac/select-role-assignment.png#lightbox)
You can verify the role assignment by selecting the **Role assignments** tab from the **Access control (IAM)** menu option.
-
+ ## Assign roles for the DICOM service To grant users, service principals, or groups access to the DICOM data plane, select the **Access control (IAM)** blade. Select the **Role assignments** tab, and select **+ Add**.
-[ ![dicom access control.](dicom/media/dicom-access-control.png) ](dicom/media/dicom-access-control.png#lightbox)
In the **Role** selection, search for one of the built-in roles for the DICOM data plane:
-[ ![Add RBAC role assignment.](dicom/media/rbac-add-role-assignment.png) ](dicom/media/rbac-add-role-assignment.png#lightbox)
You can choose between: * DICOM Data Owner: Full access to DICOM data. * DICOM Data Reader: Read and search DICOM data.
-If these roles aren’t sufficient for your need, you can use PowerShell to create custom roles. For information about creating custom roles, see [Create a custom role using Azure PowerShell](../role-based-access-control/custom-roles-powershell.md).
+If these roles arenΓÇÖt sufficient, you can use PowerShell to create custom roles. For information about creating custom roles, see [Create a custom role by using Azure PowerShell](../role-based-access-control/custom-roles-powershell.md).
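The Azure CLI can also create a custom role from a JSON definition, if you prefer it over PowerShell. This is a sketch only; the file name is hypothetical, and the data action is a placeholder you'd replace with entries from the Microsoft.HealthcareApis provider operations list:

```bash
# dicom-custom-role.json is a hypothetical file name used for illustration.
cat > dicom-custom-role.json <<'EOF'
{
  "Name": "DICOM Custom Reader",
  "Description": "Example custom role for DICOM data plane access.",
  "Actions": [],
  "DataActions": [ "<data-action-from-provider-operations-list>" ],
  "AssignableScopes": [ "/subscriptions/<subscription-id>" ]
}
EOF

# Create the custom role definition from the JSON file.
az role definition create --role-definition @dicom-custom-role.json
```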
In the **Select** box, search for a user, service principal, or group that you want to assign the role to.
## Next steps
-In this article, you've learned how to assign Azure roles for the FHIR service and DICOM service. To learn how to access the Azure Health Data Services using Postman, see
+[Access by using Postman](./fhir/use-postman.md)
+
+[Access by using the REST Client](./fhir/using-rest-client.md)
-- [Access using Postman](./fhir/use-postman.md)
-- [Access using the REST Client](./fhir/using-rest-client.md)
-- [Access using cURL](./fhir/using-curl.md)
+[Access by using cURL](./fhir/using-curl.md)
-FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Get Started With Fhir https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/get-started-with-fhir.md
Title: Get started with FHIR service - Azure Health Data Services
-description: This document describes how to get started with FHIR service in Azure Health Data Services.
+ Title: Get started with the FHIR service in Azure Health Data Services
+description: Learn how to set up the FHIR service in Azure Health Data Services with steps to create workspaces, register apps, and manage data.
-# Get started with FHIR service
+# Get started with the FHIR service
-This article outlines the basic steps to get started with the FHIR service in [Azure Health Data Services](../healthcare-apis-overview.md).
+This article outlines the basic steps to get started with the FHIR&reg; service in [Azure Health Data Services](../healthcare-apis-overview.md).
As a prerequisite, you need an Azure subscription and permissions to create Azure resource groups and deploy Azure resources. You can follow all the steps, or skip some if you have an existing environment. Also, you can combine all the steps and complete them in PowerShell, Azure CLI, and REST API scripts.
You can delete a client application. Before you delete a client application, ens
### Grant access permissions
-You can grant access permissions or assign roles from the [Azure portal](../configure-azure-rbac.md), or using PowerShell and Azure CLI scripts.
+You can grant access permissions or assign roles in the [Azure portal](../configure-azure-rbac.md), or by using PowerShell and Azure CLI scripts.
### Perform create, read, update, and delete (CRUD) transactions
-You can perform Create, Read (search), Update, and Delete (CRUD) transactions against the FHIR service in your applications or by using tools such as Postman, REST Client, and cURL. Because the FHIR service is secured by default, you must obtain an access token and include it in your transaction request.
+You can perform Create, Read (search), Update, and Delete (CRUD) transactions against the FHIR service in your applications or by using tools such as Postman, REST Client, and cURL. Because the FHIR service is secured by default, you need to obtain an access token and include it in your transaction request.
#### Get an access token
-You can obtain a Microsoft Entra access token using PowerShell, Azure CLI, REST CCI, or .NET SDK. For more information, see [Get access token](../get-access-token.md).
+You can obtain a Microsoft Entra access token by using PowerShell, the Azure CLI, the REST API, or the .NET SDK. For more information, see [Get an access token](../get-access-token.md).
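For example, the Azure CLI can request a token for the FHIR service endpoint. A minimal sketch; the service URL is a placeholder for your own endpoint:

```bash
# Request a Microsoft Entra access token scoped to the FHIR service.
token=$(az account get-access-token \
  --resource "https://<workspace>-<fhir-service>.fhir.azurehealthcareapis.com" \
  --query accessToken --output tsv)
```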
#### Access using existing tools
You can obtain a Microsoft Entra access token using PowerShell, Azure CLI, REST
#### Load data
-You can load data directly using the POST or PUT method against the FHIR service. To bulk load data, you can use $import operation. For information, visit [import operation](import-data.md).
+You can load data directly by using the POST or PUT method against the FHIR service. To bulk load data, you can use the $import operation. For more information, see [import operation](import-data.md).
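As a sketch of a single-resource load, a POST of one Patient resource with cURL might look like the following, assuming `$token` holds a valid access token and the URL placeholder is replaced with your endpoint:

```bash
# Create one Patient resource; the service assigns the resource ID.
curl -X POST "https://<workspace>-<fhir-service>.fhir.azurehealthcareapis.com/Patient" \
  -H "Authorization: Bearer $token" \
  -H "Content-Type: application/fhir+json" \
  -d '{"resourceType": "Patient", "name": [{"family": "Example", "given": ["Test"]}]}'
```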
### CMS, search, profile validation, and reindex
You can find more details on interoperability and patient access, search, profil
### Export data
-Optionally, you can export ($export) data to [Azure Storage](../data-transformation/export-data.md) and use it in your analytics or machine-learning projects. You can export the data "as-is" or [deid](../data-transformation/de-identified-export.md) in `ndjson` format.
+Optionally, you can export ($export) data to [Azure Storage](../data-transformation/export-data.md) and use it in your analytics or machine-learning projects. You can export the data "as-is" or [deID](../data-transformation/de-identified-export.md) in `ndjson` format.
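A system-level $export call follows the FHIR bulk data pattern. A hedged cURL sketch, again assuming `$token` and a placeholder endpoint, and that an export destination is already configured for the service:

```bash
# Start an asynchronous bulk export; the response's Content-Location header
# points to a status endpoint you can poll.
curl "https://<workspace>-<fhir-service>.fhir.azurehealthcareapis.com/\$export" \
  -H "Authorization: Bearer $token" \
  -H "Accept: application/fhir+json" \
  -H "Prefer: respond-async"
```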
-### Converting data
+### Convert data
-Optionally, you can convert [HL7 v2](convert-data-overview.md) and other format data to FHIR.
+Optionally, you can convert [HL7 v2](convert-data-overview.md) data and other formats to FHIR.
### Using FHIR data in Power BI dashboard
Optionally, you can create Power BI dashboard reports with FHIR data.
## Next steps
-[Deploy a FHIR service within Azure Health Data Services](fhir-portal-quickstart.md)
+[Deploy a FHIR service in Azure Health Data Services](fhir-portal-quickstart.md)
[!INCLUDE [FHIR trademark statement](../includes/healthcare-apis-fhir-trademark.md)]
healthcare-apis Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/overview.md
Title: What is the FHIR service in Azure Health Data Services?
-description: The FHIR service enables rapid exchange of health data through FHIR APIs. Ingest, manage, and persist Protected Health Information (PHI) with a managed cloud service.
+description: Discover the FHIR service in Azure Health Data Services for secure, compliant, and scalable health data exchange and management in the cloud.
# What is the FHIR service?
-The FHIR service in Azure Health Data Services enables rapid exchange of health data using the Fast Healthcare Interoperability Resources (FHIR®) data standard. As part of a managed Platform-as-a-Service (PaaS), the FHIR service makes it easy for anyone working with health data to securely store and exchange Protected Health Information ([PHI](https://www.hhs.gov/answers/hipaa/what-is-phi/index.html)) in the cloud.
+The FHIR&reg; service in Azure Health Data Services enables rapid exchange of health data using the Fast Healthcare Interoperability Resources (FHIR®) data standard. As part of a managed Platform-as-a-Service (PaaS), the FHIR service makes it easy for anyone working with health data to securely store and exchange Protected Health Information ([PHI](https://www.hhs.gov/answers/hipaa/what-is-phi/index.html)) in the cloud.
-The FHIR service offers the following:
+The FHIR service offers:
-- Managed FHIR-compliant server, provisioned in the cloud in minutes
+- Managed FHIR-compliant server, provisioned in the cloud in minutes
- Enterprise-grade FHIR API endpoint for FHIR data access and storage
- High performance, low latency
- Secure management of Protected Health Information (PHI) in a compliant cloud environment
The FHIR service offers the following:
- Controlled access to FHIR data at scale with Microsoft Entra role-based access control (RBAC)
- Audit log tracking for access, creation, and modification events within the FHIR service data store
-The FHIR service allows you to quickly create and deploy a FHIR server to leverage the elastic scale of the cloud for ingesting, persisting, and querying FHIR data. The Azure services that power the FHIR service are designed for high performance no matter how much data you're working with.
+The FHIR service allows you to quickly create and deploy a FHIR server to the cloud for ingesting, persisting, and querying FHIR data. The Azure services that power the FHIR service are designed for high performance no matter how much data you're working with.
-The FHIR API provisioned in the FHIR service enables any FHIR-compliant system to securely connect and interact with FHIR data. As a PaaS offering, Microsoft takes on the operations, maintenance, update, and compliance requirements for the FHIR service so you can free up your own operational and development resources.
+The FHIR API in the FHIR service enables any FHIR-compliant system to securely connect and interact with FHIR data. As a PaaS offering, Microsoft takes on the operations, maintenance, update, and compliance requirements for the FHIR service so you can free up your own operational and development resources.
-## Leveraging the power of your data with FHIR
+## Leverage the power of health data
The healthcare industry is rapidly adopting [FHIR®](https://hl7.org/fhir) as the industry-wide standard for health data storage, querying, and exchange. FHIR provides a robust, extensible data model with standardized semantics that all FHIR-compliant systems can use interchangeably. With FHIR, organizations can unify disparate electronic health record systems (EHRs) and other health data repositories – allowing all data to be persisted and exchanged in a single, universal format. With the addition of SMART on FHIR, user-facing mobile and web-based applications can securely interact with FHIR data – opening a new range of possibilities for patient and provider access to PHI. Most of all, FHIR simplifies the process of assembling large health datasets for research – enabling researchers and clinicians to apply machine learning and analytics at scale for gaining new health insights.
The healthcare industry is rapidly adopting [FHIR®](https://hl7.org/fhir) as th
The FHIR service in Azure Health Data Services makes FHIR data available to clients through a RESTful API. This API is an implementation of the HL7 FHIR API specification. As a managed PaaS offering in Azure, the FHIR service gives organizations a scalable and secure environment for the storage and exchange of Protected Health Information (PHI) in the native FHIR format.
-### Free up your resources to innovate
+### Free up resources to innovate
-You could invest resources building and maintaining your own FHIR server, but with the FHIR service in Azure Health Data Services, Microsoft handles setting up the server's components, ensuring all compliance requirements are met so you can focus on building innovative solutions.
+Although you can build and maintain your own FHIR server, with the FHIR service in Azure Health Data Services, Microsoft handles setting up the server's components and ensures all compliance requirements are met, so you can focus on building innovative solutions.
-### Enable interoperability with FHIR
+### Enable interoperability
-The FHIR service enables connection with any health data system or application capable of sending FHIR API requests. Coupled with other parts of the Azure ecosystem, the FHIR service forms a link between electronic health records systems (EHRs) and Azure's powerful suite of data analytics and machine learning tools – enabling organizations to build patient and provider-facing applications that harness the full power of the Microsoft cloud.
+The FHIR service enables connection with any health data system or application capable of sending FHIR API requests. Along with other parts of the Azure ecosystem, the FHIR service forms a link between electronic health records systems (EHRs) and Azure's suite of data analytics and machine learning tools, enabling organizations to build patient and provider applications that harness the full power of the Microsoft cloud.
-### Control Data Access at Scale
+### Control data access at scale
-With the FHIR service, you control your data – at scale. The FHIR service's Role-Based Access Control (RBAC) is rooted in Microsoft Entra identities management, which means you can grant or deny access to health data based on the roles given to individuals in your organization. These RBAC settings for the FHIR service are configurable in Azure Health Data Services at the workspace level. This simplifies system management and guarantees your organization's PHI is safe within a HIPAA and HITRUST-compliant environment.
+With the FHIR service, you control health data at scale. The FHIR service's role-based access control (RBAC) is based on Microsoft Entra identities management. You can grant or deny access to health data based on the roles given to individuals in your organization. The RBAC settings for the FHIR service are configurable in Azure Health Data Services at the workspace level. Workspaces simplify system management and help ensure your organization's PHI is safe within a HIPAA and HITRUST-compliant environment.
-### Secure your data
+### Secure healthcare data
-As part of the Azure family of services, the FHIR service protects your organization's PHI with an unparalleled level of security. In Azure Health Data Services, your FHIR data is isolated to a unique database per FHIR service instance and protected with multi-region failover. On top of this, FHIR service implements a layered, in-depth defense and advanced threat detection for your data – giving you peace of mind that your organization's PHI is guarded by Azure's industry-leading security.
+Because it belongs to the Azure family of services, the FHIR service protects your organization's PHI with an unparalleled level of security. In Azure Health Data Services, FHIR data is isolated to a unique database per FHIR service instance and protected with multi-region failover. Plus, the FHIR service implements a layered, in-depth defense and advanced threat detection for health data.
-## Applications for the FHIR service
+## Use cases for the FHIR service
-FHIR servers are essential for interoperability of health data. The FHIR service is designed as a managed FHIR server with a RESTful API for connecting to a broad range of client systems and applications. Some of the key use cases for the FHIR service are listed below:
+FHIR servers are essential for interoperability of health data. The FHIR service is designed as a managed FHIR server with a RESTful API for connecting to a broad range of client systems and applications. Some of the key use cases for the FHIR service are:
-- **Startup App Development:** Customers developing a patient- or provider-centric app (mobile or web) can leverage FHIR service as a fully managed backend for health data transactions. The FHIR service enables secure transfer of PHI, and with SMART on FHIR, app developers can take advantage of the robust identities management in Microsoft Entra ID for authorization of FHIR RESTful API actions.
+- **Startup app development:** Customers developing a patient- or provider-centric app (mobile or web) can use the FHIR service as a fully managed backend for health data transactions. The FHIR service enables secure transfer of PHI. With SMART on FHIR, app developers can take advantage of the robust identities management in Microsoft Entra ID for authorization of FHIR RESTful API actions.
-- **Healthcare Ecosystems:** While EHRs exist as the primary 'source of truth' in many clinical settings, it isn't uncommon for providers to have multiple databases that aren't connected to one another (often because the data is stored in different formats). Utilizing the FHIR service as a conversion layer between these systems allows organizations to standardize data in the FHIR format. Ingesting and persisting in FHIR enables health data querying and exchange across multiple disparate systems.
+- **Healthcare ecosystems:** Although EHRs are the primary source of truth in many clinical settings, it's common for providers to have multiple databases that aren't connected to each other (often because the data is stored in different formats). By using the FHIR service as a conversion layer between these systems, organizations can standardize data in the FHIR format. Ingesting and persisting in FHIR format enables health data querying and exchange across multiple disparate systems.
-- **Research:** Health researchers have embraced the FHIR standard as it gives the community a shared data model and removes barriers to assembling large datasets for machine learning and analytics. With the FHIR service's data conversion and PHI de-identification capabilities, researchers can prepare HIPAA-compliant data for secondary use before sending the data to Azure Machine Learning and analytics pipelines. The FHIR service's audit logging and alert mechanisms also play an important role in research workflows.
+- **Research:** Health researchers use the FHIR standard because it gives the community a shared data model and removes barriers to assembling large datasets for machine learning and analytics. With the data conversion and PHI deidentification capabilities in the FHIR service, researchers can prepare HIPAA-compliant data for secondary use before sending the data to Azure Machine Learning and analytics pipelines. The FHIR service's audit logging and alert mechanisms also play an important role in research workflows.
## FHIR platforms from Microsoft

FHIR capabilities from Microsoft are available in three configurations:
-* The **FHIR service** is a managed platform as a service (PaaS) that operates as part of Azure Health Data Services. In addition to the FHIR service, Azure Health Data Services includes managed services for other types of health data such as the DICOM service for medical imaging data and the MedTech service for medical IoT data. All services (FHIR service, DICOM service, and MedTech service) can be connected and administered within an Azure Health Data Services workspace.
-* **Azure API for FHIR** is a managed FHIR server offered as a PaaS in Azure – easily provisioned in the Azure portal. Azure API for FHIR is not part of Azure Health Data Services and lacks some of the features of the FHIR service.
-* **FHIR Server for Azure**, an open-source FHIR server that can be deployed into your Azure subscription, is available on GitHub at https://github.com/Microsoft/fhir-server.
+- The **FHIR service** is a managed platform as a service (PaaS) that operates as part of Azure Health Data Services. In addition to the FHIR service, Azure Health Data Services includes managed services for other types of health data, such as the DICOM service for medical imaging data and the MedTech service for medical IoT data. All services (FHIR service, DICOM service, and MedTech service) can be connected and administered within an Azure Health Data Services workspace.
-For use cases that require customizing a FHIR server with admin access to the underlying services (e.g., access to the database without going through the FHIR API), developers should choose the open-source FHIR Server for Azure. For implementation of a turnkey, production-ready FHIR API with a provisioned database backend (i.e., data can only be accessed through the FHIR API - not the database directly), developers should choose the FHIR service.
+- **Azure API for FHIR** is a managed FHIR server offered as a PaaS in Azure and is easily deployed in the Azure portal. Azure API for FHIR isn't part of Azure Health Data Services and lacks some of the features of the FHIR service.
-## Next Steps
+- **FHIR Server for Azure** is an open-source FHIR server that can be deployed into your Azure subscription. It's available on GitHub at https://github.com/Microsoft/fhir-server.
-To start working with the FHIR service, follow the 5-minute quickstart instructions for FHIR service deployment.
+For use cases that require customizing a FHIR server with admin access to the underlying services (for example, access to the database without going through the FHIR API), developers should choose the open-source FHIR Server for Azure. For implementation of a turnkey, production-ready FHIR API with a provisioned database backend (data can only be accessed through the FHIR API, not the database directly), developers should choose the FHIR service.
->[!div class="nextstepaction"]
->[Deploy FHIR service](fhir-portal-quickstart.md)
+## Next steps
-FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
+[Deploy the FHIR service](fhir-portal-quickstart.md)
+
iot-central Concepts Quotas Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-quotas-limits.md
Title: Azure IoT Central quotas and limits
description: This article lists the key quotas and limits that apply to an IoT Central application including from the underlying DPS and IoT Hub services. Previously updated : 10/26/2023 Last updated : 06/17/2024
There are various quotas and limits that apply to IoT Central applications. IoT
## Data export
-| Item | Quota or limit | Notes |
-| - | -- | -- |
-| Number of data export jobs | 10 | If you need to exceed this limit, contact support to discuss increasing it for your application. |
-| Number of data export destinations | 10 | If you need to exceed this limit, contact support to discuss increasing it for your application. |
-| Number of data export destinations per job | 10 | If you need to exceed this limit, contact support to discuss increasing it for your application. |
-| Number of filters and enrichments per data export job | 10 | If you need to exceed this limit, contact support to discuss increasing it for your application. |
+| Item | Quota or limit |
+| - | -- |
+| Number of data export jobs | 10 |
+| Number of data export destinations | 10 |
+| Number of data export destinations per job | 10 |
+| Number of filters and enrichments per data export job | 10 |
For large volumes of export data, you may experience up to 60 seconds of latency. Typically, the latency is much lower than this.
iot-central Iot Central Customer Data Requests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/iot-central-customer-data-requests.md
Azure IoT Central is a fully managed Internet of Things (IoT) software-as-a-service solution that makes it easy to connect, monitor, and manage your IoT assets at scale, create deep insights from your IoT data, and take informed action.

## Identifying customer data
iot-dps Quick Enroll Device Tpm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/quick-enroll-device-tpm.md
Although these steps work on both Windows and Linux computers, this article uses
## Prerequisites
-* [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+* [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
* Complete the steps in [Set up IoT Hub Device Provisioning Service with the Azure portal](./quick-setup-auto-provision.md).
iot-dps Quick Setup Auto Provision Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/quick-setup-auto-provision-bicep.md
You can use a [Bicep](../azure-resource-manager/bicep/overview.md) file to programmatically set up the Azure cloud resources necessary for provisioning your devices. These steps show how to create an IoT hub and a new IoT Hub Device Provisioning Service instance with a Bicep file. The IoT hub is also linked to the DPS resource using the Bicep file. This linking allows the DPS resource to assign devices to the hub based on allocation policies you configure. This quickstart uses [Azure PowerShell](../azure-resource-manager/bicep/deploy-powershell.md) and the [Azure CLI](../azure-resource-manager/bicep/deploy-cli.md) to perform the programmatic steps necessary to create a resource group and deploy the Bicep file, but you can easily use .NET, Ruby, or other programming languages to perform these steps and deploy your Bicep file.

## Prerequisites

[!INCLUDE [azure-cli-prepare-your-environment-no-header.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment-no-header.md)]

## Review the Bicep file
iot-dps Quick Setup Auto Provision Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/quick-setup-auto-provision-cli.md
The Azure CLI is used to create and manage Azure resources from the command line or in scripts. This quickstart details using the Azure CLI to create an IoT hub and an IoT Hub Device Provisioning Service instance, and to link the two services together.

> [!IMPORTANT]
> Both the IoT hub and the provisioning service you create in this quickstart will be publicly discoverable as DNS endpoints. Make sure to avoid any sensitive information if you decide to change the names used for these resources.
iot-dps Quick Setup Auto Provision Rm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/quick-setup-auto-provision-rm.md
You can use an [Azure Resource Manager](../azure-resource-manager/management/overview.md) template (ARM template) to programmatically set up the Azure cloud resources necessary for provisioning your devices. These steps show how to create an IoT hub and a new IoT Hub Device Provisioning Service with an ARM template. The IoT hub is also linked to the DPS resource using the template. This linking allows the DPS resource to assign devices to the hub based on allocation policies you configure. This quickstart uses the [Azure portal](../azure-resource-manager/templates/deploy-portal.md) and the [Azure CLI](../azure-resource-manager/templates/deploy-cli.md) to perform the programmatic steps necessary to create a resource group and deploy the template. However, you can also use [PowerShell](../azure-resource-manager/templates/deploy-powershell.md), .NET, Ruby, or other programming languages to perform these steps and deploy your template.
If your environment meets the prerequisites, and you're already familiar with us
:::image type="content" source="~/reusable-content/ce-skilling/azure/media/template-deployments/deploy-to-azure-button.svg" alt-text="Button to deploy the Resource Manager template to Azure." border="false" link="https://portal.azure.com/#create/Microsoft.Template/uri/https%3a%2f%2fraw.githubusercontent.com%2fAzure%2fazure-quickstart-templates%2fmaster%2Fquickstarts%2Fmicrosoft.devices%2Fiothub-device-provisioning%2fazuredeploy.json"::: [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
iot-dps Tutorial Custom Allocation Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/tutorial-custom-allocation-policies.md
In this tutorial, you'll do the following:
> * Set up the development environment for the Azure IoT C SDK.
> * Simulate the devices and verify that they are provisioned according to the example code in the custom allocation policy.

## Prerequisites
iot-edge Tutorial Deploy Custom Vision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-deploy-custom-vision.md
In this tutorial, you learn how to:
![Diagram - Tutorial architecture, stage and deploy classifier](./media/tutorial-deploy-custom-vision/custom-vision-architecture.png)

## Prerequisites
iot-edge Tutorial Deploy Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-deploy-function.md
You can use Azure Functions to deploy code that implements your business logic d
The Azure Function that you create in this tutorial filters the temperature data that's generated by your device. The Function only sends messages upstream to Azure IoT Hub when the temperature is above a specified threshold.

## Prerequisites
iot-edge Tutorial Deploy Stream Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-deploy-stream-analytics.md
Azure Stream Analytics provides a richly structured query syntax for data analys
## Prerequisites

* An Azure IoT Edge device.
iot-edge Tutorial Develop For Linux On Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-develop-for-linux-on-windows.md
Cloud resources:
* A free or standard-tier [IoT hub](../iot-hub/iot-hub-create-through-portal.md) in Azure.

## Key concepts
iot-edge Tutorial Develop For Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-develop-for-linux.md
Cloud resources:
* A free or standard-tier [IoT hub](../iot-hub/iot-hub-create-through-portal.md) in Azure.

> [!TIP]
> For guidance on interactive debugging in Visual Studio Code or Visual Studio 2022:
iot-edge Tutorial Store Data Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-store-data-sql-server.md
In this tutorial, you learn how to:
> * Use Visual Studio Code to build modules and deploy them to your IoT Edge device
> * View generated data

## Prerequisites
iot-hub Horizontal Arm Route Messages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/horizontal-arm-route-messages.md
In this quickstart, you use an Azure Resource Manager template (ARM template) to create an IoT hub, an Azure Storage account, and a route to send messages from the IoT hub to storage. The hub is configured so the messages sent to the hub are automatically routed to the storage account if they meet the routing condition. At the end of this quickstart, you can open the storage account and see the messages sent. If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template opens in the Azure portal.
iot-hub Iot Hub Configure File Upload Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-configure-file-upload-powershell.md
This article shows you how to configure file uploads on your IoT hub using Power
To use the [file upload functionality in IoT Hub](iot-hub-devguide-file-upload.md), you must first associate an Azure storage account and blob container with your IoT hub. IoT Hub automatically generates SAS URIs with write permissions to this blob container for devices to use when they upload files. In addition to the storage account and blob container, you can set the time-to-live for the SAS URI and configure settings for the optional file upload notifications that IoT Hub can deliver to backend services.

## Prerequisites
iot-hub Iot Hub Create Use Iot Toolkit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-create-use-iot-toolkit.md
Last updated 01/04/2019
This article shows you how to use the [Azure IoT Hub extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit) to create an Azure IoT hub.

## Prerequisites
iot-hub Iot Hub Create Using Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-create-using-powershell.md
You can use Azure PowerShell cmdlets to create and manage Azure IoT hubs. This tutorial shows you how to create an IoT hub with PowerShell. Alternatively, you can use Azure Cloud Shell, if you'd rather not install additional modules onto your machine. The following section gets you started with Azure Cloud Shell.

## Prerequisites
iot-hub Iot Hub Customer Data Requests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-customer-data-requests.md
The Azure IoT Hub is a REST API-based cloud service targeted at enterprise customers that enables secure, bi-directional communication between millions of devices and a partitioned Azure service. Individual devices are assigned a device identifier (device ID) by a tenant administrator. Device data is based on the assigned device ID. Microsoft maintains no information and has no access to data that would allow device ID to user correlation.
iot-hub Iot Hub Ip Filtering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-ip-filtering.md
Here, `<ipFilterIndexToRemove>` must correspond to the ordering of IP filters in
## Retrieve and update IP filters using Azure PowerShell

Your IoT Hub's IP filters can be retrieved and set through [Azure PowerShell](/powershell/azure/).
iot-hub Iot Hub Rm Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-rm-rest.md
You can use the [IoT Hub Resource](/rest/api/iothub/iothubresource) REST API to create and manage Azure IoT hubs programmatically. This article shows you how to use the IoT Hub Resource to create an IoT hub using **Postman**. Alternatively, you can use **cURL**. If any of these REST commands fail, find help with the [IoT Hub API common error codes](/rest/api/iothub/common-error-codes).

## Prerequisites
iot-hub Quickstart Bicep Route Messages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/quickstart-bicep-route-messages.md
In this quickstart, you use Bicep to create an IoT hub, an Azure Storage account, and a route to send messages from the IoT hub to storage. The hub is configured so the messages sent to the hub are automatically routed to the storage account if they meet the routing condition. At the end of this quickstart, you can open the storage account and see the messages sent.

## Prerequisites
iot-operations Howto Configure Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/manage-mqtt-connectivity/howto-configure-authentication.md
- ignite-2023 Previously updated : 11/15/2023 Last updated : 6/27/2023 #CustomerIntent: As an operator, I want to configure authentication so that I have secure MQTT broker communications.
Each client has the following required properties:
- Password ([PBKDF2 encoded](https://en.wikipedia.org/wiki/PBKDF2))
- [Attributes for authorization](./howto-configure-authorization.md)
-For example, start with a `clients.toml` with identities and PBKDF2 encoded passwords.
+For example, start with a `passwords.toml` with identities and PBKDF2 encoded passwords.
```toml
# Credential #1
floor = "floor2"
site = "site1" ```
-To encode the password using PBKDF2, use the [Azure IoT Operations CLI extension](/cli/azure/iot/ops) that includes the `az iot ops mq get-password-hash` command. It generates a PBKDF2 password hash from a password phrase using the SHA-512 algorithm and a 128-bit randomized salt.
-
-```bash
-az iot ops mq get-password-hash --phrase TestPassword
-```
-
-The output shows the PBKDF2 password hash to copy:
-
-```json
-{
- "hash": "$pbkdf2-sha512$i=210000,l=64$4SnaHtmi7m++00fXNHMTOQ$rPT8BWv7IszPDtpj7gFC40RhhPuP66GJHIpL5G7SYvw+8rFrybyRGDy+PVBYClmdHQGEoy0dvV+ytFTKoYSS4A"
-}
-```
-
-Then, save the file as `passwords.toml` and import it into a Kubernetes secret under that key.
+Then, import it into a Kubernetes secret under that key.
```bash
kubectl create secret generic passwords-db --from-file=passwords.toml -n azure-iot-operations
```
-Include a reference to the secret in the *BrokerAuthentication* custom resource
+Include a reference to the secret in the *BrokerAuthentication* custom resource.
```yaml
spec:
spec:
secretName: passwords-db
```
-It might take a few minutes for the changes to take effect.
+To encode the password using PBKDF2, use the [Azure IoT Operations CLI extension](/cli/azure/iot/ops) that includes the `az iot ops mq get-password-hash` command. It generates a PBKDF2 password hash from a password phrase using the SHA-512 algorithm and a 128-bit randomized salt.
+
+```bash
+az iot ops mq get-password-hash --phrase TestPassword
+```
+
+The output shows the PBKDF2 password hash to copy:
+
+```json
+{
+ "hash": "$pbkdf2-sha512$i=210000,l=64$4SnaHtmi7m++00fXNHMTOQ$rPT8BWv7IszPDtpj7gFC40RhhPuP66GJHIpL5G7SYvw+8rFrybyRGDy+PVBYClmdHQGEoy0dvV+ytFTKoYSS4A"
+}
+```
You can use Azure Key Vault to manage secrets for Azure IoT MQ instead of Kubernetes secrets. To learn more, see [Manage secrets using Azure Key Vault or Kubernetes secrets](../manage-mqtt-connectivity/howto-manage-secrets.md).
key-vault Quick Create Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/quick-create-cli.md
In this quickstart, you create a key vault in Azure Key Vault with the Azure CLI. Azure Key Vault is a cloud service that works as a secure secrets store. You can securely store keys, passwords, certificates, and other secrets. For more information on Key Vault, see the [Overview](../general/overview.md). The Azure CLI is used to create and manage Azure resources by using commands or scripts. After the key vault is created, you store a certificate in it.

[!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
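Condensed to its core, the flow is: create a resource group, create a vault, and add a certificate. A minimal sketch with placeholder names, using the CLI's default certificate policy:

```bash
# Create a resource group and a key vault (names and region are placeholders).
az group create --name myResourceGroup --location eastus
az keyvault create --name <unique-vault-name> --resource-group myResourceGroup

# Create a self-signed certificate by using the default policy.
az keyvault certificate create \
  --vault-name <unique-vault-name> \
  --name ExampleCertificate \
  --policy "$(az keyvault certificate get-default-policy)"
```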
key-vault Quick Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/quick-create-powershell.md
In this quickstart, you create a key vault in Azure Key Vault with Azure PowerSh
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. If you choose to install and use PowerShell locally, this tutorial requires Azure PowerShell module version 1.0.0 or later. Type `$PSVersionTable.PSVersion` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
key-vault Customer Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/customer-data.md
Azure Key Vault receives customer data during creation or update of vaults, mana
System access logs are generated when a user or application accesses Key Vault. Detailed access logs are available to customers using Azure Insights.

## Identifying customer data
key-vault How To Azure Key Vault Network Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/how-to-azure-key-vault-network-security.md
Here's how to configure Key Vault firewalls and virtual networks by using the Az
# [PowerShell](#tab/azure-powershell)

Here's how to configure Key Vault firewalls and virtual networks by using PowerShell:
key-vault Move Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/move-subscription.md
# Moving an Azure Key Vault to another subscription

## Overview
key-vault Quick Create Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/quick-create-cli.md
ms.devlang: azurecli
Azure Key Vault is a cloud service that provides a secure store for [keys](../keys/index.yml), [secrets](../secrets/index.yml), and [certificates](../certificates/index.yml). For more information on Key Vault, see [About Azure Key Vault](overview.md); for more information on what can be stored in a key vault, see [About keys, secrets, and certificates](about-keys-secrets-certificates.md).

[!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
key-vault Quick Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/quick-create-powershell.md
Azure Key Vault is a cloud service that provides a secure store for [keys](../ke
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. In this quickstart, you create a key vault with [Azure PowerShell](/powershell/azure/). If you choose to install and use PowerShell locally, this tutorial requires Azure PowerShell module version 1.0.0 or later. Type `$PSVersionTable.PSVersion` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
key-vault Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/service-limits.md
# Azure Key Vault service limits
key-vault Vault Create Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/vault-create-template.md
[Azure Key Vault](../general/overview.md) is a cloud service that provides a secure store for secrets like keys, passwords, and certificates. This article describes the process for deploying an Azure Resource Manager template (ARM template) to create a key vault.

## Prerequisites
key-vault Hsm Protected Keys Ncipher https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/hsm-protected-keys-ncipher.md
> [!WARNING]
> The HSM-key import method described in this document is **deprecated** and will not be supported after June 30, 2021. It only works with nCipher nShield family of HSMs with firmware 12.40.2 or newer. Using [new method to import HSM-keys](hsm-protected-keys-byok.md) is strongly recommended.

For added assurance, when you use Azure Key Vault, you can import or generate keys in hardware security modules (HSMs) that never leave the HSM boundary. This scenario is often referred to as *bring your own key*, or BYOK. Azure Key Vault uses nCipher nShield family of HSMs (FIPS 140-2 Level 2 validated) to protect your keys.
key-vault Javascript Developer Guide Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/javascript-developer-guide-get-started.md
Before programmatically authenticating to Azure to use Azure Key Vault keys, mak
#### [Developer authentication](#tab/developer-auth)

#### [Production authentication](#tab/production-auth)
key-vault Quick Create Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/quick-create-bicep.md
Last updated 01/30/2024
[Azure Key Vault](../general/overview.md) is a cloud service that provides a secure store for secrets, such as keys, passwords, and certificates. This quickstart focuses on the process of deploying a Bicep file to create a key vault and a key.

## Prerequisites
key-vault Quick Create Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/quick-create-cli.md
In this quickstart, you create a key vault in Azure Key Vault with the Azure CLI. Azure Key Vault is a cloud service that works as a secure secrets store. You can securely store keys, passwords, certificates, and other secrets. For more information on Key Vault, review the [Overview](../general/overview.md). The Azure CLI is used to create and manage Azure resources by using commands or scripts. After the key vault is created, you store a key in it.

[!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
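Once the vault exists, the key-specific step is a single command. A sketch with placeholder names:

```bash
# Create a software-protected RSA key in an existing vault.
az keyvault key create \
  --vault-name <unique-vault-name> \
  --name ExampleKey \
  --protection software
```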
key-vault Quick Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/quick-create-powershell.md
In this quickstart, you create a key vault in Azure Key Vault with Azure PowerSh
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. If you choose to install and use PowerShell locally, this tutorial requires Azure PowerShell module version 1.0.0 or later. Type `$PSVersionTable.PSVersion` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
In this quickstart, you created a Key Vault and stored a certificate in it. To l
- Read an [Overview of Azure Key Vault](../general/overview.md)
- See the reference for the [Azure PowerShell Key Vault cmdlets](/powershell/module/az.keyvault/)
- Review the [Key Vault security overview](../general/security-features.md)
-.md)
key-vault Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/backup-restore.md
You must provide the following information to execute a full backup:
- Storage account blob storage container
- User assigned managed identity OR storage container SAS token with permissions 'crdw'

#### Prerequisites if backing up and restoring using user assigned managed identity:
key-vault Hsm Protected Keys Byok https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/hsm-protected-keys-byok.md
To use the Azure CLI commands in this article, you must have the following items
* The Azure CLI version 2.12.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install the Azure CLI](/cli/azure/install-azure-cli).
* A managed HSM from the [supported HSMs list](#supported-hsms) in your subscription. See [Quickstart: Provision and activate a managed HSM using Azure CLI](quick-create-cli.md) to provision and activate a managed HSM.

To sign in to Azure using the CLI, type:
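```bash
# Standard Azure CLI sign-in; a browser or device-code prompt follows.
az login
```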
key-vault Key Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/key-management.md
To complete the steps in this article, you must have the following items:
* The Azure CLI version 2.25.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install the Azure CLI](/cli/azure/install-azure-cli).
* A managed HSM in your subscription. See [Quickstart: Provision and activate a managed HSM using Azure CLI](quick-create-cli.md) to provision and activate a managed HSM.

## Sign in to Azure
key-vault Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/logging.md
To complete the steps in this article, you must have the following items:
* The Azure CLI version 2.25.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install the Azure CLI](/cli/azure/install-azure-cli).
* A managed HSM in your subscription. See [Quickstart: Provision and activate a managed HSM using Azure CLI](quick-create-cli.md) to provision and activate a managed HSM.

## Connect to your Azure subscription
key-vault Quick Create Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/quick-create-cli.md
To complete the steps in this article, you must have:
* A subscription to Microsoft Azure. If you do not have one, you can sign up for a [free trial](https://azure.microsoft.com/pricing/free-trial).
* The Azure CLI version 2.25.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install the Azure CLI](/cli/azure/install-azure-cli).

## Sign in to Azure
key-vault Quick Create Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/quick-create-template.md
This quickstart describes how to use an Azure Resource Manager template (ARM template) to create an Azure Key Vault managed HSM. Managed HSM is a fully managed, highly available, single-tenant, standards-compliant cloud service that enables you to safeguard cryptographic keys for your cloud applications, using **FIPS 140-2 Level 3** validated HSMs.

If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
key-vault Role Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/role-management.md
To use the Azure CLI commands in this article, you must have the following items
* The Azure CLI version 2.25.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install the Azure CLI](/cli/azure/install-azure-cli).
* A managed HSM in your subscription. See [Quickstart: Provision and activate a managed HSM using Azure CLI](quick-create-cli.md) to provision and activate a managed HSM.

## Sign in to Azure
key-vault Secure Your Managed Hsm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/secure-your-managed-hsm.md
To complete the steps in this article, you must have the following items:
* The Azure CLI version 2.25.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install the Azure CLI](/cli/azure/install-azure-cli).
* A managed HSM in your subscription. See [Quickstart: Provision and activate a managed HSM using Azure CLI](quick-create-cli.md) to provision and activate a managed HSM.

## Sign in to Azure
key-vault Javascript Developer Guide Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/javascript-developer-guide-get-started.md
Before programmatically authenticating to Azure to use Azure Key Vault secrets,
#### [Developer authentication](#tab/developer-auth)

#### [Production authentication](#tab/production-auth)
key-vault Overview Storage Keys Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/overview-storage-keys-powershell.md
When you use the managed storage account key feature, consider the following poi
> [!IMPORTANT]
> Regenerating the key directly in the storage account breaks the managed storage account setup, and can invalidate SAS tokens in use and cause an outage.

## Service principal application ID
key-vault Quick Create Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/quick-create-bicep.md
[Azure Key Vault](../general/overview.md) is a cloud service that provides a secure store for secrets, such as keys, passwords, certificates, and other secrets. This quickstart focuses on the process of deploying a Bicep file to create a key vault and a secret.

## Prerequisites
key-vault Quick Create Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/quick-create-cli.md
In this quickstart, you create a key vault in Azure Key Vault with the Azure CLI. Azure Key Vault is a cloud service that works as a secure secrets store. You can securely store keys, passwords, certificates, and other secrets. For more information on Key Vault, see the [Overview](../general/overview.md). The Azure CLI is used to create and manage Azure resources by using commands or scripts. After the key vault is created, you store a secret in it.

[!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
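Once the vault exists, storing and reading back a secret each take one command. A sketch with placeholder names and value:

```bash
# Store a secret, then read its value back.
az keyvault secret set --vault-name <unique-vault-name> --name ExampleSecret --value "<secret-value>"
az keyvault secret show --vault-name <unique-vault-name> --name ExampleSecret --query value
```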
key-vault Quick Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/quick-create-powershell.md
Azure Key Vault is a cloud service that works as a secure secrets store. You can
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. If you choose to install and use PowerShell locally, this tutorial requires Azure PowerShell module version 5.0.0 or later. Type `Get-Module az -ListAvailable` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
key-vault Quick Create Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/quick-create-template.md
[Azure Key Vault](../general/overview.md) is a cloud service that provides a secure store for secrets, such as keys, passwords, certificates, and other secrets. This quickstart focuses on the process of deploying an Azure Resource Manager template (ARM template) to create a key vault and a secret. If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
kubernetes-fleet L4 Load Balancing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kubernetes-fleet/l4-load-balancing.md
You can follow this document to set up layer 4 load balancing for such multi-clu
## Prerequisites

* Read the [conceptual overview of this feature](./concepts-l4-load-balancing.md), which provides an explanation of `ServiceExport` and `MultiClusterService` objects referenced in this document.
kubernetes-fleet Quickstart Access Fleet Kubernetes Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kubernetes-fleet/quickstart-access-fleet-kubernetes-api.md
If your Azure Kubernetes Fleet Manager resource was created with the hub cluster
## Prerequisites

* You need a Fleet resource with a hub cluster and member clusters. If you don't have one, see [Create an Azure Kubernetes Fleet Manager resource and join member clusters using Azure CLI](quickstart-create-fleet-and-members.md).
* The identity (user or service principal) you're using needs the Microsoft.ContainerService/fleets/listCredentials/action permission on the Fleet resource.
kubernetes-fleet Quickstart Create Fleet And Members Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kubernetes-fleet/quickstart-create-fleet-and-members-portal.md
Get started with Azure Kubernetes Fleet Manager (Fleet) by using the Azure porta
## Prerequisites

* Read the [conceptual overview of this feature](./concepts-fleet.md), which provides an explanation of fleets and member clusters referenced in this document.
* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
kubernetes-fleet Quickstart Create Fleet And Members https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kubernetes-fleet/quickstart-create-fleet-and-members.md
Get started with Azure Kubernetes Fleet Manager (Fleet) by using the Azure CLI t
## Prerequisites

* Read the [conceptual overview of this feature](./concepts-fleet.md), which provides an explanation of fleets and member clusters referenced in this document.
* Read the [conceptual overview of fleet types](./concepts-choosing-fleet.md), which provides a comparison of different fleet configuration options.
kubernetes-fleet Quickstart Resource Propagation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kubernetes-fleet/quickstart-resource-propagation.md
In this quickstart, you learn how to propagate resources from an Azure Kubernete
## Prerequisites

* Read the [resource propagation conceptual overview](./concepts-resource-propagation.md) to understand the concepts and terminology used in this quickstart.
* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
kubernetes-fleet Upgrade Hub Cluster Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kubernetes-fleet/upgrade-hub-cluster-type.md
For more information, see [Choosing an Azure Kubernetes Fleet Manager option][co
## Prerequisites and limitations

- [Install or upgrade Azure CLI](/cli/azure/install-azure-cli) to the latest version.
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
- You must have an existing Kubernetes Fleet resource without a hub cluster. The steps in this article show you how to create a Kubernetes Fleet resource without a hub cluster. If you already have one, you can skip the initial setup and begin at [Upgrade hub cluster type for the Kubernetes Fleet resource](#upgrade-hub-cluster-type-for-the-kubernetes-fleet-resource).
kubernetes-fleet Use Taints Tolerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kubernetes-fleet/use-taints-tolerations.md
Taints and tolerations work together to ensure member clusters only receive spec
## Prerequisites
-* [!INCLUDE [free trial note](../../includes/quickstarts-free-trial-note.md)]
+* [!INCLUDE [free trial note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
* Read the conceptual overviews for [taints](./concepts-fleet.md#taints) and [tolerations](./concepts-resource-propagation.md#tolerations).
* You must have a Fleet resource with a hub cluster and member clusters. If you don't have this resource, follow [Quickstart: Create a Fleet resource and join member clusters](quickstart-create-fleet-and-members.md).
* You must gain access to the Kubernetes API of the hub cluster by following the steps in [Access the Kubernetes API of the Fleet resource](./quickstart-access-fleet-kubernetes-api.md).
lab-services How To Create Lab Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-create-lab-bicep.md
In this article, you learn how to create a lab using a Bicep file. For a detailed overview of Azure Lab Services, see [An introduction to Azure Lab Services](lab-services-overview.md).

## Prerequisites
lab-services How To Create Lab Plan Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-create-lab-plan-bicep.md
In this article, you learn how to create a lab plan using a Bicep file or Azure Resource Manager (ARM) template. For a detailed overview of Azure Lab Services, see [An introduction to Azure Lab Services](lab-services-overview.md). ## Prerequisites
lab-services Tutorial Create Lab With Advanced Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/tutorial-create-lab-with-advanced-networking.md
In this tutorial, you learn how to:
## Create a resource group The following steps show how to use the Azure portal to [create a resource group](../azure-resource-manager/management/manage-resource-groups-portal.md). For simplicity, you create all resources for this tutorial in the same resource group.
load-balancer Backend Pool Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/backend-pool-management.md
Previously updated : 02/03/2023 Last updated : 06/26/2024 # Backend pool management
-The backend pool is a critical component of the load balancer. The backend pool defines the group of resources that will serve traffic for a given load-balancing rule.
+The backend pool is a critical component of the load balancer. The backend pool defines the group of resources that serve traffic for a given load-balancing rule.
There are two ways of configuring a backend pool:
* IP address
-To preallocate a backend pool with an IP address range that later will contain virtual machines and Virtual Machine Scale Sets, configure the pool by IP address and virtual network ID.
+To preallocate a backend pool with an IP address range that will contain virtual machines and Virtual Machine Scale Sets, configure the pool by IP address and virtual network ID.
This article focuses on configuration of backend pools by IP addresses. ## Configure backend pool by IP address and virtual network
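As a hedged sketch of what an IP-based pool configuration can look like from the CLI (the pool, virtual network, and address names below are placeholders, not values from the article):

```azurecli
az network lb address-pool create \
  --resource-group <resource-group> \
  --lb-name myLoadBalancer \
  --name myIPBackendPool \
  --vnet myVNet \
  --backend-address name=vm1 ip-address=10.0.0.4 \
  --backend-address name=vm2 ip-address=10.0.0.5
```

Because the pool is keyed on IP addresses within the virtual network, the addresses can be preallocated before the virtual machines that use them exist.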
az vm create \
* The backend resources must be in the same virtual network as the load balancer for IP based LBs * A load balancer with IP based Backend Pool can't function as a Private Link service * [Private endpoint resources](../private-link/private-endpoint-overview.md) can't be placed in an IP based backend pool
- * ACI containers aren't currently supported by IP based LBs
+ * IP-based load balancers don't support ACI containers
* Load balancers or services such as Application Gateway can't be placed in the backend pool of the load balancer * Inbound NAT Rules can't be specified by IP address * You can configure IP based and NIC based backend pools for the same load balancer. You can't create a single backend pool that mixes backend addresses targeted by NIC and IP addresses within the same pool.
load-balancer Quickstart Basic Internal Load Balancer Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/basic/quickstart-basic-internal-load-balancer-cli.md
Get started with Azure Load Balancer by using the Azure CLI to create an internal load balancer and two virtual machines. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
Create a virtual network by using [az network vnet create](/cli/azure/network/vn
In this example, you'll create an Azure Bastion host. The Azure Bastion host is used later in this article to securely manage the virtual machines and test the load balancer deployment. > [!IMPORTANT]-
-> [!INCLUDE [Pricing](../../../includes/bastion-pricing.md)]
-
+>
+> [!INCLUDE [Pricing](~/reusable-content/ce-skilling/azure/includes/bastion-pricing.md)]
> ### Create a bastion public IP address
Create the virtual machines with [az vm create](/cli/azure/vm#az-vm-create).
It can take a few minutes for the VMs to deploy. ## Add virtual machines to the backend pool
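As a rough sketch of that backend-pool step, the usual CLI pattern places each VM's NIC IP configuration into the pool (the NIC, pool, and load balancer names here are placeholders, not values from this quickstart):

```azurecli
az network nic ip-config address-pool add \
  --resource-group <resource-group> \
  --nic-name myNicVM1 \
  --ip-config-name ipconfig1 \
  --lb-name myLoadBalancer \
  --address-pool myBackEndPool
```

Run the same command once per VM NIC that should receive load-balanced traffic.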
load-balancer Quickstart Basic Internal Load Balancer Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/basic/quickstart-basic-internal-load-balancer-portal.md
An Azure Bastion host is created to securely manage the virtual machines and ins
> [!IMPORTANT]
-> [!INCLUDE [Pricing](../../../includes/bastion-pricing.md)]
+> [!INCLUDE [Pricing](~/reusable-content/ce-skilling/azure/includes/bastion-pricing.md)]
>
These VMs are added to the backend pool of the load balancer that was created ea
| Availability set | Select the existing **myAvailabilitySet** |
| Network security group | Select the existing **myNSG** |

## Create test virtual machine
load-balancer Quickstart Basic Internal Load Balancer Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/basic/quickstart-basic-internal-load-balancer-powershell.md
Create an Azure Bastion host to securely manage the virtual machines in the back
> [!IMPORTANT]
-> [!INCLUDE [Pricing](../../../includes/bastion-pricing.md)]
+> [!INCLUDE [Pricing](~/reusable-content/ce-skilling/azure/includes/bastion-pricing.md)]
>
Id Name PSJobTypeName State HasMoreData Location
3 Long Running O… AzureLongRunni… Completed True localhost New-AzVM ``` ## Create the test virtual machine
load-balancer Quickstart Basic Public Load Balancer Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/basic/quickstart-basic-public-load-balancer-cli.md
Get started with Azure Load Balancer by using the Azure portal to create a basic public load balancer and two virtual machines. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
In this section, you'll create the resources for Azure Bastion. Azure Bastion is
> [!IMPORTANT]
-> [!INCLUDE [Pricing](../../../includes/bastion-pricing.md)]
+> [!INCLUDE [Pricing](~/reusable-content/ce-skilling/azure/includes/bastion-pricing.md)]
>
Create the virtual machines with [az vm create](/cli/azure/vm#az-vm-create):
It may take a few minutes for the VMs to deploy. You can continue to the next steps while the VMs are creating. ### Add virtual machines to load balancer backend pool
load-balancer Quickstart Basic Public Load Balancer Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/basic/quickstart-basic-public-load-balancer-portal.md
In this section, you'll create a virtual network and subnet.
> [!IMPORTANT]
-> [!INCLUDE [Pricing](../../../includes/bastion-pricing.md)]
+> [!INCLUDE [Pricing](~/reusable-content/ce-skilling/azure/includes/bastion-pricing.md)]
>
The two VMs will be added to an availability set named **myAvailabilitySet**.
| Availability set | Select **myAvailabilitySet** |
| Network security group | Select the existing **myNSG** |

## Install IIS
load-balancer Quickstart Basic Public Load Balancer Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/basic/quickstart-basic-public-load-balancer-powershell.md
Create an Azure Bastion host to securely manage the virtual machines in the back
> [!IMPORTANT]
-> [!INCLUDE [Pricing](../../../includes/bastion-pricing.md)]
+> [!INCLUDE [Pricing](~/reusable-content/ce-skilling/azure/includes/bastion-pricing.md)]
>
Id Name PSJobTypeName State HasMoreData Location
Ensure the **State** of the VM creation is **Completed** before moving on to the next steps. ## Install IIS
load-balancer Virtual Network Ipv4 Ipv6 Dual Stack Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/basic/virtual-network-ipv4-ipv6-dual-stack-cli.md
This article shows you how to deploy a dual stack (IPv4 + IPv6) application with
To deploy a dual stack (IPV4 + IPv6) application using Standard Load Balancer, see [Deploy an IPv6 dual stack application with Standard Load Balancer using Azure CLI](../virtual-network-ipv4-ipv6-dual-stack-standard-load-balancer-cli.md). [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
load-balancer Virtual Network Ipv4 Ipv6 Dual Stack Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/basic/virtual-network-ipv4-ipv6-dual-stack-powershell.md
This article shows you how to deploy a dual stack (IPv4 + IPv6) application with
To deploy a dual stack (IPV4 + IPv6) application using Standard Load Balancer, see [Deploy an IPv6 dual stack application with Standard Load Balancer using Azure PowerShell](../virtual-network-ipv4-ipv6-dual-stack-standard-load-balancer-powershell.md). If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 6.9.0 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
load-balancer Components https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/components.md
The nature of the IP address determines the **type** of load balancer created. P
| **Description** | A public load balancer maps the public IP and port of incoming traffic to the private IP and port of the VM. Load balancer maps traffic the other way around for the response traffic from the VM. You can distribute specific types of traffic across multiple VMs or services by applying load-balancing rules. For example, you can spread the load of web request traffic across multiple web servers.| An internal load balancer distributes traffic to resources that are inside a virtual network. Azure restricts access to the frontend IP addresses of a virtual network that are load balanced. Frontend IP addresses and virtual networks are never directly exposed to an internet endpoint, meaning an internal load balancer can't accept incoming traffic from the internet. Internal line-of-business applications run in Azure and are accessed from within Azure or from on-premises resources. | | **SKUs supported** | Basic, Standard | Basic, Standard |
-![Tiered load balancer example](./media/load-balancer-overview/load-balancer.png)
Load balancer can have multiple frontend IPs. Learn more about [multiple frontends](load-balancer-multivip-overview.md).
load-balancer Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/concepts.md
Previously updated : 05/08/2023 Last updated : 06/26/2024
load-balancer Configure Vm Scale Set Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/configure-vm-scale-set-portal.md
az vmss create \
- An existing standard sku load balancer in the subscription where the Virtual Machine Scale Set will be deployed. - An Azure Virtual Network for the Virtual Machine Scale Set. ## Sign in to Azure CLI
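A minimal sketch of attaching a scale set to an existing load balancer at creation time (all names are placeholders; the article's own command may differ):

```azurecli
az vmss create \
  --resource-group <resource-group> \
  --name myScaleSet \
  --image Ubuntu2204 \
  --vnet-name myVNet \
  --subnet mySubnet \
  --lb myLoadBalancer \
  --backend-pool-name myBackendPool \
  --admin-username azureuser \
  --generate-ssh-keys
```

Passing `--lb` with an existing load balancer name wires the scale set instances into the named backend pool instead of creating a new load balancer.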
load-balancer Create Custom Http Health Probe Howto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/create-custom-http-health-probe-howto.md
In this article, you learn to create a custom API for HTTP [health probes](load-
- Remote access to the virtual machine via SSH or Azure Bastion. > [!IMPORTANT]
-> [!INCLUDE [Pricing](../../includes/bastion-pricing.md)]
+> [!INCLUDE [Pricing](~/reusable-content/ce-skilling/azure/includes/bastion-pricing.md)]
> ## Configure API on virtual machine
load-balancer Cross Region Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/cross-region-overview.md
Previously updated : 06/23/2023 Last updated : 06/26/2024
The frontend IP configuration of your cross-region load balancer is static and a
> [!NOTE] > The backend port of your load balancing rule on cross-region load balancer should match the frontend port of the load balancing rule/inbound nat rule on regional standard load balancer. + ### Regional redundancy Configure regional redundancy by seamlessly linking a cross-region load balancer to your existing regional load balancers.
It's important to note that floating IP configured on the Azure cross-region Loa
### Health Probes
-Azure cross-region Load Balancer utilizes the health of the backend regional load balancers when deciding where to distribute traffic to. Health checks by cross-region load balancer are done automatically every 5 seconds, given that a user has set up health probes on their regional load balancer.
+Azure cross-region Load Balancer utilizes the health of the backend regional load balancers when deciding where to distribute traffic. Health checks by the cross-region load balancer are done automatically every 5 seconds, provided health probes are set up on the regional load balancers.
## Build cross region solution on existing Azure Load Balancer
Cross-region load balancer routes the traffic to the appropriate regional load b
:::image type="content" source="./media/cross-region-overview/multiple-region-global-traffic.png" alt-text="Diagram of multiple region global traffic."::: #### Participating regions in Azure+ * Australia East * Australia Southeast * Central India
load-balancer Distribution Mode Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/distribution-mode-concepts.md
Previously updated : 05/30/2023 Last updated : 06/26/2024 #Customer intent: As an administrator, I want to learn about the different distribution modes of Azure Load Balancer so that I can configure the distribution mode for my application.
The five-tuple consists of:
* **Protocol type** The hash is used to route traffic to healthy backend instances within the backend pool. The algorithm provides stickiness only within a transport session. When the client starts a new session from the same source IP, the source port changes and causes the traffic to go to a different backend instance.
-In order to configure hash based distribution, you must select session persistence to be **None** in the Azure portal. This specifies that successive requests from the same client may be handled by any virtual machine.
+In order to configure hash based distribution, you must select session persistence to be **None** in the Azure portal. This specifies that successive requests from the same client can be handled by any virtual machine.
![Hash-based distribution](./media/load-balancer-overview/load-balancer-distribution.png)
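For orientation, the non-portal equivalent of selecting session persistence **None** (the five-tuple hash) can be sketched via CLI; the rule and resource names are placeholders:

```azurecli
az network lb rule update \
  --resource-group <resource-group> \
  --lb-name myLoadBalancer \
  --name myHTTPRule \
  --load-distribution Default
```

`Default` corresponds to five-tuple hash distribution; `SourceIP` and `SourceIPProtocol` select the two-tuple and three-tuple sticky modes.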
load-balancer Gateway Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/gateway-overview.md
description: Overview of gateway load balancer SKU for Azure Load Balancer.
Previously updated : 04/20/2023 Last updated : 06/26/2024
load-balancer Gateway Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/gateway-partners.md
description: Learn about partners offering their network appliances for use with
Previously updated : 05/22/2023 Last updated : 06/26/2024
load-balancer Inbound Nat Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/inbound-nat-rules.md
description: Overview of what is inbound NAT rule, why to use inbound NAT rule,
Previously updated : 05/03/2023 Last updated : 06/26/2024 #Customer intent: As an administrator, I want to create an inbound NAT rule so that I can forward a port to a virtual machine in the backend pool of an Azure Load Balancer.
An inbound NAT rule is used to forward traffic from a load balancer frontend to
## Why use an inbound NAT rule?
-An inbound NAT rule is used for port forwarding. Port forwarding lets you connect to virtual machines by using the load balancer frontend IP address and port number. The load balancer receives the traffic on a port, and based on the inbound NAT rule, forwards the traffic to a designated virtual machine on a specific backend port. Note, unlike load balancing rules, inbound NAT rules do not need a health probe attached to it.
+An inbound NAT rule is used for port forwarding. Port forwarding lets you connect to virtual machines by using the load balancer frontend IP address and port number. The load balancer receives the traffic on a port, and based on the inbound NAT rule, forwards the traffic to a designated virtual machine on a specific backend port. Note, unlike load balancing rules, inbound NAT rules don't need a health probe attached to them.
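A minimal sketch of such a rule, forwarding frontend port 4222 to backend port 22 on a single VM (all names are placeholders):

```azurecli
az network lb inbound-nat-rule create \
  --resource-group <resource-group> \
  --lb-name myLoadBalancer \
  --name myNATRuleSSH \
  --protocol Tcp \
  --frontend-port 4222 \
  --backend-port 22 \
  --frontend-ip-name myFrontendIP
```

Note there's no probe argument: as the paragraph above says, inbound NAT rules forward without a health probe.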
## Types of inbound NAT rules
load-balancer Instance Metadata Service Load Balancer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/instance-metadata-service-load-balancer.md
Previously updated : 05/04/2023 Last updated : 06/26/2024
load-balancer Ipv6 Add To Existing Vnet Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/ipv6-add-to-existing-vnet-powershell.md
This article shows you how to add IPv6 connectivity to an existing IPv
- VMs with NICs that have both an IPv4 + IPv6 configuration - IPv6 Public IP so the load balancer has Internet-facing IPv6 connectivity If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 6.9.0 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
load-balancer Ipv6 Dual Stack Standard Internal Load Balancer Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/ipv6-dual-stack-standard-internal-load-balancer-powershell.md
Title: Deploy an IPv6 dual stack application using Standard Internal Load Balancer in Azure - PowerShell- description: This article shows how to deploy an IPv6 dual stack application with Standard Internal Load Balancer in Azure virtual network using Azure PowerShell. - Previously updated : 06/27/2023 Last updated : 06/27/2024
The changes that make the above an internal load balancer frontend configuration
- The `-PublicIpAddress` argument has been either omitted or replaced with `-PrivateIpAddress`. Note that the private address must be in the range of the Subnet IP space in which the internal load balancer will be deployed. If a static `-PrivateIpAddress` is omitted, the next free IPv6 address will be selected from the subnet in which the internal load Balancer is deployed. - The dual stack subnet in which the internal load balancer will be deployed is specified with either a `-Subnet` or `-SubnetId` argument. If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 6.9.0 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
load-balancer Load Balancer Common Deployment Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-common-deployment-errors.md
Previously updated : 04/20/2023 Last updated : 06/26/2024
load-balancer Load Balancer Distribution Mode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-distribution-mode.md
The following options are available:
# [**PowerShell**](#tab/azure-powershell) Use PowerShell to change the load-balancer distribution settings on an existing load-balancing rule. The following command updates the distribution mode:
load-balancer Load Balancer Floating Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-floating-ip.md
In order to function, you configure the Guest OS for the virtual machine to rece
* configuring the host firewall to allow traffic on the frontend IP port. > [!NOTE]
-> The examples below all use IPv4; to use IPv6, substitute "ipv6" for "ipv4". Also note that Floating IP for IPv6 does not work for Internal Load Balancers.
+> The examples below all use IPv4; to use IPv6, substitute "ipv6" for "ipv4".
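Before the OS-specific steps, here's a rough Linux illustration of the loopback and host-firewall configuration described above; the frontend IP (203.0.113.10) and port (80) are placeholders, and making the change persist across reboots needs your distro's network configuration:

```bash
# Bind the load balancer frontend IP to the loopback interface (non-persistent)
sudo ip addr add 203.0.113.10/32 dev lo

# Allow inbound traffic destined for the frontend IP and port through the host firewall
sudo iptables -A INPUT -d 203.0.113.10 -p tcp --dport 80 -j ACCEPT
```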
### Windows Server
load-balancer Load Balancer Ha Ports Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-ha-ports-overview.md
description: Learn about high availability ports load balancing on an internal l
Previously updated : 05/03/2023 Last updated : 06/26/2024
load-balancer Load Balancer Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-insights.md
Previously updated : 05/08/2023 Last updated : 06/26/2024
load-balancer Load Balancer Ipv6 For Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-ipv6-for-linux.md
keywords: ipv6, azure load balancer, dual stack, public ip, native ipv6, mobile, iot Previously updated : 04/21/2023 Last updated : 06/21/2024 # Configure DHCPv6 for Linux VMs
-> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
- Some of the Linux virtual-machine images in the Azure Marketplace don't have Dynamic Host Configuration Protocol version 6 (DHCPv6) configured by default. To support IPv6, DHCPv6 must be configured in the Linux OS distribution that you're using. The various Linux distributions configure DHCPv6 in various ways because they use different packages. > [!NOTE]
This document describes how to enable DHCPv6 so that your Linux virtual machine
> [!WARNING] > By improperly editing network configuration files, you can lose network access to your VM. We recommended that you test your configuration changes on non-production systems. The instructions in this article have been tested on the latest versions of the Linux images in the Azure Marketplace. For more detailed instructions, consult the documentation for your own version of Linux.
-# [RHEL/CentOS/Oracle](#tab/redhat)
+# [RHEL/Oracle](#tab/redhat)
-For RHEL, CentOS, and Oracle Linux versions 7.4 or higher, follow these steps:
+For RHEL and Oracle Linux versions 7.4 or higher, follow these steps:
1. Edit the */etc/sysconfig/network* file, and add the following parameter:
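The digest truncates the parameter itself; on RHEL-family 7.x systems the DHCPv6 settings are commonly the following (treat these as assumptions and verify against the full article):

```bash
# /etc/sysconfig/network — enable IPv6 networking (assumed parameter)
echo 'NETWORKING_IPV6=yes' | sudo tee -a /etc/sysconfig/network

# /etc/sysconfig/network-scripts/ifcfg-eth0 — request an address via DHCPv6 (assumed parameter)
echo 'DHCPV6C=yes' | sudo tee -a /etc/sysconfig/network-scripts/ifcfg-eth0

# Restart networking to pick up the change
sudo systemctl restart network
```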
load-balancer Load Balancer Ipv6 Internet Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-ipv6-internet-cli.md
keywords: ipv6, azure load balancer, dual stack, public ip, native ipv6, mobile, iot Previously updated : 05/30/2023 Last updated : 06/26/2024
load-balancer Load Balancer Ipv6 Internet Ps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-ipv6-internet-ps.md
keywords: ipv6, azure load balancer, dual stack, public ip, native ipv6, mobile, iot Previously updated : 05/30/2023 Last updated : 06/26/2024
See [IPv6 for Azure VNET PowerShell Deployment](./virtual-network-ipv4-ipv6-dual
An Azure load balancer is a Layer-4 (TCP, UDP) load balancer. The load balancer provides high availability by distributing incoming traffic among healthy service instances in cloud services or virtual machines in a load balancer set. Azure Load Balancer can also present those services on multiple ports, multiple IP addresses, or both. ## Example deployment scenario
load-balancer Load Balancer Ipv6 Internet Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-ipv6-internet-template.md
keywords: ipv6, azure load balancer, dual stack, public ip, native ipv6, mobile, iot Previously updated : 05/03/2023 Last updated : 06/26/2024
load-balancer Load Balancer Ipv6 Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-ipv6-overview.md
keywords: ipv6, azure load balancer, dual stack, public ip, native ipv6, mobile, iot Previously updated : 05/03/2023 Last updated : 06/26/2024
The following picture illustrates the IPv6 functionality for Azure Load Balancer
![Azure Load Balancer with IPv6](./media/load-balancer-ipv6-overview/load-balancer-ipv6.png)
-Once deployed, an IPv4 or IPv6-enabled Internet client can communicate with the public IPv4 or IPv6 addresses (or hostnames) of the Azure Internet-facing Load Balancer. The load balancer routes the IPv6 packets to the private IPv6 addresses of the VMs using network address translation (NAT). The IPv6 Internet client cannot communicate directly with the IPv6 address of the VMs.
+Once deployed, an IPv4 or IPv6-enabled Internet client can communicate with the public IPv4 or IPv6 addresses (or hostnames) of the Azure Internet-facing Load Balancer. The load balancer routes the IPv6 packets to the private IPv6 addresses of the VMs using network address translation (NAT). The IPv6 Internet client can't communicate directly with the IPv6 address of the VMs.
## Features
Details
Limitations
-* You cannot add IPv6 load balancing rules in the Azure portal. The rules can only be created through the template, CLI, PowerShell.
+* You can't add IPv6 load balancing rules in the Azure portal. The rules can only be created through the template, CLI, PowerShell.
* A single IPv6 address can be assigned to a single network interface in each VM.
-* You cannot configure the reverse DNS lookup for your public IPv6 addresses.
-* The VMs with the IPv6 addresses cannot be members of an Azure Cloud Service. They can be connected to an Azure Virtual Network (VNet) and communicate with each other over their IPv4 addresses.
-* Private IPv6 addresses can be deployed on individual VMs in a resource group but cannot be deployed into a resource group via Scale Sets.
-* Azure VMs cannot connect over IPv6 to other VMs, other Azure services, or on-premises devices. They can only communicate with the Azure load balancer over IPv6. However, they can communicate with these other resources using IPv4.
-* Network Security Group (NSG) protection for IPv4 is supported in dual-stack (IPv4+IPv6) deployments. NSGs do not apply to the IPv6 endpoints.
-* The IPv6 endpoint on the VM is not exposed directly to the internet. It is behind a load balancer. Only the ports specified in the load balancer rules are accessible over IPv6.
+* You can't configure the reverse DNS lookup for your public IPv6 addresses.
+* The VMs with the IPv6 addresses can't be members of an Azure Cloud Service. They can be connected to an Azure Virtual Network (VNet) and communicate with each other over their IPv4 addresses.
+* Private IPv6 addresses can be deployed on individual VMs in a resource group but can't be deployed into a resource group via Scale Sets.
+* Azure VMs can't connect over IPv6 to other VMs, other Azure services, or on-premises devices. They can only communicate with the Azure load balancer over IPv6. However, they can communicate with these other resources using IPv4.
+* Network Security Group (NSG) protection for IPv4 is supported in dual-stack (IPv4+IPv6) deployments. NSGs don't apply to the IPv6 endpoints.
+* The IPv6 endpoint on the VM isn't exposed directly to the internet. It is behind a load balancer. Only the ports specified in the load balancer rules are accessible over IPv6.
* Changing the loadDistributionMethod parameter for IPv6 is **currently not supported**. * IPv6 for Basic Load Balancer is locked to a **Dynamic** SKU. IPv6 for a Standard Load Balancer is locked to a **Static** SKU.
-* NAT64 (translation of IPv6 to IPv4) is not supported.
+* NAT64 (translation of IPv6 to IPv4) isn't supported.
* Attaching a secondary NIC that refers to an IPv6 subnet to a backend pool is **not supported** for Basic Load Balancer. ## Next steps
load-balancer Load Balancer Monitor Metrics Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-monitor-metrics-cli.md
Previously updated : 06/27/2023 Last updated : 06/27/2024
When you use CLI, Load Balancer metrics may use a different metric name for the
Here's a table of common Load Balancer metrics, the CLI metric name, and recommended aggregation values for queries:
-|Metric|CLI metric name|Recommended aggregation|
+|**Metric**|**CLI metric name**|**Recommended aggregation**|
|--|--|--|
|Data path availability |VipAvailability |Average |
|Health probe status |DipAvailability |Average |
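For example, a hedged sketch of querying data path availability with these CLI metric names (the resource ID is a placeholder):

```azurecli
az monitor metrics list \
  --resource /subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Network/loadBalancers/<lb-name> \
  --metric VipAvailability \
  --aggregation Average \
  --interval PT5M \
  --output table
```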
load-balancer Load Balancer Multiple Ip Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-multiple-ip-powershell.md
This article describes how to use Azure Load Balancer with multiple IP addresses
## Steps to load balance on multiple IP configurations Follow the steps below to achieve the scenario outlined in this article:
load-balancer Load Balancer Multiple Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-multiple-ip.md
In this tutorial, you learn how to:
> [!IMPORTANT]
- > [!INCLUDE [Pricing](../../includes/bastion-pricing.md)]
+ > [!INCLUDE [Pricing](~/reusable-content/ce-skilling/azure/includes/bastion-pricing.md)]
>
In this section, you create two virtual machines to host the IIS websites.
| Availability zone | **2** | | Network security group | Select the existing **myNSG** | ## Create secondary network configurations
load-balancer Load Balancer Nat Pool Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-nat-pool-migration.md
Previously updated : 05/01/2023 Last updated : 06/26/2024
load-balancer Load Balancer Outbound Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-outbound-connections.md
Previously updated : 03/06/2023 Last updated : 06/26/2024
A public IP assigned to a VM is a 1:1 relationship (rather than 1: many) and imp
:::image type="content" source="./media/load-balancer-outbound-connections/default-outbound-access.png" alt-text="Diagram of default outbound access.":::
-In Azure, virtual machines created in a virtual network without explicit outbound connectivity defined are assigned a default outbound public IP address. This IP address enables outbound connectivity from the resources to the Internet. This access is referred to as [default outbound access](../virtual-network/ip-services/default-outbound-access.md). This method of access is **not recommended** as it is insecure and the IP addresses are subject to change.
+In Azure, virtual machines created in a virtual network without explicit outbound connectivity defined are assigned a default outbound public IP address. This IP address enables outbound connectivity from the resources to the Internet. This access is referred to as [default outbound access](../virtual-network/ip-services/default-outbound-access.md). This method of access is **not recommended** as it's insecure and the IP addresses are subject to change.
>[!Important] >On September 30, 2025, default outbound access for new deployments will be retired. For more information, see the [official announcement](https://azure.microsoft.com/updates/upgrade-to-standard-sku-public-ip-addresses-in-azure-by-30-september-2025-basic-sku-will-be-retired/). It's recommended to use one of the explicit forms of connectivity as shown in options 1-3 above.
If using SNAT without outbound rules via a public load balancer, SNAT ports are
## <a name="preallocatedports"></a> Default port allocation table
-When load balancing rules are selected to use default port allocation, or outbound rules are configured with "Use the default number of outbound ports", SNAT ports are allocated by default based on the backend pool size. Backends will receive the number of ports defined by the table, per frontend IP, up to a maximum of 1024 ports.
+When load balancing rules are selected to use default port allocation, or outbound rules are configured with "Use the default number of outbound ports", SNAT ports are allocated by default based on the backend pool size. Backends receive the number of ports defined by the table, per frontend IP, up to a maximum of 1024 ports.
-As an example, with 100 VMs in a backend pool and only one frontend IP, each VM will receive 512 ports. If a second frontend IP is added, each VM will receive an additional 512 ports. This means each VM is allocated a total of 1024 ports. As a result, adding a third frontend IP will NOT increase the number of allocated SNAT ports beyond 1024 ports.
+As an example, with 100 VMs in a backend pool and only one frontend IP, each VM receives 512 ports. If a second frontend IP is added, each VM receives an extra 512 ports. This means each VM is allocated a total of 1,024 ports. As a result, adding a third frontend IP will NOT increase the number of allocated SNAT ports beyond 1,024 ports.
-As a rule of thumb, the number of SNAT ports provided when default port allocation is leveraged can be computed as: MIN(# of default SNAT ports provided based on pool size * number of frontend IPs associated with the pool, 1024)
+As a rule of thumb, the number of SNAT ports provided when default port allocation is applied can be computed as: MIN(# of default SNAT ports provided based on pool size * number of frontend IPs associated with the pool, 1024)
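A tiny shell illustration of that rule of thumb, using the 100-VM example above (the 512-port figure comes from the preallocation table):

```bash
default_ports=512   # from the table, for a pool of 51-100 instances
frontend_ips=2
ports_per_vm=$(( default_ports * frontend_ips < 1024 ? default_ports * frontend_ips : 1024 ))
echo "$ports_per_vm"   # prints 1024; a third frontend IP would not raise this further
```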
The following <a name="snatporttable"></a>table shows the SNAT port preallocations for a single frontend IP, depending on the backend pool size:
For more information about connection pooling with Azure App Service, see [Troub
New outbound connections to a destination IP fail when port exhaustion occurs. Connections succeed when a port becomes available. This exhaustion occurs when the 64,000 ports from an IP address are spread thin across many backend instances. For guidance on mitigation of SNAT port exhaustion, see the [troubleshooting guide](./troubleshoot-outbound-connection.md). ### Port reuse
-For TCP connections, the load balancer uses a single SNAT port for every destination IP and port. For connections to the same destination IP, a single SNAT port can be reused as long as the destination port differs. Reuse is not possible when there already exists a connection to the same destination IP and port.
+For TCP connections, the load balancer uses a single SNAT port for every destination IP and port. For connections to the same destination IP, a single SNAT port can be reused as long as the destination port differs. Reuse isn't possible when there already exists a connection to the same destination IP and port.
For UDP connections, the load balancer uses a **port-restricted cone NAT** algorithm, which consumes one SNAT port per destination IP, regardless of the destination port.
load-balancer Load Balancer Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-overview.md
A **[public load balancer](./components.md#frontend-ip-configurations)** can pro
An **[internal (or private) load balancer](./components.md#frontend-ip-configurations)** is used in scenarios where private IPs are needed at the frontend only. Internal load balancers are used to load balance traffic inside a virtual network. A load balancer frontend can be accessed from an on-premises network in a hybrid scenario. *Figure: Balancing multi-tier applications by using both public and internal Load Balancer*
load-balancer Load Balancer Query Metrics Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-query-metrics-rest-api.md
Title: Retrieve metrics with the Azure REST API description: In this article, get started using the Azure REST APIs to collect health and usage metrics for Azure Load Balancer.- Previously updated : 05/08/2023 Last updated : 06/26/2024
load-balancer Load Balancer Standard Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-standard-diagnostics.md
description: Use the available metrics, alerts, and resource health information
Previously updated : 06/27/2023 Last updated : 06/27/2024
load-balancer Load Balancer Standard Virtual Machine Scale Sets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-standard-virtual-machine-scale-sets.md
Previously updated : 05/03/2023 Last updated : 06/26/2024
load-balancer Load Balancer Test Frontend Reachability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-test-frontend-reachability.md
Previously updated : 05/06/2023 Last updated : 06/26/2024
load-balancer Load Balancer Troubleshoot Health Probe Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-troubleshoot-health-probe-status.md
Previously updated : 05/31/2023 Last updated : 06/26/2024
load-balancer Manage Inbound Nat Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/manage-inbound-nat-rules.md
Previously updated : 05/01/2023 Last updated : 06/26/2024
There are two types of inbound NAT rule:
In this article, you learn how to add and remove an inbound NAT rule for both types. You learn how to change the frontend port allocation in a multiple instance inbound NAT rule. You can choose from the Azure portal, PowerShell, or CLI examples. ## Prerequisites
load-balancer Manage Probes How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/manage-probes-how-to.md
Previously updated : 05/05/2023 Last updated : 06/26/2024
Health probes have the following properties:
| Name | Name of the health probe. This is a name you get to define for your health probe |
| Protocol | Protocol of health probe. This is the protocol type you would like the health probe to leverage. Available options are: TCP, HTTP, HTTPS |
| Port | Port of the health probe. The destination port you would like the health probe to use when it connects to the virtual machine to check the virtual machine's health status. You must ensure that the virtual machine is also listening on this port (that is, the port is open). |
-| Interval (seconds) | Interval of health probe. The amount of time (in seconds) between consecutive health check attemps to the virtual machine |
+| Interval (seconds) | Interval of health probe. The amount of time (in seconds) between consecutive health check attempts to the virtual machine |
| Used by | The list of load balancer rules using this specific health probe. You should have at least one rule using the health probe for it to be effective |
| Path | The URI used for requesting health status from the virtual machine instance by the health probe (only applicable for HTTP(S) probes) |
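Mapping those properties onto the CLI, a minimal sketch of an HTTP probe (all names are placeholders):

```azurecli
az network lb probe create \
  --resource-group <resource-group> \
  --lb-name myLoadBalancer \
  --name myHealthProbe \
  --protocol Http \
  --port 80 \
  --path / \
  --interval 15
```

The probe only takes effect once a load-balancing rule references it, which matches the **Used by** property above.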
load-balancer Move Across Regions External Load Balancer Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/move-across-regions-external-load-balancer-portal.md
description: Use an Azure Resource Manager template to move an external load bal
Previously updated : 06/27/2023 Last updated : 06/27/2024
In a literal sense, you can't move an Azure external load balancer from one regi
## Prepare and move The following procedures show how to prepare the external load balancer for the move by using a Resource Manager template and move the external load balancer configuration to the target region by using the Azure portal. You must first export the public IP configuration of the external load balancer. ### Export the public IP template and deploy the public IP from the portal
load-balancer Move Across Regions External Load Balancer Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/move-across-regions-external-load-balancer-powershell.md
description: Use Azure Resource Manager template to move Azure external Load Bal
Previously updated : 06/27/2023 Last updated : 06/27/2024
Azure external load balancers can't be moved from one region to another. You can
The following steps show how to prepare the external load balancer for the move using a Resource Manager template, and move the external load balancer configuration to the target region using Azure PowerShell. As part of this process, the public IP configuration of the external load balancer must be included and must be moved first, before the external load balancer itself. ### Export the public IP template and deploy from Azure PowerShell
load-balancer Move Across Regions Internal Load Balancer Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/move-across-regions-internal-load-balancer-portal.md
description: Use Azure Resource Manager template to move Azure internal Load Bal
Previously updated : 06/27/2023 Last updated : 06/27/2024 # Move Azure internal Load Balancer to another region using the Azure portal
-There are various scenarios in which you'd want to move your existing internal load balancer from one region to another. For example, you may want to create an internal load balancer with the same configuration for testing. You may also want to move an internal load balancer to another region as part of disaster recovery planning.
+There are various scenarios in which you'd want to move your existing internal load balancer from one region to another. For example, you might want to create an internal load balancer with the same configuration for testing. You might also want to move an internal load balancer to another region as part of disaster recovery planning.
-Azure internal load balancers can't be moved from one region to another. You can however, use an Azure Resource Manager template to export the existing configuration and virtual network of an internal load balancer. You can then stage the resource in another region by exporting the load balancer and virtual network to a template, modifying the parameters to match the destination region, and then deploy the templates to the new region. For more information on Resource Manager and templates, see [Quickstart: Create and deploy Azure Resource Manager templates by using the Azure portal](../azure-resource-manager/templates/quickstart-create-templates-use-the-portal.md).
+Azure internal load balancers can't be moved from one region to another. You can, however, use an Azure Resource Manager template to export the existing configuration and virtual network of an internal load balancer. You can then stage the resource in another region by exporting the load balancer and virtual network to a template, modifying the parameters to match the destination region, and then deploy the templates to the new region. For more information on Resource Manager and templates, see [Quickstart: Create and deploy Azure Resource Manager templates by using the Azure portal](../azure-resource-manager/templates/quickstart-create-templates-use-the-portal.md).
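Although the article walks through the portal, the export step it describes can also be sketched from the CLI (the resource group name is a placeholder):

```azurecli
# Export the source resource group's configuration as an ARM template for editing
az group export --name <source-resource-group> > template.json
```

The exported `template.json` is what you then edit (name, location, address space) before deploying it to the target region.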
## Prerequisites - Make sure that the Azure internal load balancer is in the Azure region from which you want to move. -- Azure internal load balancers can't be moved between regions. You have to associate the new load balancer to resources in the target region.
+- Azure internal load balancers can't be moved between regions. You have to associate the new load balancer to resources in the target region.
- To export an internal load balancer configuration and deploy a template to create an internal load balancer in another region, you need the Network Contributor role or higher.
Azure internal load balancers can't be moved from one region to another. You can
- Verify that your Azure subscription allows you to create internal load balancers in the target region that's used. Contact support to enable the required quota. -- Make sure that your subscription has enough resources to support the addition of load balancers for this process. See [Azure subscription and service limits, quotas, and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md#networking-limits)
+- Make sure that your subscription has enough resources to support the addition of load balancers for this process. See [Azure subscription and service limits, quotas, and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md#networking-limits)
## Prepare and move
-The following steps show how to prepare the internal load balancer for the move using a Resource Manager template, and move the internal load balancer configuration to the target region using the Azure portal. As part of this process, the virtual network configuration of the internal load balancer must be included and must be done first before moving the internal load balancer.
+The following steps show how to prepare the internal load balancer for the move using a Resource Manager template, and move the internal load balancer configuration to the target region using the Azure portal. As part of this process, the virtual network configuration of the internal load balancer must be included and must be done first before moving the internal load balancer.
### Export the virtual network template and deploy from the Azure portal
The following steps show how to prepare the internal load balancer for the move
} } ```
-7. Change the source virtual network name value in the editor to a name of your choice for the target VNET. Ensure you enclose the name in quotes.
+7. Change the source virtual network name value in the editor to a name of your choice for the target virtual network. Ensure you enclose the name in quotes.
8. Select **Save** in the editor. 9. Select **TEMPLATE** > **Edit template** to open the **template.json** file in the online editor.
-10. To edit the target region where the VNET will be moved, change the **location** property under resources:
+10. To edit the target region where the virtual network will be moved, change the **location** property under resources:
```json "resources": [
The following steps show how to prepare the internal load balancer for the move
```
-11. To obtain region location codes, see [Azure Locations](https://azure.microsoft.com/global-infrastructure/locations/). The code for a region is the region name with no spaces, **Central US** = **centralus**.
+11. To obtain region location codes, see [Azure Locations](https://azure.microsoft.com/global-infrastructure/locations/). The code for a region is the region name with no spaces, **Central US** = **centralus**.
12. You can also change other parameters in the **template.json** file if you choose, and are optional depending on your requirements:
- * **Address Space** - The address space of the VNET can be altered before saving by modifying the **resources** > **addressSpace** section and changing the **addressPrefixes** property in the **template.json** file:
+ * **Address Space** - The address space of the virtual network can be altered before saving by modifying the **resources** > **addressSpace** section and changing the **addressPrefixes** property in the **template.json** file:
```json "resources": [
The following steps show how to prepare the internal load balancer for the move
] ```
- In the **template.json** file, to change the address prefix, it must be edited in two places, the section listed above and the **type** section listed below. Change the **addressPrefix** property to match the one above:
+ In the **template.json** file, to change the address prefix, it must be edited in two places, the section listed above and the **type** section listed below. Change the **addressPrefix** property to match the one above:
```json "type": "Microsoft.Network/virtualNetworks/subnets",
The following steps show how to prepare the internal load balancer for the move
13. Select **Save** in the online editor.
-14. Select **BASICS** > **Subscription** to choose the subscription where the target VNET will be deployed.
+14. Select **BASICS** > **Subscription** to choose the subscription where the target virtual network will be deployed.
-15. Select **BASICS** > **Resource group** to choose the resource group where the target VNET will be deployed. You can select **Create new** to create a new resource group for the target VNET. Ensure the name isn't the same as the source resource group of the existing VNET.
+15. Select **BASICS** > **Resource group** to choose the resource group where the target virtual network will be deployed. You can select **Create new** to create a new resource group for the target virtual network. Ensure the name isn't the same as the source resource group of the existing virtual network.
-16. Verify **BASICS** > **Location** is set to the target location where you wish for the VNET to be deployed.
+16. Verify **BASICS** > **Location** is set to the target location where you wish for the virtual network to be deployed.
17. Verify under **SETTINGS** that the name matches the name that you entered in the parameters editor above.
The following steps show how to prepare the internal load balancer for the move
1. Select to the [Azure portal](https://portal.azure.com) > **Resource Groups** in another browser tab or window. 2. Locate the target resource group that contains the moved virtual network from the steps above, and select it. 3. Select > **Settings** > **Properties**.
- 4. On the right side of the portal, highlight the **Resource ID** and copy it to the clipboard. Alternatively, you can select the **copy to clipboard** button to the right of the **Resource ID** path.
+ 4. On the right side of the portal, highlight the **Resource ID** and copy it to the clipboard. Alternatively, you can select the **copy to clipboard** button to the right of the **Resource ID** path.
5. Paste the resource ID into the **defaultValue** property into the **Edit Parameters** editor open in the other browser window or tab: ```json
The following steps show how to prepare the internal load balancer for the move
}, ```
-9. To obtain region location codes, see [Azure Locations](https://azure.microsoft.com/global-infrastructure/locations/). The code for a region is the region name with no spaces, **Central US** = **centralus**.
+9. To obtain region location codes, see [Azure Locations](https://azure.microsoft.com/global-infrastructure/locations/). The code for a region is the region name with no spaces, **Central US** = **centralus**.
10. You can also change other parameters in the template if you choose, and are optional depending on your requirements:
The following steps show how to prepare the internal load balancer for the move
``` For more information on the differences between basic and standard sku load balancers, see [Azure Standard Load Balancer overview](./load-balancer-overview.md)
- * **Availability zone** - You can change the zone(s) of the load balancer's frontend by changing the **zone** property. If the zone property isn't specified, the frontend is created as no-zone. You can specify a single zone to create a zonal frontend or all 3 zones for a zone-redundant frontend.
+ * **Availability zone** - You can change the zones of the load balancer's frontend by changing the **zone** property. If the zone property isn't specified, the frontend is created as no-zone. You can specify a single zone to create a zonal frontend or all three zones for a zone-redundant frontend.
```json "frontendIPConfigurations": [
The following steps show how to prepare the internal load balancer for the move
13. Select **BASICS** > **Subscription** to choose the subscription where the target internal load balancer will be deployed.
-15. Select **BASICS** > **Resource group** to choose the resource group where the target load balancer will be deployed. You can select **Create new** to create a new resource group for the target internal load balancer or choose the existing resource group that was created above for the virtual network. Ensure the name isn't the same as the source resource group of the existing source internal load balancer.
+15. Select **BASICS** > **Resource group** to choose the resource group where the target load balancer will be deployed. You can select **Create new** to create a new resource group for the target internal load balancer or choose the existing resource group that was created previously for the virtual network. Ensure the name isn't the same as the source resource group of the existing source internal load balancer.
16. Verify **BASICS** > **Location** is set to the target location where you wish for the internal load balancer to be deployed.
-17. Verify under **SETTINGS** that the name matches the name that you entered in the parameters editor above. Verify the resource IDs are populated for any virtual networks in the configuration.
+17. Verify under **SETTINGS** that the name matches the name that you entered in the parameters editor previously. Verify the resource IDs are populated for any virtual networks in the configuration.
18. Check the box under **TERMS AND CONDITIONS**.
The following steps show how to prepare the internal load balancer for the move
## Discard
-If you wish to discard the target virtual network and internal load balancer, delete the resource group that contains the target virtual network and internal load balancer. To do so, select the resource group from your dashboard in the portal and select **Delete** at the top of the overview page.
+If you wish to discard the target virtual network and internal load balancer, delete the resource group that contains the target virtual network and internal load balancer. To do so, select the resource group from your dashboard in the portal and select **Delete** at the top of the overview page.
## Clean up
To commit the changes and complete the move of the virtual network and internal
## Next steps
-In this tutorial, you moved an Azure internal load balancer from one region to another and cleaned up the source resources. To learn more about moving resources between regions and disaster recovery in Azure, refer to:
+In this tutorial, you moved an Azure internal load balancer from one region to another and cleaned up the source resources. To learn more about moving resources between regions and disaster recovery in Azure, refer to:
- [Move resources to a new resource group or subscription](../azure-resource-manager/management/move-resource-group-and-subscription.md)
load-balancer Move Across Regions Internal Load Balancer Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/move-across-regions-internal-load-balancer-powershell.md
description: Use Azure Resource Manager template to move Azure internal Load Bal
Previously updated : 06/27/2023 Last updated : 06/27/2024
Azure internal load balancers can't be moved from one region to another. You can
The following steps show how to prepare the internal load balancer for the move using a Resource Manager template, and move the internal load balancer configuration to the target region using Azure PowerShell. As part of this process, the virtual network configuration of the internal load balancer must be included and must be done first before moving the internal load balancer. ### Export the virtual network template and deploy from Azure PowerShell
load-balancer Quickstart Load Balancer Standard Internal Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-internal-bicep.md
In this quickstart, you learn to use a Bicep file to create an internal Azure l
:::image type="content" source="media/quickstart-load-balancer-standard-internal-portal/internal-load-balancer-resources.png" alt-text="Diagram of resources deployed for internal load balancer." lightbox="media/quickstart-load-balancer-standard-internal-portal/internal-load-balancer-resources.png"::: ## Prerequisites
load-balancer Quickstart Load Balancer Standard Internal Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-internal-cli.md
Get started with Azure Load Balancer by using the Azure CLI to create an interna
:::image type="content" source="media/quickstart-load-balancer-standard-internal-portal/internal-load-balancer-resources.png" alt-text="Diagram of resources deployed for internal load balancer." lightbox="media/quickstart-load-balancer-standard-internal-portal/internal-load-balancer-resources.png"::: [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
Create a virtual network by using [az network vnet create](/cli/azure/network/vn
In this example, you create an Azure Bastion host. The Azure Bastion host is used later in this article to securely manage the virtual machines and test the load balancer deployment. > [!IMPORTANT]
- > [!INCLUDE [Pricing](../../includes/bastion-pricing.md)]
+ > [!INCLUDE [Pricing](~/reusable-content/ce-skilling/azure/includes/bastion-pricing.md)]
### Create a bastion public IP address
Create the virtual machines with [az vm create](/cli/azure/vm#az-vm-create).
It can take a few minutes for the VMs to deploy. ## Add virtual machines to the backend pool
load-balancer Quickstart Load Balancer Standard Internal Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-internal-portal.md
During the creation of the load balancer, you configure:
[!INCLUDE [load-balancer-create-2-virtual-machines](../../includes/load-balancer-create-2-virtual-machines.md)] ## Create test virtual machine
load-balancer Quickstart Load Balancer Standard Internal Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-internal-powershell.md
$gwpublicip = New-AzPublicIpAddress @gwpublicip
* Use [New-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/new-azvirtualnetworksubnetconfig) to associate the NAT gateway to the subnet of the virtual network > [!IMPORTANT]
-> [!INCLUDE [Pricing](../../includes/bastion-pricing.md)]
+> [!INCLUDE [Pricing](~/reusable-content/ce-skilling/azure/includes/bastion-pricing.md)]
```azurepowershell-interactive
Id Name PSJobTypeName State HasMoreData Location
3 Long Running O… AzureLongRunni… Completed True localhost New-AzVM ``` ## Install IIS
load-balancer Quickstart Load Balancer Standard Internal Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-internal-template.md
In this quickstart, you learn to use an Azure Resource Manager template (ARM tem
Using an ARM template takes fewer steps compared to other deployment methods. If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template opens in the Azure portal.
load-balancer Quickstart Load Balancer Standard Public Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-public-bicep.md
In this quickstart, you learn to use a Bicep file to create a public Azure load
Using a Bicep file takes fewer steps compared to other deployment methods. ## Prerequisites
Multiple Azure resources have been defined in the Bicep file:
- [**Microsoft.Network/natGateways**](/azure/templates/microsoft.network/natgateways): for the NAT gateway. > [!IMPORTANT]
-> [!INCLUDE [Pricing](../../includes/bastion-pricing.md)]
+> [!INCLUDE [Pricing](~/reusable-content/ce-skilling/azure/includes/bastion-pricing.md)]
To find more Bicep files or ARM templates that are related to Azure Load Balancer, see [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/?resourceType=Microsoft.Network&pageNumber=1&sort=Popular).
load-balancer Quickstart Load Balancer Standard Public Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-public-cli.md
Get started with Azure Load Balancer by using the Azure CLI to create a public l
:::image type="content" source="media/quickstart-load-balancer-standard-public-portal/public-load-balancer-resources.png" alt-text="Diagram of resources deployed for a standard public load balancer." lightbox="media/quickstart-load-balancer-standard-public-portal/public-load-balancer-resources.png"::: [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
Create a network security group rule using [az network nsg rule create](/cli/azu
In this section, you create the resources for Azure Bastion. Azure Bastion is used to securely manage the virtual machines in the backend pool of the load balancer. > [!IMPORTANT]
-> [!INCLUDE [Pricing](../../includes/bastion-pricing.md)]
+> [!INCLUDE [Pricing](~/reusable-content/ce-skilling/azure/includes/bastion-pricing.md)]
### Create a public IP address
Create the virtual machines with [az vm create](/cli/azure/vm#az-vm-create):
It may take a few minutes for the VMs to deploy. You can continue to the next steps while the VMs are being created. ### Add virtual machines to load balancer backend pool
load-balancer Quickstart Load Balancer Standard Public Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-public-portal.md
During the creation of the load balancer, you configure:
[!INCLUDE [load-balancer-create-2-virtual-machines](../../includes/load-balancer-create-2-virtual-machines.md)] ## Install IIS
load-balancer Quickstart Load Balancer Standard Public Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-public-powershell.md
Use a NAT gateway to provide outbound internet access to resources in the backen
* Use [New-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/new-azvirtualnetworksubnetconfig) to associate the NAT gateway to the subnet of the virtual network > [!IMPORTANT]
-> [!INCLUDE [Pricing](../../includes/bastion-pricing.md)]
+> [!INCLUDE [Pricing](~/reusable-content/ce-skilling/azure/includes/bastion-pricing.md)]
```azurepowershell-interactive ## Create public IP address for NAT gateway ##
Id Name PSJobTypeName State HasMoreData Location
Ensure the **State** of the VM creation is **Completed** before moving on to the next steps. ## Install IIS
load-balancer Quickstart Load Balancer Standard Public Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-public-template.md
This quickstart shows you how to deploy a standard load balancer to load balance
Using an ARM template takes fewer steps compared to other deployment methods. If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template opens in the Azure portal.
Multiple Azure resources have been defined in the template:
- [**Microsoft.Network/natGateways**](/azure/templates/microsoft.network/natgateways): for the NAT gateway. > [!IMPORTANT]
-> [!INCLUDE [Pricing](../../includes/bastion-pricing.md)]
+> [!INCLUDE [Pricing](~/reusable-content/ce-skilling/azure/includes/bastion-pricing.md)]
To find more templates that are related to Azure Load Balancer, see [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/?resourceType=Microsoft.Network&pageNumber=1&sort=Popular).
load-balancer Skus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/skus.md
Previously updated : 07/10/2023 Last updated : 06/27/2024
>[!Important] >On September 30, 2025, Basic Load Balancer will be retired. For more information, see the [official announcement](https://azure.microsoft.com/updates/azure-basic-load-balancer-will-be-retired-on-30-september-2025-upgrade-to-standard-load-balancer/). If you are currently using Basic Load Balancer, make sure to upgrade to Standard Load Balancer prior to the retirement date. For guidance on upgrading, visit [Upgrading from Basic Load Balancer - Guidance](load-balancer-basic-upgrade-guidance.md).
-Azure Load Balancer has three SKUs.
+Azure Load Balancer has three stock-keeping units (SKUs).
## <a name="skus"></a> SKU comparison
-Azure Load Balancer has 3 SKUs - Basic, Standard, and Gateway. Each SKU is catered towards a specific scenario and has differences in scale, features, and pricing.
+Azure Load Balancer has three SKUs: Basic, Standard, and Gateway. Each SKU is tailored to a specific scenario and differs in scale, features, and pricing.
To compare and understand the differences between the Basic and Standard SKUs, see the following table.
load-balancer Troubleshoot Load Balancer Imds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/troubleshoot-load-balancer-imds.md
Previously updated : 05/22/2023 Last updated : 06/26/2024
This article describes common deployment errors and how to resolve those errors
## Error codes
-| Error code | Error message | Details and mitigation |
+| **Error code** | **Error message** | **Details and mitigation** |
| | - | -- |
-| 400 | Missing required parameter "\<ParameterName>". Please fix the request and retry. | The error code indicates a missing parameter. </br> For more information on adding the missing parameter, see [How to retrieve load balancer metadata using the Azure Instance Metadata Service (IMDS)](howto-load-balancer-imds.md#sample-request-and-response).
-| 400 | Parameter value is not allowed, or parameter value "\<ParameterValue>" is not allowed for parameter "ParameterName". Please fix the request and retry. | The error code indicates that the request format is not configured properly. </br> Learn [How to retrieve load balancer metadata using the Azure Instance Metadata Service (IMDS)](howto-load-balancer-imds.md#sample-request-and-response) to fix the request body and issue a retry. |
-| 400 | Unexpected request. Please check the query parameters and retry. | The error code indicates that the request format is not configured properly. </br> Learn [How to retrieve load balancer metadata using the Azure Instance Metadata Service (IMDS)](howto-load-balancer-imds.md#sample-request-and-response) to fix the request body and issue a retry. |
-| 404 | No load balancer metadata is found. Please check if your VM is using any nonbasic SKU load balancer and retry later. | The error code indicates that your virtual machine isn't associated with a load balancer or the load balancer is basic SKU instead of standard. </br> For more information, see [Quickstart: Create a public load balancer to load balance VMs using the Azure portal](quickstart-load-balancer-standard-public-portal.md?tabs=option-1-create-load-balancer-standard) to deploy a standard load balancer.|
-| 404 | API is not found: Path = "\<UrlPath>", Method = "\<Method>" | The error code indicates a misconfiguration of the path. </br> Learn [How to retrieve load balancer metadata using the Azure Instance Metadata Service (IMDS)](howto-load-balancer-imds.md#sample-request-and-response) to fix the request body and issue a retry. |
-| 405 | Http method is not allowed: Path = "\<UrlPath>", Method = "\<Method>" | The error code indicates an unsupported HTTP verb. </br> For more information, see [Azure Instance Metadata Service (IMDS)](../virtual-machines/windows/instance-metadata-service.md?tabs=windows#http-verbs) for supported verbs. |
-| 429 | Too many requests | The error code indicates a rate limit. </br> For more information on rate limiting, see [Azure Instance Metadata Service (IMDS)](../virtual-machines/windows/instance-metadata-service.md?tabs=windows#rate-limiting).|
-| 400 | Request body is larger than MaxBodyLength: … | The error code indicates a request larger than the MaxBodyLength. </br> For more information on body length, see [How to retrieve load balancer metadata using the Azure Instance Metadata Service (IMDS)](howto-load-balancer-imds.md#sample-request-and-response).|
-| 400 | Parameter key length is larger than MaxParameterKeyLength: … | The error code indicates a parameter key length larger than the MaxParameterKeyLength. </br> For more information on body length, see [How to retrieve load balancer metadata using the Azure Instance Metadata Service (IMDS)](howto-load-balancer-imds.md#sample-request-and-response). |
-| 400 | Parameter value length is larger than MaxParameterValueLength: … | The error code indicates a parameter key length larger than the MaxParameterValueLength. </br> For more information on value length, see [How to retrieve load balancer metadata using the Azure Instance Metadata Service (IMDS)](howto-load-balancer-imds.md#sample-request-and-response).|
-| 400 | Parameter header value length is larger than MaxHeaderValueLength: … | The error code indicates a parameter header value length larger than the MaxHeaderValueLength. </br> For more information on value length, see [How to retrieve load balancer metadata using the Azure Instance Metadata Service (IMDS)](howto-load-balancer-imds.md#sample-request-and-response).|
-| 404 | Load Balancer metadata API is not available right now. Please retry later | The error code indicates the API could be provisioning. Try your request later. |
-| 404 | /metadata/loadbalancer is not currently available | The error code indicates the API is in the progress of enablement. Try your request later. |
-| 503 | Internal service unavailable. Please retry later | The error code indicates the API is temporarily unavailable. Try your request later. |
+| 400 | Missing required parameter "\<ParameterName>". Fix the request and retry. | The error code indicates a missing parameter.</br> For more information on adding the missing parameter, see [How to retrieve load balancer metadata using the Azure Instance Metadata Service (IMDS)](howto-load-balancer-imds.md#sample-request-and-response). |
+| 400 | Parameter value isn't allowed, or parameter value "\<ParameterValue>" isn't allowed for parameter "ParameterName". Fix the request and retry. | The error code indicates that the request format isn't configured properly.</br> Learn [How to retrieve load balancer metadata using the Azure Instance Metadata Service (IMDS)](howto-load-balancer-imds.md#sample-request-and-response) to fix the request body and issue a retry. |
+| 400 | Unexpected request. Check the query parameters and retry. | The error code indicates that the request format isn't configured properly.</br> Learn [How to retrieve load balancer metadata using the Azure Instance Metadata Service (IMDS)](howto-load-balancer-imds.md#sample-request-and-response) to fix the request body and issue a retry. |
+| 404 | No load balancer metadata is found. Check if your virtual machine is using any nonbasic SKU load balancer and retry later. | The error code indicates that your virtual machine isn't associated with a load balancer or the load balancer is basic SKU instead of standard.</br> For more information, see [Quickstart: Create a public load balancer to load balance VMs using the Azure portal](quickstart-load-balancer-standard-public-portal.md?tabs=option-1-create-load-balancer-standard) to deploy a standard load balancer.|
+| 404 | API isn't found: Path = "\<UrlPath>", Method = "\<Method>" | The error code indicates a misconfiguration of the path.</br> Learn [How to retrieve load balancer metadata using the Azure Instance Metadata Service (IMDS)](howto-load-balancer-imds.md#sample-request-and-response) to fix the request body and issue a retry. |
+| 405 | Http method isn't allowed: Path = "\<UrlPath>", Method = "\<Method>" | The error code indicates an unsupported HTTP verb.</br> For more information, see [Azure Instance Metadata Service (IMDS)](../virtual-machines/windows/instance-metadata-service.md?tabs=windows#http-verbs) for supported verbs. |
+| 429 | Too many requests | The error code indicates a rate limit.</br> For more information on rate limiting, see [Azure Instance Metadata Service (IMDS)](../virtual-machines/windows/instance-metadata-service.md?tabs=windows#rate-limiting).|
+| 400 | Request body is larger than MaxBodyLength: … | The error code indicates a request larger than the MaxBodyLength.</br> For more information on body length, see [How to retrieve load balancer metadata using the Azure Instance Metadata Service (IMDS)](howto-load-balancer-imds.md#sample-request-and-response).|
+| 400 | Parameter key length is larger than MaxParameterKeyLength: … | The error code indicates a parameter key length larger than the MaxParameterKeyLength.</br> For more information on key length, see [How to retrieve load balancer metadata using the Azure Instance Metadata Service (IMDS)](howto-load-balancer-imds.md#sample-request-and-response). |
+| 400 | Parameter value length is larger than MaxParameterValueLength: … | The error code indicates a parameter value length larger than the MaxParameterValueLength.</br> For more information on value length, see [How to retrieve load balancer metadata using the Azure Instance Metadata Service (IMDS)](howto-load-balancer-imds.md#sample-request-and-response).|
+| 400 | Parameter header value length is larger than MaxHeaderValueLength: … | The error code indicates a parameter header value length larger than the MaxHeaderValueLength.</br> For more information on value length, see [How to retrieve load balancer metadata using the Azure Instance Metadata Service (IMDS)](howto-load-balancer-imds.md#sample-request-and-response).|
+| 404 | Load Balancer metadata API isn't available right now. Retry later | The error code indicates the API could be provisioning. Try your request later. |
+| 404 | /metadata/loadbalancer isn't currently available | The error code indicates the API is in the progress of enablement. Try your request later. |
+| 503 | Internal service unavailable. Retry later | The error code indicates the API is temporarily unavailable. Try your request later. |
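Many of these errors are easiest to reproduce and confirm by calling the metadata endpoint directly from a virtual machine in the load balancer's backend pool. The following C# sketch shows such a probe; the `api-version` value is an assumption here, so use the version shown in the sample request in the how-to article linked in the table.

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

class ImdsProbe
{
    static async Task Main()
    {
        using var client = new HttpClient();

        // IMDS requires this header; requests without it fail with a 400 error.
        client.DefaultRequestHeaders.Add("Metadata", "true");

        // Omitting the api-version query parameter reproduces the
        // "Missing required parameter" 400 error from the table above.
        HttpResponseMessage response = await client.GetAsync(
            "http://169.254.169.254/metadata/loadbalancer?api-version=2020-10-01");

        Console.WriteLine($"{(int)response.StatusCode} {response.ReasonPhrase}");
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}
```

A 404 response from a VM behind a Basic SKU load balancer, and a 400 after removing the `Metadata` header, match the corresponding rows in the table.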
## Next steps
load-balancer Tutorial Cross Region Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/tutorial-cross-region-portal.md
Previously updated : 01/22/2024 Last updated : 06/27/2024 #Customer intent: As an administrator, I want to deploy a cross-region load balancer for global high availability of my application or service.
In this section, you create a
21. Select **Create** in the **Review + create** tab. > [!NOTE]
- > Cross region load-balancer can only be deployed in the following home regions: **East US 2, East US, East Europe, Southeast Asia, Central US, North Europe, East Asia**. For more information, see **https://aka.ms/homeregionforglb**.
+ > Cross-region load balancer deployment is limited to specific home Azure regions. For the current list, see [Home regions in Azure](cross-region-overview.md#home-regions-in-azure) for cross-region load balancer.
## Test the load balancer
load-balancer Tutorial Deploy Cross Region Load Balancer Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/tutorial-deploy-cross-region-load-balancer-template.md
A cross-region load balancer ensures a service is available globally across mult
Using an ARM template takes fewer steps compared to other deployment methods. If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template opens in the Azure portal.
Multiple Azure resources have been defined in the template:
> [!IMPORTANT]
-> [!INCLUDE [Pricing](../../includes/bastion-pricing.md)]
+> [!INCLUDE [Pricing](~/reusable-content/ce-skilling/azure/includes/bastion-pricing.md)]
>
load-balancer Tutorial Gateway Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/tutorial-gateway-cli.md
It can take a few minutes for the Azure Bastion host to deploy.
> [!IMPORTANT]
-> [!INCLUDE [Pricing](../../includes/bastion-pricing.md)]
+> [!INCLUDE [Pricing](~/reusable-content/ce-skilling/azure/includes/bastion-pricing.md)]
>
load-balancer Tutorial Gateway Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/tutorial-gateway-powershell.md
A virtual network is needed for the resources that are in the backend pool of th
> [!IMPORTANT]
-> [!INCLUDE [Pricing](../../includes/bastion-pricing.md)]
+> [!INCLUDE [Pricing](~/reusable-content/ce-skilling/azure/includes/bastion-pricing.md)]
>
load-balancer Tutorial Load Balancer Ip Backend Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/tutorial-load-balancer-ip-backend-portal.md
In this section, you'll create a virtual network for the load balancer, NAT gate
1. Select **Create**. > [!IMPORTANT]-
-> [!INCLUDE [Pricing](../../includes/bastion-pricing.md)]
-
+>
+> [!INCLUDE [Pricing](~/reusable-content/ce-skilling/azure/includes/bastion-pricing.md)]
> ## Create NAT gateway
load-balancer Tutorial Load Balancer Standard Public Zonal Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/tutorial-load-balancer-standard-public-zonal-portal.md
Sign in to the [Azure portal](https://portal.azure.com).
[!INCLUDE [load-balancer-create-virtual-machine-zonal](../../includes/load-balancer-create-virtual-machine-zonal.md)] [!INCLUDE [load-balancer-install-iis](../../includes/load-balancer-install-iis.md)]
load-balancer Tutorial Protect Load Balancer Ddos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/tutorial-protect-load-balancer-ddos.md
In this section, you'll create a virtual network, subnet, Azure Bastion host, an
> [!IMPORTANT]
-> [!INCLUDE [Pricing](../../includes/bastion-pricing.md)]
+> [!INCLUDE [Pricing](~/reusable-content/ce-skilling/azure/includes/bastion-pricing.md)]
>
These VMs are added to the backend pool of the load balancer that was created ea
| Availability zone | **Zone 2** | | Network security group | Select the existing **myNSG** | ## Install IIS
load-balancer Virtual Network Ipv4 Ipv6 Dual Stack Standard Load Balancer Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/virtual-network-ipv4-ipv6-dual-stack-standard-load-balancer-cli.md
This article shows you how to deploy a dual stack (IPv4 + IPv6) application using Standard Load Balancer in Azure that includes a dual stack virtual network with a dual stack subnet, a Standard Load Balancer with dual (IPv4 + IPv6) frontend configurations, VMs with NICs that have a dual IP configuration, dual network security group rules, and dual public IPs. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
load-balancer Virtual Network Ipv4 Ipv6 Dual Stack Standard Load Balancer Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/virtual-network-ipv4-ipv6-dual-stack-standard-load-balancer-powershell.md
This article shows you how to deploy a dual stack (IPv4 + IPv6) application using Standard Load Balancer in Azure that includes a dual stack virtual network and subnet, a Standard Load Balancer with dual (IPv4 + IPv6) frontend configurations, VMs with NICs that have a dual IP configuration, network security group, and public IPs. If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 6.9.0 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
logic-apps Add Run Csharp Scripts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/add-run-csharp-scripts.md
Title: Add and run C# scripts in Standard workflows
description: Write and run C# scripts inline from Standard workflows to perform custom integration tasks using Inline Code operations in Azure Logic Apps. ms.suite: integration-+ Previously updated : 06/10/2024 Last updated : 06/26/2024 # Customer intent: As a logic app workflow developer, I want to write and run my own C# scripts so that I can perform custom integration tasks in Standard workflows for Azure Logic Apps.
Last updated 06/10/2024
> This capability is in preview and is subject to the > [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-To perform custom integration tasks inline with your Standard workflow in Azure Logic Apps, you can directly add and run simple C# scripts from your workflow in the Azure portal. For this task, use the **Inline Code** action named **Execute CSharp Script Code**. This action returns the result from the script so you can use that output in your workflow's subsequent actions.
+To perform custom integration tasks inline with your Standard workflow in Azure Logic Apps, you can directly add and run C# scripts from within your workflow. For this task, use the **Inline Code** action named **Execute CSharp Script Code**. This action returns the results from your script so that you can use this output in your workflow's subsequent actions.
This capability provides the following benefits: -- Write your own scripts to solve more complex integration problems without having to use Azure Functions.
+- Write your own scripts within the workflow designer so that you can solve more complex integration challenges without having to use Azure Functions. No other service plans are necessary.
This benefit streamlines workflow development plus reduces the complexity and cost with managing more services. -- Deploy scripts alongside your workflows. No other service plans are necessary.
+- Generate a dedicated code file, which provides a personalized scripting space within your workflow.
+
+- Deploy scripts alongside your workflows.
This guide shows how to add the action in your workflow and add the C# script code that you want to run.
The following list describes some example scenarios where you can use a script h
- The Azure portal saves your script as a C# script file (.csx) in the same folder as your **workflow.json** file, which stores the JSON definition for your workflow, and deploys the file to your logic app resource along with the workflow definition. Azure Logic Apps compiles this file to make the script ready for execution.
- The .csx file format lets you write less "boilerplate" and focus just on writing a C# function. You can rename the .csx file for easier management during deployment. However, each time you rename the script, the new version overwrites the previous version.
+ The **.csx** file format lets you write less "boilerplate" and focus just on writing a C# function; a minimal example appears after this list. You can rename the .csx file for easier management during deployment. However, each time you rename the script, the new version overwrites the previous version.
- The script is local to the workflow. To use the same script in other workflows, [view the script file in the **KuduPlus** console](#view-script-file), and then copy the script to reuse in other workflows.
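For a sense of how little boilerplate a script needs, the following sketch is a complete .csx file; the **`Message`** property and the returned text are illustrative values, not required names.

```csharp
// A complete .csx script: no namespace or wrapper class is needed around Run.
using Microsoft.Azure.Workflows.Scripting;
using Microsoft.Extensions.Logging;

public static Results Run(WorkflowContext context, ILogger log)
{
    // Appears in the Application Insights traces table, if enabled.
    log.LogInformation("Script started.");

    return new Results { Message = "Hello from a C# script" };
}

public class Results
{
    public string Message { get; set; }
}
```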
The following list describes some example scenarios where you can use a script h
| Name | Limit | Notes | ||-|-| | Script run duration | 10 minutes | If you have scenarios that need longer durations, use the product feedback option to provide more information about your needs. |
-| Output size | 100 MB | Output size depends on the output size limit for actions, which is generally 100 MB.
+| Output size | 100 MB | Output size depends on the output size limit for actions, which is generally 100 MB. |
## Add the Execute CSharp Script Code action 1. In the [Azure portal](https://portal.azure.com), open your Standard logic app resource and workflow in the designer.
-1. In the designer, [follow these general steps to add the **Inline Code Operations** action named **Execute CSharp Script Code action** to your workflow](create-workflow-with-trigger-or-action.md?tabs=standard#add-action).
+1. In the designer, [follow these general steps to add the **Inline Code Operations** action named **Execute CSharp Script Code** to your workflow](create-workflow-with-trigger-or-action.md?tabs=standard#add-action).
1. After the action information pane opens, on the **Parameters** tab, in the **Code File** box, update the prepopulated sample code with your own script code.
The following list describes some example scenarios where you can use a script h
var name = triggerOutputs?["body"]?["name"]?.ToString(); /// To get the outputs from a preceding action, you can uncomment and repurpose the following code.
- //var actionOutputs = (await context.GetActionResults("<action-name>").ConfigureAwait(false)).Outputs;
+ // var actionOutputs = (await context.GetActionResults("<action-name>").ConfigureAwait(false)).Outputs;
/// The following logs appear in the Application Insights traces table.
- //log.LogInformation("Outputting results.");
-
- /// var name = null;
+ // log.LogInformation("Outputting results.");
+ // var name = null;
return new Results {
The following list describes some example scenarios where you can use a script h
For more information, see ["#r" - Reference external assemblies](/azure/azure-functions/functions-reference-csharp?tabs=functionsv2%2Cfixed-delay%2Cazure-cli#referencing-external-assemblies).
-1. When you're done, save your workflow.
-
-<a name="view-script-file"></a>
-
-## View the script file
-
-1. In the [Azure portal](https://portal.azure.com), open your Standard logic app resource that has the workflow you want.
-
-1. On the logic app resource menu, under **Development Tools**, select **Advanced Tools**.
-
-1. On the **Advanced Tools** page, select **Go**, which opens the **KuduPlus** console.
-
-1. Open the **Debug console** menu, and select **CMD**.
-
-1. Go to your logic app's root location: **site/wwwroot**
+1. When you finish, save your workflow.
-1. Go to your workflow's folder, which contains the .csx file, along this path: **site/wwwroot/{workflow-name}**
-
-1. Next to the file name, select **Edit** to open and view the file.
+After you run your workflow, you can review the workflow output in Application Insights, if enabled. For more information, see [View logs in Application Insights](#view-logs-in-application-insights).
<a name="import-namespaces"></a>
using Microsoft.Extensions.Primitives;
using Microsoft.Extensions.Logging; using Microsoft.Azure.Workflows.Scripting; using Newtonsoft.Json.Linq;
-public static async Task<Results> Run(WorkflowContext context, ILogger log)
+
+public static async Task<Results> Run(WorkflowContext context)
+{
+ <...>
+}
+
+public class Results
+{
+ <...>
+}
``` The following list includes assemblies automatically added by the Azure Functions hosting environment:
System.Net.Http.Formatting
Newtonsoft.Json ```
+<a name="log-output-stream"></a>
+
+## Log output to a stream
+
+In your **`Run`** method, include a parameter with **`ILogger`** type and **`log`** as the name, for example:
+
+```csharp
+public static void Run(WorkflowContext context, ILogger log)
+{
+ log.LogInformation($"C# script successfully executed.");
+}
+```
+
+<a name="log-output-application-insights"></a>
+
+## Log output to Application Insights
+
+To create custom metrics in Application Insights, use the **`LogMetric`** extension method on **`ILogger`**.
+
+The following example shows a sample method call:
+
+`logger.LogMetric("TestMetric", 1234);`
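For context, the following minimal sketch places this call inside a complete **`Run`** method; the metric name and value are placeholders.

```csharp
public static void Run(WorkflowContext context, ILogger logger)
{
    // LogMetric emits a custom metric that shows up in Application Insights.
    logger.LogMetric("TestMetric", 1234);
}
```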
+ <a name="access-trigger-action-outputs"></a> ## Access workflow trigger and action outputs in your script
To access data from your workflow, use the following methods available for the *
- **`GetTriggerResults`** method
- To access trigger outputs, use this method to return an object that represents the trigger and its outputs, which are available through the **`Outputs`** property. This object has **JObject** type, and you can use square brackets (**[]**) indexer to access various properties in the trigger outputs.
+ To access trigger outputs, use this method to return an object that represents the trigger and its outputs, which are available through the **`Outputs`** property. This object has **JObject** type, and you can use the square brackets (**[]**) as an indexer to access various properties in the trigger outputs.
- For example, the following sample code gets the data from the **`body`** property in the trigger outputs:
+ The following example gets the data from the **`body`** property in the trigger outputs:
```csharp
public static async Task<Results> Run(WorkflowContext context, ILogger log)
{
    var triggerOutputs = (await context.GetTriggerResults().ConfigureAwait(false)).Outputs;
    var body = triggerOutputs["body"];
+
+    return new Results();
+
+}
+
+public class Results
+{
+    <...>
}
```

- **`GetActionResults`** method
- To access action outputs, use this method to return an object that represents the action and its outputs, which are available through the **`Outputs`** property. This method accepts an action name as a parameter, for example:
+ To access action outputs, use this method to return an object that represents the action and its outputs, which are available through the **`Outputs`** property. This method accepts an action name as a parameter. The following example gets the data from the **`body`** property in the outputs from an action named *action-name*:
```csharp
public static async Task<Results> Run(WorkflowContext context, ILogger log)
{
-    var actionOutputs = (await context.GetActionResults("actionName").ConfigureAwait(false)).Outputs;
+    var actionOutputs = (await context.GetActionResults("action-name").ConfigureAwait(false)).Outputs;
    var body = actionOutputs["body"];
+
+    return new Results();
+
+}
+
+public class Results
+{
+    <...>
}
```
public static string GetEnvironmentVariable(string name)
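The following sketch shows one way such a helper might be defined and called; the assumption that it wraps **`System.Environment.GetEnvironmentVariable`**, and the **`WEBSITE_SITE_NAME`** setting name, are illustrative here.

```csharp
public static void Run(WorkflowContext context, ILogger log)
{
    // Read an app setting from the logic app's configuration.
    log.LogInformation(GetEnvironmentVariable("WEBSITE_SITE_NAME"));
}

// Assumed shape: wraps System.Environment.GetEnvironmentVariable.
public static string GetEnvironmentVariable(string name)
{
    return name + ": " +
        Environment.GetEnvironmentVariable(name, EnvironmentVariableTarget.Process);
}
```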
## Return data to your workflow
-For this task, implement your **`Run`** method with a return type. If you want an asynchronous version, implement the **`Run`** method as a **`Task<>`**, and set the return value to the script action's outputs body, which any subsequent workflow actions can then reference.
+For this task, implement your **`Run`** method with a return type and a **`return`** statement. If you want an asynchronous version, implement the **`Run`** method with a **`Task<return-type>`** return type and the **`async`** keyword. The return value is set to the script action's outputs **`body`** property, which any subsequent workflow actions can then reference.
+
+The following example shows a **`Run`** method with a **`Task<Results>`** return type, the **`async`** keyword, and a **`return`** statement:
```csharp
-public static void Run(WorkflowContext context, ILogger log)
+public static async Task<Results> Run(WorkflowContext context, ILogger log)
{ return new Results {
public class Results
} ``` --or-
+<a name="view-script-file"></a>
-```csharp
-public static async Task<Results> Run(WorkflowContext context, ILogger log)
-{
- return new Results
- {
- Message = !string.IsNullOrEmpty(name) ? $"Returning results with status message."
- };
-}
+## View the script file
-public class Results
-{
- public string Message {get; set;}
-}
-```
+1. In the [Azure portal](https://portal.azure.com), open your Standard logic app resource that has the workflow you want.
-<a name="log-output-stream"></a>
+1. On the logic app resource menu, under **Development Tools**, select **Advanced Tools**.
-## Log output to a stream
+1. On the **Advanced Tools** page, select **Go**, which opens the **KuduPlus** console.
-In your **`Run`** method, include a parameter with **`ILogger`** type and **`log`** as the name, for example:
+1. Open the **Debug console** menu, and select **CMD**.
-```csharp
-public static void Run(WorkflowContext context, ILogger log)
-{
- log.LogInformation($"C# script successfully executed.");
-}
-```
+1. Go to your logic app's root location: **site/wwwroot**
-<a name="log-output-application-insights"></a>
+1. Go to your workflow's folder, which contains the .csx file, along this path: **site/wwwroot/{workflow-name}**
-## Log output to Application Insights
+1. Next to the file name, select **Edit** to open and view the file.
-To create custom metrics in Application Insights, use the **`LogMetric`** extension method on **`ILogger`**.
+## View logs in Application Insights
-The following example shows a sample method call:
+1. In the [Azure portal](https://portal.azure.com), on the logic app resource menu, under **Settings**, select **Application Insights**, and then select your logic app.
-`logger.LogMetric("TestMetric", 1234);`
+1. On the **Application Insights** menu, under **Monitoring**, select **Logs**.
+
+1. Create a query to find any traces or errors from your workflow execution, for example:
+
+ ```text
+ union traces, exceptions
+ | project timestamp, message
+ ```
## Compilation errors
logic-apps Call From Power Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/call-from-power-apps.md
Last updated 01/10/2024
# Call logic app workflows from Power Apps To call your logic app workflow from a Power Apps flow, you can export your logic app resource and workflow as a custom connector. You can then call your workflow from a flow in a Power Apps environment.
logic-apps Concepts Schedule Automated Recurring Tasks Workflows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/concepts-schedule-automated-recurring-tasks-workflows.md
So, no matter how far in the past you specify the start time, for example, 2017-
## Recurrence behavior
-Recurring built-in triggers, such as the [Recurrence trigger](../connectors/connectors-native-recurrence.md), run natively on the Azure Logic Apps runtime. These triggers differ from recurring connection-based managed connector triggers where you need to create a connection first, such as the Office 365 Outlook managed connector trigger.
+Recurring built-in triggers, such as the [Recurrence trigger](../connectors/connectors-native-recurrence.md), run directly and natively on the Azure Logic Apps runtime. These triggers differ from recurring connection-based managed connector triggers where you need to create a connection first, such as the Office 365 Outlook managed connector trigger.
-For both kinds of triggers, if a recurrence doesn't specify a specific start date and time, the first recurrence runs immediately when you save or deploy the logic app resource, despite your trigger's recurrence setup. To avoid this behavior, provide a start date and time for when you want the first recurrence to run.
+For both kinds of triggers, if a recurrence doesn't specify a start date and time, the first recurrence runs immediately when you save or deploy the logic app resource, despite your trigger's recurrence setup. To avoid this behavior, provide a start date and time for when you want the first recurrence to run.
### Recurrence for built-in triggers
-Recurring built-in triggers follow the schedule that you set, including any specified time zone. However, if a recurrence doesn't specify other advanced scheduling options, such as specific times to run future recurrences, those recurrences are based on the last trigger execution. As a result, the start times for those recurrences might drift due to factors such as latency during storage calls.
+Recurring built-in triggers follow the schedule that you set, including any specified time zone. However, if a recurrence doesn't specify other advanced scheduling options, such as specific times to run future recurrences, those recurrences are based on the last trigger execution. As a result, the start times for those recurrences might drift due to factors such as latency during storage calls. Advanced scheduling options, such as **At these hours** and **At these days** for the **Weekly** recurrence, are available and work only with built-in polling triggers, such as the **Recurrence** and **Sliding Window** triggers, which directly and natively run on the Azure Logic Apps runtime.
For more information, review the following documentation: * [Trigger recurrence for daylight saving time and standard time](#daylight-saving-standard-time) * [Troubleshoot recurrence issues](../logic-apps/concepts-schedule-automated-recurring-tasks-workflows.md#recurrence-issues)
-### Recurrence for connection-based triggers
+### Recurrence for managed triggers
-For recurring connection-based triggers, such as Office 365 Outlook, the schedule isn't the only driver that controls execution. The time zone only determines the initial start time. Subsequent runs depend on the recurrence schedule, the last trigger execution, and other factors that might cause run times to drift or produce unexpected behavior, for example:
+For recurring managed triggers, such as Office 365 Outlook, Outlook.com, and so on, the schedule isn't the only driver that controls execution. The time zone determines only the initial start time. Subsequent runs depend on the recurrence schedule, the last trigger execution, and other factors that might cause run times to drift or produce unexpected behavior, for example:
* Whether the trigger accesses a server that has more data, which the trigger immediately tries to fetch. * Any failures or retries that the trigger incurs.
For recurring connection-based triggers, such as Office 365 Outlook, the schedul
* Not maintaining the specified schedule when daylight saving time (DST) starts and ends. * Other factors that can affect when the next run time happens.
+Advanced scheduling options, such as **At these hours** and **At these days** for the **Weekly** recurrence, aren't available or supported for connectors that are Microsoft-managed, hosted, and run in Azure. These polling triggers calculate the next recurrence by using only the **Interval** and **Frequency** values.
+ For more information, review the following documentation: * [Trigger recurrence for daylight saving time and standard time](#daylight-saving-standard-time)
logic-apps Create Monitoring Tracking Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-monitoring-tracking-queries.md
Last updated 01/04/2024
# View and create queries for monitoring and tracking in Azure Monitor logs for Azure Logic Apps > [!NOTE] > This article applies only to Consumption logic apps. For information about monitoring Standard logic apps, review
logic-apps Create Serverless Apps Visual Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-serverless-apps-visual-studio.md
Last updated 01/04/2024
# Create an example serverless app with Azure Logic Apps and Azure Functions in Visual Studio You can quickly create, build, and deploy cloud-based "serverless" apps by using the services and capabilities in Azure, such as Azure Logic Apps and Azure Functions. When you use Azure Logic Apps, you can quickly and easily build workflows using low-code or no-code approaches to simplify orchestrating combined tasks. You can integrate different services, cloud, on-premises, or hybrid, without coding those interactions, having to maintain glue code, or learn new APIs or specifications. When you use Azure Functions, you can speed up development by using an event-driven model. You can use triggers that respond to events by automatically running your own code. You can use bindings to seamlessly integrate other services.
logic-apps Export From Microsoft Flow Logic App Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/export-from-microsoft-flow-logic-app-template.md
Last updated 01/10/2024
# Export flows from Power Automate and deploy to Azure Logic Apps To extend and expand your flow's capabilities, you can migrate that flow from [Power Automate](https://make.powerautomate.com) to a Consumption logic app workflow that runs in [multi-tenant Azure Logic Apps](logic-apps-overview.md). You can export your flow as an Azure Resource Manager template for a logic app, deploy that logic app template to an Azure resource group, and then open that logic app in the workflow designer.
logic-apps Handle Long Running Stored Procedures Sql Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/handle-long-running-stored-procedures-sql-connector.md
Last updated 01/04/2024
# Handle stored procedure timeouts in the SQL connector for Azure Logic Apps When your logic app works with result sets so large that the [SQL connector](../connectors/connectors-create-api-sqlazure.md) doesn't return all the results at the same time, or if you want more control over the size and structure for your result sets, you can create a [stored procedure](/sql/relational-databases/stored-procedures/stored-procedures-database-engine) that organizes the results the way that you want. The SQL connector provides many backend features that you can access by using [Azure Logic Apps](../logic-apps/logic-apps-overview.md) so that you can more easily automate business tasks that work with SQL database tables.
logic-apps Logic Apps Author Definitions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-author-definitions.md
Last updated 01/04/2024
# Create, edit, or extend JSON for logic app workflow definitions in Azure Logic Apps When you create enterprise integration solutions with automated workflows in [Azure Logic Apps](../logic-apps/logic-apps-overview.md), the underlying workflow definitions use simple and declarative JavaScript Object Notation (JSON) along with the [Workflow Definition Language (WDL) schema](../logic-apps/logic-apps-workflow-definition-language.md) for their description and validation. These formats make workflow definitions easier to read and understand without knowing much about code.
logic-apps Logic Apps Azure Resource Manager Templates Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-azure-resource-manager-templates-overview.md
Last updated 06/12/2024
# Overview: Automate deployment for Azure Logic Apps by using Azure Resource Manager templates When you're ready to automate creating and deploying your logic app, you can expand your logic app's underlying workflow definition into an [Azure Resource Manager template](../azure-resource-manager/management/overview.md). This template defines the infrastructure, resources, parameters, and other information for provisioning and deploying your logic app. By defining parameters for values that vary at deployment, also known as *parameterizing*, you can repeatedly and consistently deploy logic apps based on different deployment needs.
logic-apps Logic Apps Batch Process Send Receive Messages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-batch-process-send-receive-messages.md
Last updated 01/10/2024
# Send, receive, and batch process messages in Azure Logic Apps To send and process messages together in a specific way as groups, you can create a batching solution. This solution collects messages into a *batch* and waits until your specified criteria are met before releasing and processing the batched messages. Batching can reduce how often your logic app processes messages.
logic-apps Logic Apps Control Flow Run Steps Group Scopes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-control-flow-run-steps-group-scopes.md
Last updated 06/21/2024
# Run actions based on group status by using scopes in Azure Logic Apps To run actions only after another group of actions succeed or fail, group those actions inside a *scope*. This structure is useful when
logic-apps Logic Apps Control Flow Switch Statement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-control-flow-switch-statement.md
Last updated 01/04/2024
# Create switch actions that run workflow actions based on specific values in Azure Logic Apps To run specific actions based on the values of objects, expressions, or tokens, add a *switch* action. This structure evaluates the object,
logic-apps Logic Apps Create Api App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-create-api-app.md
Last updated 10/23/2023
# Create custom APIs you can call from Azure Logic Apps Although Azure Logic Apps offers hundreds of connectors that you can use in logic app workflows, you might want to call APIs,
logic-apps Logic Apps Create Azure Resource Manager Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-create-azure-resource-manager-templates.md
Last updated 01/04/2024
# Create Azure Resource Manager templates to automate Consumption logic app deployment for Azure Logic Apps To help you automatically create and deploy a Consumption logic app, this article describes the ways that you can create an [Azure Resource Manager template](../azure-resource-manager/management/overview.md). Azure Logic Apps also provides a [prebuilt logic app Azure Resource Manager template](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.logic/logic-app-create/azuredeploy.json) that you can reuse, not only to create Consumption logic apps, but also to define the resources and parameters for deployment. You can use this template for your own business scenarios or customize the template to meet your requirements. For an overview about the structure and syntax for a template that contains a workflow definition and other resources necessary for deployment, see [Overview: Automate deployment for logic apps with Azure Resource Manager templates](logic-apps-azure-resource-manager-templates-overview.md).
logic-apps Logic Apps Custom Api Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-custom-api-authentication.md
Your logic app resource uses this Microsoft Entra application identity to authen
#### [PowerShell](#tab/azure-powershell) You can perform this task through Azure Resource Manager with PowerShell. In PowerShell, run the following commands:
logic-apps Logic Apps Custom Api Host Deploy Call https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-custom-api-host-deploy-call.md
Last updated 01/04/2024
# Deploy and call custom APIs from workflows in Azure Logic Apps After you [create your own APIs](./logic-apps-create-api-app.md) to use in your logic app workflows, you need to deploy those APIs before you can call them. You can deploy your APIs as [web apps](../app-service/overview.md), but consider deploying your APIs as [API apps](../app-service/app-service-web-tutorial-rest-api.md), which make your job easier when you build, host, and consume APIs in the cloud and on premises. You don't have to change any code in your APIs - just deploy your code to an API app. You can host your APIs on [Azure App Service](../app-service/overview.md), a platform-as-a-service (PaaS) offering that provides highly scalable, easy API hosting.
logic-apps Logic Apps Deploy Azure Resource Manager Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-deploy-azure-resource-manager-templates.md
ms.devlang: azurecli
# Deploy Azure Resource Manager templates for Azure Logic Apps After you create an Azure Resource Manager template for your Consumption logic app, you can deploy your template in these ways:
logic-apps Logic Apps Enterprise Integration Metadata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-metadata.md
Last updated 01/04/2024
# Manage artifact metadata in integration accounts for Azure Logic Apps You can define custom metadata for artifacts in integration accounts and get that metadata during runtime for your logic app workflow to use. For example, you can provide metadata for artifacts, such as partners, agreements, schemas, and maps. All these artifact types store metadata as key-value pairs.
logic-apps Logic Apps Enterprise Integration Rosettanet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-rosettanet.md
Last updated 01/31/2024
# Exchange RosettaNet messages for B2B integration using workflows in Azure Logic Apps To send and receive RosettaNet messages in workflows that you create using Azure Logic Apps, you can use the RosettaNet connector, which provides actions that manage and support communication that follows RosettaNet standards. RosettaNet is a non-profit consortium that has established standard processes for sharing business information. These standards are commonly used for supply chain processes and are widespread in the semiconductor, electronics, and logistics industries. The RosettaNet consortium creates and maintains Partner Interface Processes (PIPs), which provide common business process definitions for all RosettaNet message exchanges. RosettaNet is based on XML and defines message guidelines, interfaces for business processes, and implementation frameworks for communication between companies. For more information, visit the [RosettaNet site](https://www.gs1us.org/resources/rosettanet).
logic-apps Logic Apps Exceed Default Page Size With Pagination https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-exceed-default-page-size-with-pagination.md
Last updated 01/04/2024
# Get more data, items, or records by using pagination in Azure Logic Apps When you retrieve data, items, or records by using a connector action in [Azure Logic Apps](../logic-apps/logic-apps-overview.md), you might get
logic-apps Logic Apps Handle Large Messages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-handle-large-messages.md
Last updated 01/04/2024
# Handle large messages in workflows using chunking in Azure Logic Apps Azure Logic Apps has different maximum limits on the message content size that triggers and actions can handle in logic app workflows, based on the logic app resource type and the environment where that logic app workflow runs. These limits help reduce any overhead that results from storing and processing [large messages](#what-is-large-message). For more information about message size limits, review [Message limits in Azure Logic Apps](logic-apps-limits-and-config.md#messages).
logic-apps Logic Apps Scenario Edi Send Batch Messages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-scenario-edi-send-batch-messages.md
Last updated 01/04/2024
# Exchange EDI messages as batches or groups between trading partners in Azure Logic Apps In business to business (B2B) scenarios, partners often exchange messages in groups or *batches*.
logic-apps Logic Apps Scenario Social Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-scenario-social-serverless.md
Last updated 01/04/2024
# Create a streaming customer insights dashboard with Azure Logic Apps and Azure Functions Azure offers [serverless](https://azure.microsoft.com/solutions/serverless/) tools that help you quickly build and host apps in the cloud, without having to think about infrastructure.
logic-apps Manage Logic Apps With Visual Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/manage-logic-apps-with-visual-studio.md
Last updated 01/04/2024
# Manage logic apps with Visual Studio Although you can create, edit, manage, and deploy logic apps in the [Azure portal](https://portal.azure.com), you can also use Visual Studio when you want to add your logic apps to source control, publish different versions, and create [Azure Resource Manager](../azure-resource-manager/management/overview.md) templates for various deployment environments. With Visual Studio Cloud Explorer, you can find and manage your logic apps along with other Azure resources. For example, you can open, download, edit, run, view run history, disable, and enable logic apps that are already deployed in the Azure portal. If you're new to working with Azure Logic Apps in Visual Studio, learn [how to create logic apps with Visual Studio](../logic-apps/quickstart-create-logic-apps-with-visual-studio.md).
logic-apps Monitor B2b Messages Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/monitor-b2b-messages-log-analytics.md
Last updated 01/04/2024
# Set up Azure Monitor logs and collect diagnostics data for B2B messages in Azure Logic Apps > [!NOTE] > This article applies only to Consumption logic apps. For information about monitoring Standard logic apps, review
To set up logging for your integration account, [install the Logic Apps B2B solu
This article shows how to enable Azure Monitor logging for your integration account. ## Prerequisites
logic-apps Quickstart Create Deploy Azure Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/quickstart-create-deploy-azure-resource-manager-template.md
Last updated 01/04/2024
# Quickstart: Create and deploy a Consumption logic app workflow in multi-tenant Azure Logic Apps with an ARM template [Azure Logic Apps](logic-apps-overview.md) is a cloud service that helps you create and run automated workflows that integrate data, apps, cloud-based services, and on-premises systems by choosing from [hundreds of connectors](/connectors/connector-reference/connector-reference-logicapps-connectors). This quickstart focuses on the process for deploying an Azure Resource Manager template (ARM template) to create a basic [Consumption logic app workflow](logic-apps-overview.md#resource-environment-differences) that checks the status for Azure on an hourly schedule and runs in [multi-tenant Azure Logic Apps](logic-apps-overview.md#resource-environment-differences). If your environment meets the prerequisites, and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
logic-apps Quickstart Create Deploy Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/quickstart-create-deploy-bicep.md
Last updated 01/04/2024
# Quickstart: Create and deploy a Consumption logic app workflow in multi-tenant Azure Logic Apps with Bicep [Azure Logic Apps](logic-apps-overview.md) is a cloud service that helps you create and run automated workflows that integrate data, apps, cloud-based services, and on-premises systems by choosing from [hundreds of connectors](/connectors/connector-reference/connector-reference-logicapps-connectors). This quickstart focuses on the process for deploying a Bicep file to create a basic [Consumption logic app workflow](logic-apps-overview.md#resource-environment-differences) that checks the status for Azure on an hourly schedule and runs in [multi-tenant Azure Logic Apps](logic-apps-overview.md#resource-environment-differences). ## Prerequisites
logic-apps Quickstart Create Example Consumption Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/quickstart-create-example-consumption-workflow.md
Last updated 06/13/2024
# Quickstart: Create an example Consumption logic app workflow using the Azure portal To create an automated workflow that performs tasks with multiple cloud services, this quickstart shows how to create an example logic app workflow that integrates the following services: an RSS feed for a website and an email account.
logic-apps Quickstart Create Logic Apps Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/quickstart-create-logic-apps-visual-studio-code.md
Last updated 01/04/2024
# Quickstart: Create and manage logic app workflow definitions with multitenant Azure Logic Apps and Visual Studio Code This quickstart shows how to create and manage logic app workflows that help you automate tasks and processes that integrate apps, data, systems, and services across organizations and enterprises by using multitenant [Azure Logic Apps](../logic-apps/logic-apps-overview.md) and Visual Studio Code. You can create and edit the underlying workflow definitions, which use JavaScript Object Notation (JSON), for logic apps through a code-based experience. You can also work on existing logic apps that are already deployed to Azure. For more information about the multitenant versus single-tenant model, review [Single-tenant versus multitenant and integration service environment](single-tenant-overview-compare.md).
logic-apps Quickstart Create Logic Apps With Visual Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/quickstart-create-logic-apps-with-visual-studio.md
Last updated 01/04/2024
# Quickstart: Create automated integration workflows with multitenant Azure Logic Apps and Visual Studio This quickstart shows how to design, develop, and deploy automated workflows that integrate apps, data, systems, and services across enterprises and organizations by using multitenant [Azure Logic Apps](logic-apps-overview.md) and Visual Studio. Although you can perform these tasks in the Azure portal, Visual Studio lets you add your logic apps to source control, publish different versions, and create Azure Resource Manager templates for different deployment environments. For more information about the multitenant versus single-tenant model, review [Single-tenant versus multitenant and integration service environment](single-tenant-overview-compare.md).
logic-apps Quickstart Logic Apps Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/quickstart-logic-apps-azure-cli.md
Last updated 01/04/2024
# Quickstart: Create and manage workflows with Azure CLI in Azure Logic Apps This quickstart shows how to create and manage automated workflows that run in Azure Logic Apps by using the [Azure CLI Logic Apps extension](/cli/azure/logic) (`az logic`). From the command line, you can create a [Consumption logic app](logic-apps-overview.md#resource-environment-differences) in multi-tenant Azure Logic Apps by using the JSON file for a logic app workflow definition. You can then manage your logic app by running operations such as `list`, `show` (`get`), `update`, and `delete` from the command line.
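As a rough sketch of that lifecycle, the following commands create a Consumption logic app from a workflow definition file and then list, show, and delete it; the resource names and the workflow.json path are placeholders.

```azurecli
# Create a logic app from a JSON workflow definition (the file path is illustrative).
az logic workflow create --resource-group exampleRG --name exampleWorkflow \
    --definition workflow.json --location eastus

# List the logic apps in the resource group, then inspect one.
az logic workflow list --resource-group exampleRG --output table
az logic workflow show --resource-group exampleRG --name exampleWorkflow

# Delete the logic app when you no longer need it.
az logic workflow delete --resource-group exampleRG --name exampleWorkflow --yes
```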
logic-apps Quickstart Logic Apps Azure Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/quickstart-logic-apps-azure-powershell.md
Last updated 01/04/2024
# Quickstart: Create and manage workflows with Azure PowerShell in Azure Logic Apps This quickstart shows how to create and manage automated workflows that run in Azure Logic Apps by using [Azure PowerShell](/powershell/azure/install-azure-powershell). From PowerShell, you can create a [Consumption logic app](logic-apps-overview.md#resource-environment-differences) in multi-tenant Azure Logic Apps by using the JSON file for a logic app workflow definition. You can then manage your logic app by running the cmdlets in the [Az.LogicApp](/powershell/module/az.logicapp/) PowerShell module.
logic-apps Sample Logic Apps Cli Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/sample-logic-apps-cli-script.md
Last updated 01/04/2024
# Azure CLI script sample - create a logic app This script creates a sample logic app through the [Azure CLI Logic Apps extension](/cli/azure/logic) (`az logic`). For a detailed guide to creating and managing logic apps through the Azure CLI, see the [Logic Apps quickstart for the Azure CLI](quickstart-logic-apps-azure-cli.md).
logic-apps Send Related Messages Sequential Convoy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/send-related-messages-sequential-convoy.md
Last updated 01/04/2024
# Send related messages in order by using a sequential convoy in Azure Logic Apps with Azure Service Bus When you need to send correlated messages in a specific order, you can follow the [*sequential convoy* pattern](/azure/architecture/patterns/sequential-convoy) in [Azure Logic Apps](../logic-apps/logic-apps-overview.md) by using the [Azure Service Bus connector](../connectors/connectors-create-api-servicebus.md). Correlated messages have a property that defines the relationship between those messages, such as the ID for the [session](../service-bus-messaging/message-sessions.md) in Service Bus.
logic-apps Tutorial Build Schedule Recurring Logic App Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/tutorial-build-schedule-recurring-logic-app-workflow.md
Last updated 06/11/2024
# Tutorial: Create schedule-based automated workflows using Azure Logic Apps This tutorial shows how to build an example [logic app workflow](../logic-apps/logic-apps-overview.md) that runs on a recurring schedule. Specifically, this example workflow checks the travel time, including the traffic, between two places and runs every weekday morning. If the time exceeds a specific limit, the workflow sends you an email that includes the travel time and the extra time necessary to arrive at your destination. The workflow includes various steps, which start with a schedule-based trigger followed by a Bing Maps action, a data operations action, a control flow action, and an email notification action.
logic-apps Tutorial Process Email Attachments Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/tutorial-process-email-attachments-workflow.md
Last updated 04/16/2024
# Tutorial: Create workflows that process emails using Azure Logic Apps, Azure Functions, and Azure Storage Azure Logic Apps helps you automate workflows and integrate data across Azure services, Microsoft services, other software-as-a-service (SaaS) apps, and on-premises systems. This tutorial shows how to build a [logic app workflow](logic-apps-overview.md) that handles incoming emails and any attachments, analyzes the email content using Azure Functions, saves the content to Azure storage, and sends email for reviewing the content.
logic-apps Tutorial Process Mailing List Subscriptions Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/tutorial-process-mailing-list-subscriptions-workflow.md
Last updated 01/04/2024
# Tutorial: Create automated approval-based workflows by using Azure Logic Apps This tutorial shows how to build an example [logic app workflow](../logic-apps/logic-apps-overview.md) that automates approval-based tasks. Specifically, this example workflow processes subscription requests for a mailing list that's managed by the [MailChimp](https://mailchimp.com/) service. This workflow includes various steps, which start by monitoring an email account for requests, send those requests for approval, check whether the request gets approved, add approved members to the mailing list, and confirm whether new members get added to the list.
logic-apps Update Workflow Definition Language Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/update-workflow-definition-language-schema.md
Last updated 08/15/2023
# Update schema for Workflow Definition Language in Azure Logic Apps - June 1, 2016 The [latest Workflow Definition Language schema version June-01-2016](https://schema.management.azure.com/schemas/2016-06-01/Microsoft.Logic.json) and API version for Azure Logic Apps includes key improvements that make Consumption logic app workflows more reliable and easier to use:
machine-learning Concept Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-compute-instance.md
Follow the steps in [Create resources you need to get started](quickstart-create
For more options, see [create a new compute instance](how-to-create-compute-instance.md?tabs=azure-studio#create).
-As an administrator, you can **[create a compute instance for others in the workspace](how-to-create-compute-instance.md#create-on-behalf-of)**.
+As an administrator, you can **[create a compute instance for others in the workspace](how-to-create-compute-instance.md#create-on-behalf-of)**. SSO has to be disabled for such a compute instance.
You can also **[use a setup script](how-to-customize-compute-instance.md)** for an automated way to customize and configure the compute instance.
machine-learning Dsvm Tutorial Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/dsvm-tutorial-bicep.md
This quickstart shows how to create an Ubuntu Data Science Virtual Machine using Bicep. A Data Science Virtual Machine (DSVM) is a cloud-based virtual machine, preloaded with a suite of data science and machine learning frameworks and tools. When deployed on GPU-powered compute resources, all tools and libraries are configured to use the GPU. ## Prerequisites
machine-learning Dsvm Tutorial Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/dsvm-tutorial-resource-manager.md
This quickstart shows how to create an Ubuntu Data Science Virtual Machine (DSVM) using an Azure Resource Manager template (ARM template). A Data Science Virtual Machine is a cloud-based resource, preloaded with a suite of data science and machine learning frameworks and tools. When deployed on GPU-powered compute resources, all tools and libraries are configured to use the GPU. If your environment meets the prerequisites and you know how to use ARM templates, select the **Deploy to Azure** button. This opens the template in the Azure portal.
machine-learning Vm Do Ten Things https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/vm-do-ten-things.md
In this article, you learn how to use your DSVM to both handle data science task
- An Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/services/machine-learning/) before you begin. - A provisioned DSVM on the Azure portal. For more information, visit the [Creating a virtual machine](https://portal.azure.com/#create/microsoft-dsvm.dsvm-windowsserver-2016) resource. ## Use Jupyter Notebooks
machine-learning How To Access Data Batch Endpoints Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-data-batch-endpoints-jobs.md
Last updated 5/01/2023
+reviewer: msakande
- devplatv2 - devx-track-azurecli
# Create jobs and input data for batch endpoints
-Batch endpoints can be used to perform long batch operations over large amounts of data. Such data can be placed in different places. Some type of batch endpoints can also receive literal parameters as inputs. In this tutorial we'll cover how you can specify those inputs, and the different types or locations supported.
+Batch endpoints can be used to perform long batch operations over large amounts of data. Such data can reside in different locations. Some types of batch endpoints can also receive literal parameters as inputs. This article covers how to specify those inputs.
-## Before invoking an endpoint
+<!-- , and the different types or locations supported. -->
+
+## Prerequisites
To successfully invoke a batch endpoint and create jobs, ensure you have the following:
-* You have permissions to run a batch endpoint deployment. **AzureML Data Scientist**, **Contributor**, and **Owner** roles can be used to run a deployment. For custom roles definitions read [Authorization on batch endpoints](how-to-authenticate-batch-endpoint.md) to know the specific permissions needed.
+* A batch endpoint and deployment. If you don't have one already, see [Deploy models for scoring in batch endpoints](how-to-use-batch-model-deployments.md) to create a deployment.
+
+* Permissions to run a batch endpoint deployment. **AzureML Data Scientist**, **Contributor**, and **Owner** roles can be used to run a deployment. For custom role definitions, see [Authorization on batch endpoints](how-to-authenticate-batch-endpoint.md) to learn the specific permissions needed.
-* You have a valid Microsoft Entra ID token representing a security principal to invoke the endpoint. This principal can be a user principal or a service principal. In any case, once an endpoint is invoked, a batch deployment job is created under the identity associated with the token. For testing purposes, you can use your own credentials for the invocation as mentioned below.
+* A valid Microsoft Entra ID token representing a security principal to invoke the endpoint. This principal can be a user principal or a service principal. In any case, once an endpoint is invoked, a batch deployment job is created under the identity associated with the token. You can use your own credentials for the invocation as follows:
# [Azure CLI](#tab/cli)
To successfully invoke a batch endpoint and create jobs, ensure you have the fol
- To learn more about how to authenticate with multiple type of credentials read [Authorization on batch endpoints](how-to-authenticate-batch-endpoint.md).
+ To learn more about how to start batch deployment jobs using different types of credentials, see [How to run jobs using different types of credentials](how-to-authenticate-batch-endpoint.md#how-to-run-jobs-using-different-types-of-credentials).
* The **compute cluster** where the endpoint is deployed has access to read the input data.
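For instance, a minimal invocation sketch with the Azure CLI `ml` extension might look like the following; the endpoint, workspace, and input URI are placeholders, and the exact input flags can vary across extension versions.

```azurecli
# Invoke the batch endpoint with a data input; a batch job is created
# under the identity associated with your credentials.
az ml batch-endpoint invoke --name my-batch-endpoint \
    --resource-group exampleRG --workspace-name exampleWS \
    --input azureml://datastores/exampledatastore/paths/example-data/
```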
The following example shows how to change the location where an output named `sc
```
-## Next steps
+## Related content
* [Troubleshooting batch endpoints](how-to-troubleshoot-batch-endpoints.md). * [Customize outputs in batch deployments](how-to-deploy-model-custom-output.md).
machine-learning How To Create Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-compute-instance.md
The following is a sample policy that sets a default shutdown schedule of 10 PM PST.
As an administrator, you can create a compute instance on behalf of a data scientist and assign the instance to them with:
-* Studio, using the [Security settings](?tabs=azure-studio-preview#security-settings)
+* Studio, using the security settings in this article.
* [Azure Resource Manager template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.machinelearningservices/machine-learning-compute-create-computeinstance). For details on how to find the TenantID and ObjectID needed in this template, see [Find identity object IDs for authentication configuration](../healthcare-apis/azure-api-for-fhir/find-identity-object-ids.md). You can also find these values in the Microsoft Entra admin center.
machine-learning How To Export Delete Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-export-delete-data.md
monikerRange: 'azureml-api-2 || azureml-api-1'
In Azure Machine Learning, you can export or delete your workspace data with either the portal graphical interface or the Python SDK. This article describes both options. ## Control your workspace data
machine-learning How To Identity Based Service Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-identity-based-service-authentication.md
Not supported currently.
To delete one or more existing UAIs, put the UAI IDs that need to be preserved under the user_assigned_identities section; the remaining UAI IDs are deleted.<br> To update the identity type from SAI to UAI|SAI, change the type from "user_assigned" to "system_assigned, user_assigned".
+### Add a user-assigned managed identity to a workspace in addition to a system-assigned identity
+
+In some scenarios, you might need to use a user-assigned managed identity in addition to the default system-assigned workspace identity. To add a user-assigned managed identity without changing the existing workspace identity, use the following steps:
+
+1. [Create a user-assigned managed identity](/entra/identity/managed-identities-azure-resources/how-manage-user-assigned-managed-identities). Save the ID for the managed identity that you create.
+1. To attach the managed identity to your workspace, you need a YAML file that specifies the identity. The following is an example of the YAML file contents. Replace `<TENANT_ID>`, `<SUBSCRIPTION_ID>`, `<RESOURCE_GROUP>`, and `<USER_MANAGED_ID>` with your values.
+
+ ```yml
+ identity:
+ type: system_assigned,user_assigned
+ tenant_id: <TENANT_ID>
+ user_assigned_identities:
+        '/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<USER_MANAGED_ID>':
+ {}
+ ```
+
+1. Use the Azure CLI `az ml workspace update` command to update your workspace. Specify the YAML file from the previous step using the `--file` parameter. The following example shows what this command looks like:
+
+ ```azurecli
+ az ml workspace update --resource-group <RESOURCE_GROUP> --name <WORKSPACE_NAME> --file <YAML_FILE_NAME>.yaml
+ ```
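As an optional check that isn't part of the steps above, you can confirm the user-assigned identity was attached by querying the workspace identity block:

```azurecli
# Show only the identity section of the workspace; both the system-assigned
# and the user-assigned identities should appear after the update.
az ml workspace show --resource-group <RESOURCE_GROUP> --name <WORKSPACE_NAME> --query identity
```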
+ ### Compute cluster > [!NOTE]
machine-learning How To Tune Hyperparameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-tune-hyperparameters.md
Previously updated : 06/7/2023 Last updated : 06/25/2024
In this example, the early termination policy is applied at every interval start
### Truncation selection policy
-[Truncation selection](/python/api/azure-ai-ml/azure.ai.ml.sweep.truncationselectionpolicy) cancels a percentage of lowest performing jobs at each evaluation interval. jobs are compared using the primary metric.
+[Truncation selection](/python/api/azure-ai-ml/azure.ai.ml.sweep.truncationselectionpolicy) cancels a percentage of lowest performing jobs at each evaluation interval. Jobs are compared using the primary metric.
This policy takes the following configuration parameters:
In this example, the early termination policy is applied at every interval start
### No termination policy (default)
-If no policy is specified, the hyperparameter tuning service will let all training jobs execute to completion.
+If no policy is specified, the hyperparameter tuning service lets all training jobs execute to completion.
```Python sweep_job.early_termination = None
Control your resource budget by setting limits for your sweep job.
* `max_total_trials`: Maximum number of trial jobs. Must be an integer between 1 and 1000. * `max_concurrent_trials`: (optional) Maximum number of trial jobs that can run concurrently. If not specified, max_total_trials number of jobs launch in parallel. If specified, must be an integer between 1 and 1000.
-* `timeout`: Maximum time in seconds the entire sweep job is allowed to run. Once this limit is reached the system will cancel the sweep job, including all its trials.
-* `trial_timeout`: Maximum time in seconds each trial job is allowed to run. Once this limit is reached the system will cancel the trial.
+* `timeout`: Maximum time in seconds the entire sweep job is allowed to run. Once this limit is reached, the system cancels the sweep job, including all its trials.
+* `trial_timeout`: Maximum time in seconds each trial job is allowed to run. Once this limit is reached, the system cancels the trial.
>[!NOTE] >If both max_total_trials and timeout are specified, the hyperparameter tuning experiment terminates when the first of these two thresholds is reached.
Control your resource budget by setting limits for your sweep job.
sweep_job.set_limits(max_total_trials=20, max_concurrent_trials=4, timeout=1200) ```
-This code configures the hyperparameter tuning experiment to use a maximum of 20 total trial jobs, running four trial jobs at a time with a timeout of 1200 seconds for the entire sweep job.
+This code configures the hyperparameter tuning experiment to use a maximum of 20 total trial jobs, running four trial jobs at a time with a timeout of 1,200 seconds for the entire sweep job.
## Configure hyperparameter tuning experiment
sweep_job.early_termination = MedianStoppingPolicy(
) ```
-The `command_job` is called as a function so we can apply the parameter expressions to the sweep inputs. The `sweep` function is then configured with `trial`, `sampling-algorithm`, `objective`, `limits`, and `compute`. The above code snippet is taken from the sample notebook [Run hyperparameter sweep on a Command or CommandComponent](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/single-step/lightgbm/iris/lightgbm-iris-sweep.ipynb). In this sample, the `learning_rate` and `boosting` parameters will be tuned. Early stopping of jobs will be determined by a `MedianStoppingPolicy`, which stops a job whose primary metric value is worse than the median of the averages across all training jobs.(see [MedianStoppingPolicy class reference](/python/api/azure-ai-ml/azure.ai.ml.sweep.medianstoppingpolicy)).
+The `command_job` is called as a function so we can apply the parameter expressions to the sweep inputs. The `sweep` function is then configured with `trial`, `sampling-algorithm`, `objective`, `limits`, and `compute`. The above code snippet is taken from the sample notebook [Run hyperparameter sweep on a Command or CommandComponent](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/single-step/lightgbm/iris/lightgbm-iris-sweep.ipynb). In this sample, the `learning_rate` and `boosting` parameters are tuned. Early stopping of jobs is determined by a `MedianStoppingPolicy`, which stops a job whose primary metric value is worse than the median of the averages across all training jobs (see the [MedianStoppingPolicy class reference](/python/api/azure-ai-ml/azure.ai.ml.sweep.medianstoppingpolicy)).
To see how the parameter values are received, parsed, and passed to the training script to be tuned, refer to this [code sample](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/single-step/lightgbm/iris/src/main.py)
You can visualize all of your hyperparameter tuning jobs in the [Azure Machine L
:::image type="content" source="media/how-to-tune-hyperparameters/hyperparameter-tuning-metrics.png" alt-text="Hyperparameter tuning metrics chart"::: -- **Parallel Coordinates Chart**: This visualization shows the correlation between primary metric performance and individual hyperparameter values. The chart is interactive via movement of axes (click and drag by the axis label), and by highlighting values across a single axis (click and drag vertically along a single axis to highlight a range of desired values). The parallel coordinates chart includes an axis on the rightmost portion of the chart that plots the best metric value corresponding to the hyperparameters set for that job instance. This axis is provided in order to project the chart gradient legend onto the data in a more readable fashion.
+- **Parallel Coordinates Chart**: This visualization shows the correlation between primary metric performance and individual hyperparameter values. The chart is interactive via movement of axes (select and drag by the axis label), and by highlighting values across a single axis (select and drag vertically along a single axis to highlight a range of desired values). The parallel coordinates chart includes an axis on the rightmost portion of the chart that plots the best metric value corresponding to the hyperparameters set for that job instance. This axis is provided in order to project the chart gradient legend onto the data in a more readable fashion.
:::image type="content" source="media/how-to-tune-hyperparameters/hyperparameter-tuning-parallel-coordinates.png" alt-text="Hyperparameter tuning parallel coordinates chart":::
You can visualize all of your hyperparameter tuning jobs in the [Azure Machine L
:::image type="content" source="media/how-to-tune-hyperparameters/hyperparameter-tuning-2-dimensional-scatter.png" alt-text="Hyparameter tuning 2-dimensional scatter chart"::: -- **3-Dimensional Scatter Chart**: This visualization is the same as 2D but allows for three hyperparameter dimensions of correlation with the primary metric value. You can also click and drag to reorient the chart to view different correlations in 3D space.
+- **3-Dimensional Scatter Chart**: This visualization is the same as 2D but allows for three hyperparameter dimensions of correlation with the primary metric value. You can also select and drag to reorient the chart to view different correlations in 3D space.
:::image type="content" source="media/how-to-tune-hyperparameters/hyperparameter-tuning-3-dimensional-scatter.png" alt-text="Hyparameter tuning 3-dimensional scatter chart":::
machine-learning How To Use Retrieval Augmented Generation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-retrieval-augmented-generation.md
Previously updated : 06/30/2023 Last updated : 06/26/2024
In your Azure Machine Learning workspace, you can enable prompt flow by turn-on
1. Select **Prompt flow** on the left menu. - 2. Select **Create**.
-3. In the **Create from gallery** section, select **View Detail** on the Bring your own data Q&A sample.
+3. In the **Explore gallery** menu, select **View Detail** on the _Q&A on Your Data_ sample.
:::image type="content" source="./media/how-to-use-retrieval-augmented-generation/view-detail.png" alt-text="Screenshot showing view details button on the prompt flow sample.":::
machine-learning Concept Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/concept-connections.md
Previously updated : 06/30/2023 Last updated : 06/26/2024 # Connections in prompt flow
machine-learning Concept Session https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/concept-session.md
Previously updated : 06/30/2023 Last updated : 06/26/2024 # Compute session in prompt flow
managed-ccf Quickstart Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-ccf/quickstart-cli.md
Azure Managed CCF (Managed CCF) is a new and highly secure service for deploying confidential applications. For more information on Azure Managed CCF, see [About Azure Managed Confidential Consortium Framework](overview.md). Azure CLI is used to create and manage Azure resources using commands or scripts.
managed-ccf Quickstart Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-ccf/quickstart-go.md
Azure Managed CCF (Managed CCF) is a new and highly secure service for deploying
In this quickstart, you learn how to create a Managed CCF resource using the Azure SDK for Go library. [API reference documentation](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/confidentialledger/armconfidentialledger@v1.2.0-beta.1#section-documentation) | [Library source code](https://github.com/Azure/azure-sdk-for-go/tree/main/sdk/resourcemanager/confidentialledger/armconfidentialledger) | [Package (Go)](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/confidentialledger/armconfidentialledger@v1.2.0-beta.1)
managed-ccf Quickstart Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-ccf/quickstart-java.md
Azure Managed CCF (Managed CCF) is a new and highly secure service for deploying confidential applications. For more information on Azure Managed CCF, see [About Azure Managed Confidential Consortium Framework](overview.md). [API reference documentation](/java/api/com.azure.resourcemanager.confidentialledger) | [Library source code](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/confidentialledger) | [Package (maven central repository)](https://central.sonatype.com/artifact/com.azure.resourcemanager/azure-resourcemanager-confidentialledger/1.0.0-beta.3)
managed-ccf Quickstart Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-ccf/quickstart-net.md
Azure Managed CCF (Managed CCF) is a new and highly secure service for deploying
In this quickstart, you learn how to create a Managed CCF resource using the .NET client management library. [API reference documentation](/dotnet/api/overview/azure/resourcemanager.confidentialledger-readme) | [Library source code](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/confidentialledger/Azure.ResourceManager.ConfidentialLedger) | [Package (NuGet)](https://www.nuget.org/packages/Azure.ResourceManager.ConfidentialLedger/1.1.0-beta.2)
managed-ccf Quickstart Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-ccf/quickstart-portal.md
Azure Managed CCF (Managed CCF) is a new and highly secure service for deploying confidential applications. For more information on Managed CCF, see [About Azure Managed Confidential Consortium Framework](overview.md). In this quickstart, you create a Managed CCF resource with the [Azure portal](https://portal.azure.com).
Then, re-register the `Microsoft.ConfidentialLedger` resource provider as descri
1. From the Azure portal menu, or from the Home page, select **Create a resource**.
-2. In the Search box, enter "Confidential Ledger", select said application, and then choose **Create**.
+1. In the Search box, enter "Confidential Ledger", select it from the results, and then choose **Create**.
1. On the Create confidential ledger section, provide the following information: - **Subscription**: Choose the desired subscription.
Then, re-register the `Microsoft.ConfidentialLedger` resource provider as descri
- **Application Type**: Choose Custom JavaScript Application. - **Network Node Count**: Choose the desired node count.
+ :::image type="content" source="media/quickstart-tutorials/create-mccf-resource.png" alt-text="A screenshot of the Managed CCF create screen.":::
1. Select the **Security** tab.
Then, re-register the `Microsoft.ConfidentialLedger` resource provider as descri
- **Member Group**: An optional group name. - **Certificate**: Paste the contents of the member0_cert.pem file.
+ :::image type="content" source="media/quickstart-tutorials/create-mccf-resource-security-tab.png" alt-text="A screenshot of the Managed CCF resource security tab screen.":::
1. Select **Review + Create**. After validation has passed, select **Create**.
managed-ccf Quickstart Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-ccf/quickstart-python.md
Azure Managed CCF (Managed CCF) is a new and highly secure service for deploying confidential applications. For more information on Azure Managed CCF, see [About Azure Managed Confidential Consortium Framework](overview.md). [API reference documentation](https://azuresdkdocs.blob.core.windows.net/$web/python/azure-confidentialledger/latest/azure.confidentialledger.html) | [Library source code](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/confidentialledger) | [Package (Python Package Index) Management Library](https://pypi.org/project/azure-mgmt-confidentialledger/)
managed-ccf Quickstart Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-ccf/quickstart-typescript.md
Microsoft Azure Managed CCF (Managed CCF) is a new and highly secure service for deploying confidential applications. For more information on Azure Managed CCF, see [About Azure Managed Confidential Consortium Framework](overview.md). [API reference documentation](/javascript/api/overview/azure/confidential-ledger) | [Library source code](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/confidentialledger/arm-confidentialledger) | [Package (npm)](https://www.npmjs.com/package/@azure/arm-confidentialledger)
managed-grafana How To Connect To Data Source Privately https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-connect-to-data-source-privately.md
Managed private endpoints work with Azure services that support private link. Us
- Azure SQL server - Private link services - Azure Databricks-- Azure Database for PostgreSQL flexible servers
+- Azure Database for PostgreSQL flexible servers ([Only for servers that have public access networking](/azure/postgresql/flexible-server/concepts-networking-private-link))
## Prerequisites
mariadb Howto Configure Audit Logs Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-configure-audit-logs-cli.md
You can configure the [Azure Database for MariaDB audit logs](concepts-audit-logs.md) from the Azure CLI. ## Prerequisites
mariadb Howto Configure Privatelink Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-configure-privatelink-cli.md
A Private Endpoint is the fundamental building block for private link in Azure.
> [!NOTE] > The private link feature is only available for Azure Database for MariaDB servers in the General Purpose or Memory Optimized pricing tiers. Ensure the database server is in one of these pricing tiers. ## Prerequisites
mariadb Howto Configure Server Parameters Using Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-configure-server-parameters-using-powershell.md
To complete this how-to guide, you need:
If you choose to use PowerShell locally, connect to your Azure account using the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet. ## List server configuration parameters for Azure Database for MariaDB server
mariadb Howto Manage Vnet Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-manage-vnet-cli.md
Last updated 06/24/2022
Virtual Network (VNet) service endpoints and rules extend the private address space of a Virtual Network to your Azure Database for MariaDB server. Using convenient Azure CLI commands, you can create, update, delete, list, and show VNet service endpoints and rules to manage your server. For an overview of Azure Database for MariaDB VNet service endpoints, including limitations, see [Azure Database for MariaDB Server VNet service endpoints](concepts-data-access-security-vnet.md). VNet service endpoints are available in all supported regions for Azure Database for MariaDB. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
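As an illustrative sketch of those operations, the following commands create and list a VNet rule on a server; all resource names are placeholders, and the subnet is assumed to already have the Microsoft.Sql service endpoint enabled.

```azurecli
# Create a VNet rule that admits traffic from a specific subnet.
az mariadb server vnet-rule create --resource-group exampleRG \
    --server-name exampledemoserver --name exampleVnetRule \
    --vnet-name exampleVnet --subnet exampleSubnet

# List the VNet rules configured on the server.
az mariadb server vnet-rule list --resource-group exampleRG --server-name exampledemoserver
```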
VNets and Azure service resources can be in the same or different subscriptions.
## Sample script ### Run the script
VNets and Azure service resources can be in the same or different subscriptions.
## Clean up deployment ```azurecli echo "Cleaning up resources by removing the resource group..."
mariadb Howto Read Replicas Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-read-replicas-powershell.md
To complete this how-to guide, you need:
If you choose to use PowerShell locally, connect to your Azure account using the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet. > [!IMPORTANT] > The read replica feature is only available for Azure Database for MariaDB servers in the General
mariadb Howto Restart Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-restart-server-cli.md
The server restart will be blocked if the service is busy. For example, the serv
The time required to complete a restart depends on the MariaDB recovery process. To decrease the restart time, we recommend you minimize the amount of activity occurring on the server prior to the restart. ## Prerequisites
mariadb Howto Restart Server Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-restart-server-powershell.md
To complete this how-to guide, you need:
If you choose to use PowerShell locally, connect to your Azure account using the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet. ## Restart the server
mariadb Howto Restore Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-restore-server-cli.md
Last updated 06/24/2022
Azure Database for MariaDB servers are backed up periodically to enable Restore features. Using this feature, you can restore the server and all its databases to an earlier point in time, on a new server. ## Prerequisites
mariadb Howto Restore Server Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-restore-server-powershell.md
To complete this how-to guide, you need:
If you choose to use PowerShell locally, connect to your Azure account using the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet. ## Set backup configuration
mariadb Quickstart Create Mariadb Server Database Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/quickstart-create-mariadb-server-database-arm-template.md
Last updated 06/24/2022
Azure Database for MariaDB is a managed service that you use to run, manage, and scale highly available MariaDB databases in the cloud. In this quickstart, you use an Azure Resource Manager template (ARM template) to create an Azure Database for MariaDB server in the Azure portal, PowerShell, or Azure CLI. If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
mariadb Quickstart Create Mariadb Server Database Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/quickstart-create-mariadb-server-database-bicep.md
Last updated 06/24/2022
Azure Database for MariaDB is a managed service that you use to run, manage, and scale highly available MariaDB databases in the cloud. In this quickstart, you use Bicep to create an Azure Database for MariaDB server in PowerShell or Azure CLI. ## Prerequisites
mariadb Quickstart Create Mariadb Server Database Using Azure Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/quickstart-create-mariadb-server-database-using-azure-powershell.md
If this is your first time using the Azure Database for MariaDB service, you mus
Register-AzResourceProvider -ProviderNamespace Microsoft.DBforMariaDB ``` If you have multiple Azure subscriptions, choose the appropriate subscription in which the resources should be billed. Select a specific subscription ID using the [Set-AzContext](/powershell/module/az.accounts/set-azcontext) cmdlet.
mariadb Sample Scripts Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/sample-scripts-azure-cli.md
Keywords: azure cli samples, azure cli code samples, azure cli script samples
You can configure Azure Database for MariaDB by using the <a href="/cli/azure">Azure CLI</a>. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
mariadb Sample Change Server Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/scripts/sample-change-server-configuration.md
Last updated 01/26/2022
This sample CLI script lists all available configuration parameters as well as their allowable values for an Azure Database for MariaDB server, and sets *innodb_lock_wait_timeout* to a value other than the default. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)] ## Sample script ### Run the script
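The core of the script reduces to two commands, sketched here with placeholder names and an arbitrary timeout value:

```azurecli
# List all configurable server parameters and their allowed values.
az mariadb server configuration list --resource-group exampleRG --server-name exampledemoserver

# Set innodb_lock_wait_timeout to a non-default value (120 seconds is illustrative).
az mariadb server configuration set --resource-group exampleRG \
    --server-name exampledemoserver --name innodb_lock_wait_timeout --value 120
```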
This sample CLI script lists all available configuration parameters as well as t
## Clean up resources ```azurecli az group delete --name $resourceGroup
mariadb Sample Create Server And Firewall Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/scripts/sample-create-server-and-firewall-rule.md
Last updated 01/26/2022
This sample CLI script creates an Azure Database for MariaDB server and configures a server-level firewall rule. Once the script runs successfully, the MariaDB server is accessible by all Azure services and the configured IP address. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)] ## Sample script ### Run the script
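In outline, the script boils down to commands like the following; the names, credentials, SKU, and IP range are placeholders.

```azurecli
# Create the MariaDB server (supply your own strong admin password).
az mariadb server create --resource-group exampleRG --name exampledemoserver \
    --location eastus --admin-user exampleadmin --admin-password <secure-password> \
    --sku-name GP_Gen5_2

# Allow a single client IP address through the server-level firewall.
az mariadb server firewall-rule create --resource-group exampleRG \
    --server-name exampledemoserver --name AllowMyIP \
    --start-ip-address 192.0.2.10 --end-ip-address 192.0.2.10
```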
This sample CLI script creates an Azure Database for MariaDB server and configur
## Clean up resources ```azurecli az group delete --name $resourceGroup
mariadb Sample Create Server With Vnet Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/scripts/sample-create-server-with-vnet-rule.md
Last updated 01/26/2022
This sample CLI script creates an Azure Database for MariaDB server and configures a VNet rule. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)] ## Sample script ### Run the script
This sample CLI script creates an Azure Database for MariaDB server and configur
## Clean up resources ```azurecli az group delete --name $resourceGroup
mariadb Sample Point In Time Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/scripts/sample-point-in-time-restore.md
Last updated 02/11/2022
This sample CLI script restores a single Azure Database for MariaDB server to a previous point in time. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
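A hedged sketch of the restore step itself, with placeholder names and an arbitrary timestamp; the restore always lands on a new server.

```azurecli
# Restore the source server to a new server at a point in time (UTC, ISO 8601 format).
az mariadb server restore --resource-group exampleRG \
    --name exampledemoserver-restored \
    --source-server exampledemoserver \
    --restore-point-in-time "2024-06-15T13:10:00Z"
```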
This sample CLI script restores a single Azure Database for MariaDB server to a
## Clean up resources ```azurecli az group delete --name $resourceGroup
mariadb Sample Scale Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/scripts/sample-scale-server.md
Last updated 01/26/2022
This sample CLI script scales compute and storage for a single Azure Database for MariaDB server after querying the metrics. Compute can scale up or down. Storage can only scale up. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)] ## Sample script ### Run the script
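Sketched with placeholder names, the metrics check and the scale operation might look like this; the resource ID and target SKU are illustrative.

```azurecli
# Review recent CPU usage before deciding to scale (the resource ID is a placeholder).
az monitor metrics list \
    --resource "/subscriptions/<subscription-id>/resourceGroups/exampleRG/providers/Microsoft.DBforMariaDB/servers/exampledemoserver" \
    --metric cpu_percent

# Scale compute to 4 vCores; storage can also be grown (never shrunk) with the same command.
az mariadb server update --resource-group exampleRG --name exampledemoserver --sku-name GP_Gen5_4
```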
This sample CLI script scales compute and storage for a single Azure Database fo
## Clean up resources ```azurecli az group delete --name $resourceGroup
mariadb Sample Server Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/scripts/sample-server-logs.md
Last updated 01/26/2022
This sample CLI script enables and downloads the slow query logs of a single Azure Database for MariaDB server. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)] ## Sample script ### Run the script
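The essential commands behind the script look roughly like the following, with placeholder names; the log file name to download comes from the list output.

```azurecli
# Turn on the slow query log through a server parameter.
az mariadb server configuration set --resource-group exampleRG \
    --server-name exampledemoserver --name slow_query_log --value ON

# List the available log files, then download one by name (the file name is illustrative).
az mariadb server-logs list --resource-group exampleRG --server-name exampledemoserver
az mariadb server-logs download --resource-group exampleRG \
    --server-name exampledemoserver --name mysql-slow-exampledemoserver-2024061513.log
```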
This sample CLI script enables and downloads the slow query logs of a single Azu
## Clean up resources ```azurecli az group delete --name $resourceGroup
mariadb Tutorial Design Database Using Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/tutorial-design-database-using-powershell.md
If this is your first time using the Azure Database for MariaDB service, you mus
Register-AzResourceProvider -ProviderNamespace Microsoft.DBforMariaDB ``` If you have multiple Azure subscriptions, choose the appropriate subscription in which the resources should be billed. Select a specific subscription ID using the [Set-AzContext](/powershell/module/az.accounts/set-azcontext) cmdlet.
migrate Migrate Support Matrix Physical Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix-physical-migration.md
ms. Previously updated : 10/16/2023 Last updated : 06/18/2024
Linux file system/guest storage | For the latest information, see the [Linux fil
Network/Storage | For the latest information, see the [network](../site-recovery/vmware-physical-azure-support-matrix.md#network) and [storage](../site-recovery/vmware-physical-azure-support-matrix.md#storage) prerequisites for Site Recovery. Azure Migrate and Modernize provides identical network/storage requirements. Azure requirements | For the latest information, see the [Azure network](../site-recovery/vmware-physical-azure-support-matrix.md#azure-vm-network-after-failover), [storage](../site-recovery/vmware-physical-azure-support-matrix.md#azure-storage), and [compute](../site-recovery/vmware-physical-azure-support-matrix.md#azure-compute) requirements for Site Recovery. Azure Migrate and Modernize has identical requirements for physical server migration. Mobility service | Install the Mobility service agent on each machine you want to migrate.
-UEFI boot | Supported. UEFI-based machines are migrated to Azure generation 2 VMs. <br/><br/> The OS disk should have up to four partitions, and volumes should be formatted with NTFS.
+UEFI boot | Supported. <br/><br/> Windows: NTFS <br/><br/> Linux: The following file system types are supported: ext4, xfs, and btrfs. Some file systems, such as ZFS, UFS, ReiserFS, and DazukoFS, might not be supported because they require additional commands to mount them.
UEFI - Secure boot | Not supported for migration. Target disk | Machines can be migrated only to managed disks (standard HDD, standard SSD, premium SSD) in Azure. Ultra disk | Ultra disk migration isn't supported from the Azure Migrate and Modernize portal. You have to do an out-of-band migration for the disks that are recommended as Ultra disks. That is, you can migrate by selecting it as a premium disk type and change it to an Ultra disk after migration.
migrate Quickstart Create Migrate Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/quickstart-create-migrate-project.md
This quickstart describes how to set up an Azure Migrate project Recovery by usi
This template creates an Azure Migrate project that you can then use for assessing and migrating your on-premises servers, infrastructure, applications, and data. If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
migrate Troubleshoot Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/troubleshoot-project.md
description: Helps you to troubleshoot issues with creating and managing Azure M
Previously updated : 02/18/2022- Last updated : 06/26/2024+ # Troubleshoot Azure Migrate projects
This article helps you troubleshoot issues when creating and managing [Azure Mig
## How to add a new project?
-You can have multiple Azure Migrate projects in a subscription. [Learn how](./create-manage-projects.md) to create a project for the first time, or [add additional](create-manage-projects.md#create-additional-projects) projects.
+You can have multiple Azure Migrate projects in a subscription. Learn how to [create a project](./create-manage-projects.md) for the first time, or [add additional](create-manage-projects.md#create-additional-projects) projects.
## What Azure permissions are needed?
You need Contributor or Owner permissions in the subscription to create an Azure
## Can't find a project
-Finding an existing Azure Migrate project depends upon whether you're using the current or old version of Azure Migrate. [Follow](create-manage-projects.md#find-a-project).
-
+Finding an existing Azure Migrate project depends upon whether you're using the current or old version of Azure Migrate. [Follow these steps](create-manage-projects.md#find-a-project).
## Can't find a geography
Projects from the previous version of Azure Migrate can't be updated. You need t
If you try to create a project and encounter a deployment error: -- Try to create the project again in case it's a transient error. In **Deployments**, click on **Re-deploy** to try again.
+- Try to create the project again in case it's a transient error. In **Deployments**, select **Re-deploy** to try again.
- Check you have Contributor or Owner permissions in the subscription. - If you're deploying in a newly added geography, wait a short time and try again. - If you receive the error, "Requests must contain user identity headers", this might indicate that you don't have access to the Microsoft Entra tenant of the organization. In this case:
- - When you're added to a Microsoft Entra tenant for the first time, you receive an email invitation to join the tenant.
- - Accept the invitation to be added to the tenant.
- - If you can't see the email, contact a user with access to the tenant, and ask them to [resend the invitation](../active-directory/external-identities/add-users-administrator.md#resend-invitations-to-guest-users) to you.
- - After receiving the invitation email, open it and select the link to accept the invitation. Then, sign out of the Azure portal and sign in again. (refreshing the browser won't work.) You can then start creating the migration project.
+ - When you're added to a Microsoft Entra tenant for the first time, you receive an email invitation to join the tenant.
+ - Accept the invitation to be added to the tenant.
+ - If you can't see the email, contact a user with access to the tenant, and ask them to [resend the invitation](../active-directory/external-identities/add-users-administrator.md#resend-invitations-to-guest-users) to you.
+ - After receiving the invitation email, open it and select the link to accept the invitation. Then, sign out of the Azure portal and sign in again. (refreshing the browser won't work.) You can then start creating the migration project.
## How do I delete a project?
-[Follow these instructions](create-manage-projects.md#delete-a-project) to delete a project. Note that when you delete a project, both the project and the metadata about discovered machines in the project are deleted.
+[Follow these instructions](create-manage-projects.md#delete-a-project) to delete a project. When you delete a project, both the project and the metadata about discovered machines in the project are deleted.
## Added tools don't show
-Make sure you have the right project selected. In the Azure Migrate hub > **Servers** or in **Databases**, click on **Change** next to **Migrate project (Change)** in the top-right corner of the screen. Choose the correct subscription and project name > **OK**. The page should refresh with the added tools of the selected project.
+Make sure you have the right project selected. In the Azure Migrate hub, select **Servers, databases and web apps**. Select your project and subscription from the **Project** drop-down list. The page refreshes with the selected project and added tools.
## Next steps
mysql Quickstart Create Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/quickstart-create-arm-template.md
[!INCLUDE [azure-database-for-mysql-flexible-server-abstract](../includes/Azure-database-for-mysql-flexible-server-abstract.md)] ## Prerequisites
mysql Quickstart Create Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/quickstart-create-bicep.md
[!INCLUDE [azure-database-for-mysql-flexible-server-abstract](../includes/azure-database-for-mysql-flexible-server-abstract.md)] ## Prerequisites
mysql Sample Cli Audit Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/scripts/sample-cli-audit-logs.md
This sample CLI script enables [audit logs](../concepts-audit-logs.md) on an Azu
## Sample script ### Run the script
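At its core, enabling audit logging is a server parameter change; a minimal sketch with placeholder names follows, and the event class chosen is illustrative.

```azurecli
# Enable the audit log on the flexible server.
az mysql flexible-server parameter set --resource-group exampleRG \
    --server-name exampledemoserver --name audit_log_enabled --value ON

# Choose which event classes to capture (CONNECTION is one common choice).
az mysql flexible-server parameter set --resource-group exampleRG \
    --server-name exampledemoserver --name audit_log_events --value CONNECTION
```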
This sample CLI script enables [audit logs](../concepts-audit-logs.md) on an Azu
## Clean up resources ```azurecli az group delete --name $resourceGroup
mysql Sample Cli Change Server Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/scripts/sample-cli-change-server-parameters.md
This sample CLI script lists all available [server parameters](../concepts-serve
## Sample script ### Run the script
This sample CLI script lists all available [server parameters](../concepts-serve
## Clean up resources ```azurecli az group delete --name $resourceGroup
mysql Sample Cli Create Connect Private Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/scripts/sample-cli-create-connect-private-access.md
This sample CLI script creates an Azure Database for MySQL - Flexible Server in
## Sample script ### Run the script
Use the following steps to test connectivity to the MySQL server from the VM by
## Clean up resources ```azurecli az group delete --name $RESOURCE_GROUP
mysql Sample Cli Create Connect Public Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/scripts/sample-cli-create-connect-public-access.md
Once the script runs successfully, the MySQL Flexible Server will be accessible
## Sample script ### Run the script
Once the script runs successfully, the MySQL Flexible Server will be accessible
## Clean up resources ```azurecli az group delete --name $resourceGroup
mysql Sample Cli Monitor And Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/scripts/sample-cli-monitor-and-scale.md
This sample CLI script scales compute, storage and IOPS for a single Azure Datab
## Sample script ### Run the script
This sample CLI script scales compute, storage and IOPS for a single Azure Datab
## Clean up resources ```azurecli az group delete --name $resourceGroup
mysql Sample Cli Read Replicas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/scripts/sample-cli-read-replicas.md
This sample CLI script creates and manages [read replicas](../concepts-read-repl
## Sample script ### Run the script
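In sketch form, creating, listing, and removing a replica uses the `replica` subcommands; the server names are placeholders.

```azurecli
# Create a read replica of an existing flexible server.
az mysql flexible-server replica create --resource-group exampleRG \
    --source-server exampledemoserver --replica-name exampledemoserver-replica

# List the replicas of the source server, then delete the replica when done.
az mysql flexible-server replica list --resource-group exampleRG --name exampledemoserver
az mysql flexible-server delete --resource-group exampleRG --name exampledemoserver-replica --yes
```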
This sample CLI script creates and manages [read replicas](../concepts-read-repl
## Clean up resources ```azurecli az group delete --name $resourceGroup
mysql Sample Cli Restart Stop Start https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/scripts/sample-cli-restart-stop-start.md
Also, see [stop/start limitations](../concepts-limitations.md#stopstart-operatio
## Sample script ### Run the script
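In sketch form, the stop, start, and restart operations are one command each; the server and group names are placeholders.

```azurecli
# Stop the flexible server (compute is deallocated; storage is retained).
az mysql flexible-server stop --resource-group exampleRG --name exampledemoserver

# Start it again later, or restart it in place.
az mysql flexible-server start --resource-group exampleRG --name exampledemoserver
az mysql flexible-server restart --resource-group exampleRG --name exampledemoserver
```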
Also, see [stop/start limitations](../concepts-limitations.md#stopstart-operatio
## Clean up resources ```azurecli az group delete --name $resourceGroup
mysql Sample Cli Restore Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/scripts/sample-cli-restore-server.md
The new Flexible Server is created with the original server's configuration and
## Sample script ### Run the script
The new Flexible Server is created with the original server's configuration and
## Clean up resources ```azurecli az group delete --name $resourceGroup
mysql Sample Cli Same Zone Ha https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/scripts/sample-cli-same-zone-ha.md
Currently, Same-Zone high availability is supported only for the General purpose
## Sample script ### Run the script
Currently, Same-Zone high availability is supported only for the General purpose
## Clean up resources ```azurecli az group delete --name $resourceGroup
mysql Sample Cli Slow Query Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/scripts/sample-cli-slow-query-logs.md
This sample CLI script configures [slow query logs](../concepts-slow-query-logs.
## Sample script ### Run the script
This sample CLI script configures [slow query logs](../concepts-slow-query-logs.
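The configuration boils down to two parameter updates, sketched here with placeholder names:

```azurecli
# Enable the slow query log.
az mysql flexible-server parameter set \
    --resource-group myresourcegroup \
    --server-name mydemoserver \
    --name slow_query_log --value ON

# Log any query running longer than 15 seconds.
az mysql flexible-server parameter set \
    --resource-group myresourcegroup \
    --server-name mydemoserver \
    --name long_query_time --value 15
```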
## Clean up resources ```azurecli az group delete --name $resourceGroup
mysql Sample Cli Zone Redundant Ha https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/scripts/sample-cli-zone-redundant-ha.md
Currently, Zone-Redundant high availability is supported only for the General pu
## Sample script ### Run the script
Currently, Zone-Redundant high availability is supported only for the General pu
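A hedged sketch with placeholder names; the zone numbers are examples:

```azurecli
# Zone-redundant HA places the standby in a different availability zone.
az mysql flexible-server create \
    --resource-group myresourcegroup \
    --name mydemoserver \
    --tier GeneralPurpose \
    --sku-name Standard_D2ds_v4 \
    --high-availability ZoneRedundant \
    --zone 1 \
    --standby-zone 2
```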
## Clean up resources ```azurecli az group delete --name $resourceGroup
mysql Tutorial Logic Apps With Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/tutorial-logic-apps-with-mysql.md
[!INCLUDE[applies-to-mysql-flexible-server](../includes/applies-to-mysql-flexible-server.md)] This quickstart shows how to create an automated workflow using Azure Logic Apps with Azure Database for MySQL flexible server Connector (Preview).
mysql Migrate Single Flexible In Place Auto Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/migrate/migrate-single-flexible-in-place-auto-migration.md
Title: In-place automigration
description: This tutorial describes how to configure notifications, review migration details and FAQs for an Azure Database for MySQL Single Server instance scheduled for in-place automigration to Flexible Server. -+ Last updated 05/21/2024
Here's what you need to know after the in-place migration:
- For a Single Server instance with Query Store enabled, the server parameter 'slow_query_log' on the target instance is set to ON to ensure feature parity when migrating to Flexible Server. For certain workloads this setting could affect performance; if you observe any performance degradation, set the parameter to 'OFF' on the Flexible Server instance. - For a Single Server instance with Microsoft Defender for Cloud enabled, the enablement state is migrated. To achieve parity in Flexible Server post automigration for properties you can configure in Single Server, consider the details in the following table:
-| Property | Configuration |
+| **Property** | **Configuration** |
| | |
-| properties.disabledAlerts | You can disable specific alert types by using the Microsoft Defender for Cloud platform. For more information, see the article [Suppress alerts from Microsoft Defender for Cloud guide](../../defender-for-cloud/alerts-suppression-rules.md). |
-| properties.emailAccountAdmins, properties.emailAddresses | You can centrally define email notification for Microsoft Defender for Cloud Alerts for all resources in a subscription. For more information, see the article [Configure email notifications for security alerts](../../defender-for-cloud/configure-email-notifications.md). |
-| properties.retentionDays, properties.storageAccountAccessKey, properties.storageEndpoint | The Microsoft Defender for Cloud platform exposes alerts through Azure Resource Graph. You can export alerts to a different store and manage retention separately. For more about continuous export, see the article [Set up continuous export in the Azure portal - Microsoft Defender for Cloud](../../defender-for-cloud/continuous-export.md?tabs=azure-portal). |
+| Suppress specific alert types | Disable specific alert types with the Microsoft Defender for Cloud platform. For more information, visit [Suppress alerts from Microsoft Defender for Cloud guide](../../defender-for-cloud/alerts-suppression-rules.md). <br /><br /> Single Server users can use the API property: <br /> `properties.disabledAlerts` |
+| Email notifications | Define email notification for Microsoft Defender for Cloud Alerts for all resources in a subscription. For more information, visit [Configure email notifications for security alerts](../../defender-for-cloud/configure-email-notifications.md). <br /><br /> Single Server users can use the API properties: <br /> `properties.emailAccountAdmins`, <br /> `properties.emailAddresses` |
+| Export alerts for further processing and/or archiving | Alerts are stored in the Microsoft Defender for Cloud platform and exposed through the Azure Resource Graph. <br /> You can export alerts to a different store and manage retention separately. For more information, visit [Set up continuous export in the Azure portal - Microsoft Defender for Cloud](../../defender-for-cloud/continuous-export.md). <br /><br /> Single Server users can use the API properties: <br /> `properties.retentionDays`, <br /> `properties.storageAccountAccessKey`, <br /> `properties.storageEndpoint` |
## Frequently Asked Questions (FAQs)
mysql Whats Happening To Mysql Single Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/migrate/whats-happening-to-mysql-single-server.md
For more information on migrating from Single Server to Flexible Server using ot
> [!NOTE] > In-place auto-migration from Azure Database for MySQL – Single Server to Flexible Server is a service-initiated in-place migration during a planned maintenance window for select Single Server database workloads. The eligible servers are identified by the service and are sent an advance notification detailing steps to review migration details. If you own a Single Server workload with data storage used <= 100 GiB and no complex features (CMK, Microsoft Entra ID, Read Replica, Virtual Network, Double Infra encryption, Service endpoint/VNet Rules) enabled, you can now nominate yourself (if not already scheduled by the service) for auto-migration by submitting your server details through this [form](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR4lhLelkCklCuumNujnaQ-ZUQzRKSVBBV0VXTFRMSDFKSUtLUDlaNTA5Wi4u). For all other Single Server workloads, we recommend using the user-initiated migration tooling offered by Azure (Azure DMS or Azure Database for MySQL Import) to migrate. Learn more about in-place auto-migration [here](../migrate/migrate-single-flexible-in-place-auto-migration.md).
-## Pre-requisite checks and post-migration actions when migration from Single to Flexible Server
+## Prerequisite checks and post-migration actions when migrating from Single to Flexible Server
## What happens post sunset date (September 16, 2024)?
After the forced migration, you must reconfigure the features listed above on th
When you migrate from Azure Database for MySQL - Single Server to Flexible Server with Defender for Cloud enabled, the enablement state is preserved. To achieve parity in Flexible Server for properties you can configure in Single Server, consider the details in the following table.
-| Property | Configuration |
+| **Property** | **Configuration** |
| | |
-| properties.disabledAlerts | You can disable specific alert types by using the Microsoft Defender for Cloud platform. For more information, see the article [Suppress alerts from Microsoft Defender for Cloud guide](../../defender-for-cloud/alerts-suppression-rules.md). |
-| properties.emailAccountAdmins<br />properties.emailAddresses | You can centrally define email notification for Microsoft Defender for Cloud Alerts for all resources in a subscription. For more information, see the article [Configure email notifications for security alerts](../../defender-for-cloud/configure-email-notifications.md). |
-| properties.retentionDays<br />properties.storageAccountAccessKey<br />properties.storageEndpoint | The Microsoft Defender for Cloud platform exposes alerts through Azure Resource Graph. You can export alerts to a different store and manage retention separately. For more about continuous export, see the article [Set up continuous export in the Azure portal - Microsoft Defender for Cloud](../../defender-for-cloud/continuous-export.md). |
+| Suppress specific alert types | Disable specific alert types with the Microsoft Defender for Cloud platform. For more information, visit [Suppress alerts from Microsoft Defender for Cloud guide](../../defender-for-cloud/alerts-suppression-rules.md). <br /><br /> Single Server users can use the API property: <br /> `properties.disabledAlerts` |
+| Email notifications | Define email notification for Microsoft Defender for Cloud Alerts for all resources in a subscription. For more information, visit [Configure email notifications for security alerts](../../defender-for-cloud/configure-email-notifications.md). <br /><br /> Single Server users can use the API properties: <br /> `properties.emailAccountAdmins`, <br /> `properties.emailAddresses` |
+| Export alerts for further processing and/or archiving | Alerts are stored in the Microsoft Defender for Cloud platform and exposed through the Azure Resource Graph. <br /> You can export alerts to a different store and manage retention separately. For more information, visit [Set up continuous export in the Azure portal - Microsoft Defender for Cloud](../../defender-for-cloud/continuous-export.md). <br /><br /> Single Server users can use the API properties: <br /> `properties.retentionDays`, <br /> `properties.storageAccountAccessKey`, <br /> `properties.storageEndpoint` |
+
## Frequently Asked Questions (FAQs)
When you migrate from Azure Database for MySQL - Single Server to Flexible Serve
**Q. What happens to my existing Azure Database for MySQL single server instances?**
-**A.** Your existing Azure Database for MySQL single server workloads continue to function as before and will be officially supported until the sunset date. However, no new updates are released for Single Server and we strongly advise you to start migrating to Azure Database for MySQL Flexible Server at the earliest. Post the sunset date, your Single Server instance, along with its data files, will be [force-migrated](./whats-happening-to-mysql-single-server.md#forced-migration-post-sunset-date) to an appropriate Flexible Server instance in a phased manner.
+**A.** Your existing Azure Database for MySQL single server workloads continue to function as before and will be officially supported until the sunset date. However, no new updates are released for Single Server and we strongly advise you to start migrating to Azure Database for MySQL Flexible Server as soon as possible. Post the sunset date, your Single Server instance, along with its data files, will be [force-migrated](./whats-happening-to-mysql-single-server.md#forced-migration-post-sunset-date) to an appropriate Flexible Server instance in a phased manner.
**Q. Can I choose to continue running Single Server beyond the sunset date?**
-**A.** Unfortunately, we don't plan to support Single Server beyond the sunset date of **September 16, 2024**, and hence we strongly advise that you start planning your migration as soon as possible. Post the sunset date, your Single Server instance, along with its data files, will be force-migrated to an appropriate Flexible Server instance in a phased manner. This might lead to limited feature availability as certain advanced functionality can't be force-migrated without customer inputs to the Flexible Server instance. Read more about steps to reconfigure such features post force-migration to minimize the potential impact [here](./whats-happening-to-mysql-single-server.md#action-required-post-forced-migration). If your server is in a region where Azure Database for MySQL - Flexible Server isn't supported, then post the sunset date, your Single Server instance will be available with limited operations to access data and to be able to migrate to Flexible Server.
+**A.** Unfortunately, we don't plan to support Single Server beyond the sunset date of **September 16, 2024**, and hence we strongly advise that you start planning your migration as soon as possible. Post the sunset date, your Single Server instance, along with its data files, will be force-migrated to an appropriate Flexible Server instance in a phased manner. This might lead to limited feature availability as certain advanced functionality can't be force-migrated without customer inputs to the Flexible Server instance. Read more about steps to reconfigure such features post force-migration to minimize the potential impact [here](./whats-happening-to-mysql-single-server.md#action-required-post-forced-migration). If your server is in a region where Azure Database for MySQL - Flexible Server isn't supported, then post the sunset date, your Single Server instance is available with limited operations so that you can access your data and migrate to Flexible Server.
**Q. My single server is deployed in a region that doesn't support flexible server. What will happen to my server post sunset date?**
-**A.** If your server is in a region where Azure Database for MySQL - Flexible Server isn't supported, then post the sunset date, your Single Server instance will be available with limited operations to access data and to be able to migrate to Flexible Server. We strongly recommend that you use one of the following options to migrate before the sunset date to avoid any disruptions in business continuity:
+**A.** If your server is in a region where Azure Database for MySQL - Flexible Server isn't supported, then post the sunset date, your Single Server instance is available with limited operations so that you can access your data and migrate to Flexible Server. We strongly recommend that you use one of the following options to migrate before the sunset date to avoid any disruptions in business continuity:
- Use Azure DMS to perform a cross-region migration to Flexible Server in a suitable Azure region.
- Migrate to MySQL Server hosted on a VM in the region, if you're unable to change regions due to compliance issues.

**Q. Post sunset date, will there be any data loss for my Single Server?**
-**A.** No, there won't be any data loss incurred for your Single Server instance. Post the sunset date, your Single Server instance, along with its data files, will be force-migrated to an appropriate Flexible Server instance. If your server is in a region where Azure Database for MySQL - Flexible Server isn't supported, then post the sunset date, your Single Server instance will be available with limited operations to access data and to be able to migrate to Flexible Server in an appropriate region.
+**A.** No, there won't be any data loss for your Single Server instance. Post the sunset date, your Single Server instance, along with its data files, will be force-migrated to an appropriate Flexible Server instance. If your server is in a region where Azure Database for MySQL - Flexible Server isn't supported, then post the sunset date, your Single Server instance is available with limited operations so that you can access your data and migrate to Flexible Server in an appropriate region.
**Q. After the Single Server retirement announcement, what if I still need to create a new single server to meet my business needs?**
mysql Quickstart Create Mysql Server Database Using Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/quickstart-create-mysql-server-database-using-bicep.md
Last updated 12/01/2023
Azure Database for MySQL is a managed service that you use to run, manage, and scale highly available MySQL databases in the cloud. In this quickstart, you use Bicep to create an Azure Database for MySQL server with virtual network integration. You can create the server in the Azure portal, Azure CLI, or Azure PowerShell. ## Prerequisites
mysql Sample Change Server Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/scripts/sample-change-server-configuration.md
Last updated 02/10/2022
This sample CLI script lists all available configuration parameters as well as their allowable values for Azure Database for MySQL server, and sets the *innodb_lock_wait_timeout* to a value that is other than the default one. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)] ## Sample script ### Run the script
This sample CLI script lists all available configuration parameters as well as t
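Roughly, with placeholder names:

```azurecli
# List all configuration parameters for the single server.
az mysql server configuration list \
    --resource-group myresourcegroup \
    --server-name mydemoserver \
    --output table

# Set innodb_lock_wait_timeout to a non-default value.
az mysql server configuration set \
    --resource-group myresourcegroup \
    --server-name mydemoserver \
    --name innodb_lock_wait_timeout \
    --value 120
```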
## Clean up resources ```azurecli az group delete --name $resourceGroup
mysql Sample Create Server And Firewall Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/scripts/sample-create-server-and-firewall-rule.md
Last updated 02/10/2022
This sample CLI script creates an Azure Database for MySQL server and configures a server-level firewall rule. Once the script runs successfully, the MySQL server is accessible by all Azure services and the configured IP address. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)] ## Sample script ### Run the script
This sample CLI script creates an Azure Database for MySQL server and configures
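A minimal sketch with placeholder names, location, and credentials:

```azurecli
# Create a single server.
az mysql server create \
    --resource-group myresourcegroup \
    --name mydemoserver \
    --location westus \
    --admin-user mysqladmin \
    --admin-password <secure-password> \
    --sku-name GP_Gen5_2

# Allow a single client IP through the server-level firewall.
az mysql server firewall-rule create \
    --resource-group myresourcegroup \
    --server-name mydemoserver \
    --name AllowMyIP \
    --start-ip-address 203.0.113.10 \
    --end-ip-address 203.0.113.10
```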
## Clean up resources ```azurecli az group delete --name $resourceGroup
mysql Sample Point In Time Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/scripts/sample-point-in-time-restore.md
Last updated 02/10/2022
This sample CLI script restores a single Azure Database for MySQL server to a previous point in time. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)] ## Sample script ### Run the script
This sample CLI script restores a single Azure Database for MySQL server to a pr
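A sketch with a placeholder timestamp inside the backup retention window:

```azurecli
# Restore the server to a new server at a specific UTC point in time.
az mysql server restore \
    --resource-group myresourcegroup \
    --name mydemoserver-restored \
    --source-server mydemoserver \
    --restore-point-in-time "2024-06-15T13:10:00Z"
```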
## Clean up resources ```azurecli az group delete --name $resourceGroup
mysql Sample Scale Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/scripts/sample-scale-server.md
Last updated 02/10/2022
This sample CLI script scales compute and storage for a single Azure Database for MySQL server after querying the metrics. Compute can scale up or down. Storage can only scale up. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)] ## Sample script ### Run the script
This sample CLI script scales compute and storage for a single Azure Database fo
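A sketch with placeholder names; treat the `--storage-size` unit (megabytes for single server) as an assumption to verify:

```azurecli
# Check CPU metrics first (placeholder resource ID).
az monitor metrics list \
    --resource <server-resource-id> \
    --metric cpu_percent \
    --interval PT1M

# Scale compute up (or down) by changing the SKU.
az mysql server update \
    --resource-group myresourcegroup \
    --name mydemoserver \
    --sku-name GP_Gen5_4

# Scale storage up (storage can't be scaled down).
az mysql server update \
    --resource-group myresourcegroup \
    --name mydemoserver \
    --storage-size 102400
```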
## Clean up resources ```azurecli az group delete --name $resourceGroup
mysql Sample Server Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/scripts/sample-server-logs.md
Last updated 02/10/2022
This sample CLI script enables and downloads the slow query logs of a single Azure Database for MySQL server. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)] ## Sample script ### Run the script
This sample CLI script enables and downloads the slow query logs of a single Azu
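A sketch of the flow; the log file name shown is hypothetical and would come from the `list` output:

```azurecli
# Enable the slow query log on the single server.
az mysql server configuration set \
    --resource-group myresourcegroup \
    --server-name mydemoserver \
    --name slow_query_log --value ON

# List available log files, then download one by name.
az mysql server-logs list \
    --resource-group myresourcegroup \
    --server-name mydemoserver \
    --output table

az mysql server-logs download \
    --resource-group myresourcegroup \
    --server-name mydemoserver \
    --name <log-file-name-from-list-output>
```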
## Clean up resources ```azurecli az group delete --name $resourceGroup
mysql How To Configure Server Parameters Using Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-configure-server-parameters-using-powershell.md
To complete this how-to guide, you need:
If you choose to use PowerShell locally, connect to your Azure account using the [Connect-AzAccount](/powershell/module/az.accounts/Connect-AzAccount) cmdlet. ## List server configuration parameters for Azure Database for MySQL server
mysql How To Manage Vnet Using Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-manage-vnet-using-cli.md
Virtual Network (VNet) services endpoints and rules extend the private address space of a Virtual Network to your Azure Database for MySQL server. Using convenient Azure CLI commands, you can create, update, delete, list, and show VNet service endpoints and rules to manage your server. For an overview of Azure Database for MySQL VNet service endpoints, including limitations, see [Azure Database for MySQL Server VNet service endpoints](concepts-data-access-and-security-vnet.md). VNet service endpoints are available in all supported regions for Azure Database for MySQL. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
VNets and Azure service resources can be in the same or different subscriptions.
## Sample script ### Run the script
VNets and Azure service resources can be in the same or different subscriptions.
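The two key steps, sketched with placeholder names:

```azurecli
# Turn on the Microsoft.Sql service endpoint for the subnet.
az network vnet subnet update \
    --resource-group myresourcegroup \
    --vnet-name myVNet \
    --name mySubnet \
    --service-endpoints Microsoft.Sql

# Create a VNet rule on the MySQL server for that subnet.
az mysql server vnet-rule create \
    --resource-group myresourcegroup \
    --server-name mydemoserver \
    --name myVNetRule \
    --vnet-name myVNet \
    --subnet mySubnet
```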
## Clean up resources ```azurecli az group delete --name $resourceGroup
mysql How To Read Replicas Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-read-replicas-powershell.md
To complete this how-to guide, you need:
If you choose to use PowerShell locally, connect to your Azure account using the [Connect-AzAccount](/powershell/module/az.accounts/Connect-AzAccount) cmdlet. > [!IMPORTANT] > The read replica feature is only available for Azure Database for MySQL servers in the General
mysql How To Restart Server Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-restart-server-powershell.md
To complete this how-to guide, you need:
If you choose to use PowerShell locally, connect to your Azure account using the [Connect-AzAccount](/powershell/module/az.accounts/Connect-AzAccount) cmdlet. ## Restart the server
mysql How To Restore Server Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-restore-server-powershell.md
To complete this how-to guide, you need:
If you choose to use PowerShell locally, connect to your Azure account using the [Connect-AzAccount](/powershell/module/az.accounts/Connect-AzAccount) cmdlet. ## Set backup configuration
mysql Quickstart Create Mysql Server Database Using Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/quickstart-create-mysql-server-database-using-arm-template.md
Last updated 06/20/2022
Azure Database for MySQL is a managed service that you use to run, manage, and scale highly available MySQL databases in the cloud. In this quickstart, you use an Azure Resource Manager template (ARM template) to create an Azure Database for MySQL server with virtual network integration. You can create the server in the Azure portal, Azure CLI, or Azure PowerShell. If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template opens in the Azure portal.
mysql Quickstart Create Mysql Server Database Using Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/quickstart-create-mysql-server-database-using-azure-cli.md
This quickstart shows how to use the [Azure CLI](/cli/azure/get-started-with-azure-cli) commands in [Azure Cloud Shell](https://shell.azure.com) to create an Azure Database for MySQL server in five minutes. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
mysql Quickstart Create Mysql Server Database Using Azure Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/quickstart-create-mysql-server-database-using-azure-powershell.md
If this is your first time using the Azure Database for MySQL service, you must
Register-AzResourceProvider -ProviderNamespace Microsoft.DBforMySQL ``` If you have multiple Azure subscriptions, choose the appropriate subscription in which the resources should be billed. Select a specific subscription ID using the [Set-AzContext](/powershell/module/az.accounts/set-azcontext) cmdlet.
mysql Tutorial Design Database Using Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/tutorial-design-database-using-cli.md
Azure Database for MySQL is a relational database service in the Microsoft cloud
> * Update data > * Restore data [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
mysql Tutorial Design Database Using Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/tutorial-design-database-using-powershell.md
If this is your first time using the Azure Database for MySQL service, you must
Register-AzResourceProvider -ProviderNamespace Microsoft.DBforMySQL ``` If you have multiple Azure subscriptions, choose the appropriate subscription in which the resources should be billed. Select a specific subscription ID using the [Set-AzContext](/powershell/module/az.accounts/set-azcontext) cmdlet.
mysql Tutorial Provision Mysql Server Using Azure Resource Manager Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/tutorial-provision-mysql-server-using-azure-resource-manager-templates.md
If you are new to Azure Resource Manager templates and would like to try it, you
You may use the Azure Cloud Shell in the browser, or Install Azure CLI on your own computer to run the code blocks in this tutorial. ```azurecli-interactive az login
nat-gateway Nat Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/nat-gateway/nat-overview.md
A NAT gateway doesn't affect the network bandwidth of your compute resources. Le
* The subnet has a [system default route](/azure/virtual-network/virtual-networks-udr-overview#default) that routes traffic with destination 0.0.0.0/0 to the internet automatically. Once a NAT gateway is configured on the subnet, outbound communication from the virtual machines in the subnet to the internet prioritizes the public IP of the NAT gateway.
-* Presence of User Defined Routes (UDRs) for virtual appliances or a virtual network gateway (VPN Gateway and ExpressRoute) for a subnet's 0.0.0.0/0 traffic causes traffic to route to these services instead of NAT gateway.
+* When you create a user defined route (UDR) in your subnet route table for 0.0.0.0/0 traffic, the default internet path for this traffic is overridden. A UDR that sends 0.0.0.0/0 traffic to a virtual appliance or a virtual network gateway (VPN Gateway and ExpressRoute) as the next hop type instead overrides NAT gateway connectivity to the internet (see the sketch after this list).
* Outbound connectivity follows this order of precedence among different routing and outbound connectivity methods:
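As a sketch of such a route, the following UDR sends all 0.0.0.0/0 traffic to a virtual appliance, bypassing the NAT gateway; the names and next-hop IP are hypothetical:

```azurecli
# A route like this overrides the subnet's default internet path,
# so 0.0.0.0/0 traffic no longer egresses through the NAT gateway.
az network route-table route create \
    --resource-group myresourcegroup \
    --route-table-name myRouteTable \
    --name default-via-firewall \
    --address-prefix 0.0.0.0/0 \
    --next-hop-type VirtualAppliance \
    --next-hop-ip-address 10.0.1.4
```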
nat-gateway Quickstart Create Nat Gateway Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/nat-gateway/quickstart-create-nat-gateway-bicep.md
Get started with Azure NAT Gateway using Bicep. This Bicep file deploys a virtua
:::image type="content" source="./media/quickstart-create-nat-gateway-portal/nat-gateway-qs-resources.png" alt-text="Diagram of resources created in nat gateway quickstart." lightbox="./media/quickstart-create-nat-gateway-portal/nat-gateway-qs-resources.png"::: ## Prerequisites
nat-gateway Quickstart Create Nat Gateway Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/nat-gateway/quickstart-create-nat-gateway-cli.md
In this quickstart, learn how to create a NAT gateway by using the Azure CLI. Th
:::image type="content" source="./media/quickstart-create-nat-gateway-portal/nat-gateway-qs-resources.png" alt-text="Diagram of resources created in nat gateway quickstart." lightbox="./media/quickstart-create-nat-gateway-portal/nat-gateway-qs-resources.png"::: [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
az network public-ip create \
Use [az network bastion create](/cli/azure/network/bastion#az-network-bastion-create) to create the bastion host. ```azurecli-interactive az network bastion create \
nat-gateway Quickstart Create Nat Gateway Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/nat-gateway/quickstart-create-nat-gateway-portal.md
Sign in to the [Azure portal](https://portal.azure.com) with your Azure account.
[!INCLUDE [virtual-network-create-with-nat-bastion.md](../../includes/virtual-network-create-with-nat-bastion.md)] ## Test NAT gateway
In this section, you test the NAT gateway. You first discover the public IP of t
20.7.200.36 ``` ## Next steps
nat-gateway Quickstart Create Nat Gateway Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/nat-gateway/quickstart-create-nat-gateway-template.md
Get started with Azure NAT Gateway by using an Azure Resource Manager template (
:::image type="content" source="./media/quickstart-create-nat-gateway-portal/nat-gateway-qs-resources.png" alt-text="Diagram of resources created in nat gateway quickstart." lightbox="./media/quickstart-create-nat-gateway-portal/nat-gateway-qs-resources.png"::: If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template opens in the Azure portal.
nat-gateway Tutorial Hub Spoke Nat Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/nat-gateway/tutorial-hub-spoke-nat-firewall.md
The hub virtual network contains the firewall subnet that is associated with the
Azure Bastion uses your browser to connect to VMs in your virtual network over secure shell (SSH) or remote desktop protocol (RDP) by using their private IP addresses. The VMs don't need public IP addresses, client software, or special configuration. For more information about Azure Bastion, see [Azure Bastion](/azure/bastion/bastion-overview) >[!NOTE]
- >[!INCLUDE [Pricing](../../includes/bastion-pricing.md)]
+ >[!INCLUDE [Pricing](~/reusable-content/ce-skilling/azure/includes/bastion-pricing.md)]
1. Enter or select the following information in **Azure Bastion**:
Obtain the NAT gateway public IP address for verification of the steps later in
1. Close the Bastion connection to **vm-spoke**. ## Next steps
nat-gateway Tutorial Hub Spoke Route Nat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/nat-gateway/tutorial-hub-spoke-route-nat.md
The hub virtual network is the central network of the solution. The hub network
Azure Bastion uses your browser to connect to VMs in your virtual network over secure shell (SSH) or remote desktop protocol (RDP) by using their private IP addresses. The VMs don't need public IP addresses, client software, or special configuration. For more information about Azure Bastion, see [Azure Bastion](/azure/bastion/bastion-overview) >[!NOTE]
- >[!INCLUDE [Pricing](../../includes/bastion-pricing.md)]
+ >[!INCLUDE [Pricing](~/reusable-content/ce-skilling/azure/includes/bastion-pricing.md)]
1. Enter or select the following information in **Azure Bastion**:
Use Microsoft Edge to connect to the web server on **vm-spoke-1** you installed
1. Close the bastion connection to **vm-spoke-1**. ## Next steps
nat-gateway Tutorial Nat Gateway Load Balancer Internal Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/nat-gateway/tutorial-nat-gateway-load-balancer-internal-portal.md
In this section, you test the NAT gateway. You first discover the public IP of t
1. Close the bastion connection to **vm-1**. ## Next steps
nat-gateway Tutorial Nat Gateway Load Balancer Public Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/nat-gateway/tutorial-nat-gateway-load-balancer-public-portal.md
In this section, you test the NAT gateway. You first discover the public IP of t
1. Close the bastion connection to **vm-1**. ## Next steps
network-watcher Diagnose Vm Network Routing Problem Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/diagnose-vm-network-routing-problem-cli.md
In this article, you deploy a virtual machine (VM), and then check communications to an IP address and URL. You determine the cause of a communication failure and how you can resolve it. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
network-watcher Diagnose Vm Network Routing Problem Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/diagnose-vm-network-routing-problem-powershell.md
In this article, you deploy a virtual machine (VM), and then check communication
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. If you choose to install and use PowerShell locally, this article requires the Az PowerShell module. For more information, see [How to install Azure PowerShell](/powershell/azure/install-azure-powershell). To find the installed version, run `Get-InstalledModule -Name Az`. If you run PowerShell locally, sign in to Azure using the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet.
network-watcher Quickstart Configure Network Security Group Flow Logs From Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/quickstart-configure-network-security-group-flow-logs-from-arm-template.md
In this quickstart, you learn how to enable NSG flow logs using an Azure Resource Manager (ARM) template and Azure PowerShell. For more information, see [What is Azure Resource Manager?](../azure-resource-manager/management/overview.md) and [NSG flow logs overview](nsg-flow-logs-overview.md). We start with an overview of the properties of the NSG flow log object and provide sample templates. Then, we use a local Azure PowerShell instance to deploy the template.
network-watcher Quickstart Configure Network Security Group Flow Logs From Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/quickstart-configure-network-security-group-flow-logs-from-bicep.md
In this quickstart, you learn how to enable [NSG flow logs](nsg-flow-logs-overview.md) using a Bicep file. ## Prerequisites
networking Check Usage Against Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/check-usage-against-limits.md
In this article, you learn how to see the number of each network resource type t
## PowerShell You can run the commands that follow in the [Azure Cloud Shell](https://shell.azure.com/powershell), or by running PowerShell from your computer. The Azure Cloud Shell is a free interactive shell. It has common Azure tools preinstalled and configured to use with your account. If you run PowerShell from your computer, you need the Azure PowerShell module, version 1.0.0 or later. Run `Get-Module -ListAvailable Az` on your computer, to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Login-AzAccount` to log in to Azure.
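For comparison, the Azure CLI offers an equivalent check; the region name here is an example:

```azurecli
# Show current usage against limits for network resource types in a region.
az network list-usages --location eastus --output table
```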
networking Load Balancer Linux Cli Load Balance Multiple Websites Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/scripts/load-balancer-linux-cli-load-balance-multiple-websites-vm.md
This script sample creates a virtual network with two virtual machines (VM) that
[!INCLUDE [sample-cli-install](../../../includes/sample-cli-install.md)] ## Sample script
networking Load Balancer Windows Powershell Sample Nlb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/scripts/load-balancer-windows-powershell-sample-nlb.md
This script sample creates everything needed to run several Windows virtual mach
If needed, install the Azure PowerShell using the instruction found in the [Azure PowerShell guide](/powershell/azure/), and then run `Connect-AzAccount` to create a connection with Azure. ## Sample script [!code-powershell[main](../../../powershell_scripts/virtual-machine/create-vm-nlb/create-vm-nlb.ps1 "Quick Create VM")]
networking Traffic Manager Cli Websites High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/scripts/traffic-manager-cli-websites-high-availability.md
This script creates a resource group, two app service plans, two web apps, a tra
[!INCLUDE [sample-cli-install](../../../includes/sample-cli-install.md)] ## Sample script
networking Traffic Manager Powershell Websites High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/scripts/traffic-manager-powershell-websites-high-availability.md
This script creates a resource group, two app service plans, two web apps, a tra
If needed, install the Azure PowerShell using the instruction found in the [Azure PowerShell guide](/powershell/azure/), and then run `Connect-AzAccount` to create a connection with Azure. ## Sample script [!code-powershell[main](../../../powershell_scripts/traffic-manager/direct-traffic-for-increased-application-availability/direct-traffic-for-increased-application-availability.ps1 "Route traffic for high availability")]
networking Troubleshoot Failed State https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/troubleshoot-failed-state.md
This article helps you understand the meaning of various provisioning states for Microsoft.Network resources. You can effectively troubleshoot situations when the state is **Failed**. ## Provisioning states
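To inspect a resource's provisioning state with the Azure CLI, a sketch like the following works; the resource ID is a placeholder:

```azurecli
# Query the provisioning state of a Microsoft.Network resource.
az resource show \
    --ids /subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Network/publicIPAddresses/<name> \
    --query properties.provisioningState \
    --output tsv
```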
notification-hubs Create Notification Hub Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/create-notification-hub-bicep.md
Azure Notification Hubs provides an easy-to-use and scaled-out push engine that enables you to send notifications to any platform (iOS, Android, Windows, Kindle, etc.) from any backend (cloud or on-premises). For more information about the service, see [What is Azure Notification Hubs](notification-hubs-push-notification-overview.md). This quickstart uses Bicep to create an Azure Notification Hubs namespace, and a notification hub named **MyHub** within that namespace.
notification-hubs Create Notification Hub Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/create-notification-hub-template.md
Azure Notification Hubs provides an easy-to-use and scaled-out push engine that enables you to send notifications to any platform (iOS, Android, Windows, Kindle, etc.) from any backend (cloud or on-premises). For more information about the service, see [What is Azure Notification Hubs](notification-hubs-push-notification-overview.md). This quickstart uses an Azure Resource Manager template to create an Azure Notification Hubs namespace, and a notification hub named **MyHub** within that namespace.
notification-hubs Create Notification Hub Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/scripts/create-notification-hub-powershell.md
This sample PowerShell script creates a sample Azure notification hub. ## Prerequisites
object-anchors Get Started Hololens Directx https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/object-anchors/quickstarts/get-started-hololens-directx.md
You'll learn how to:
> * Create and side-load a HoloLens application > * Detect an object and visualize its model ## Prerequisites
object-anchors Get Started Model Conversion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/object-anchors/quickstarts/get-started-model-conversion.md
To complete this quickstart, make sure you have:
* <a href="https://git-scm.com" target="_blank">Git for Windows</a>. * The <a href="https://dotnet.microsoft.com/download/dotnet/6.0">.NET 6.0 SDK</a>. ## Create an Object Anchors account
openshift Howto Deploy Java Jboss Enterprise Application Platform App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-deploy-java-jboss-enterprise-application-platform-app.md
Title: "Quickstart: JBoss EAP on Azure Red Hat OpenShift"
-description: Shows you how to quickly stand up Red Hat JBoss EAP on Azure Red Hat OpenShift.
+description: Shows you how to quickly set up Red Hat JBoss EAP on Azure Red Hat OpenShift using the Azure portal.
Previously updated : 05/29/2024 Last updated : 06/26/2024
+#customer intent: As a developer, I want to learn how to deploy JBoss EAP on Azure Red Hat OpenShift quickly.
# Quickstart: Deploy JBoss EAP on Azure Red Hat OpenShift
-This article shows you how to quickly stand up JBoss Enterprise Application Platform (EAP) on Azure Red Hat OpenShift (ARO) using the Azure portal.
+This article shows you how to quickly set up JBoss Enterprise Application Platform (EAP) on Azure Red Hat OpenShift (ARO) using the Azure portal.
-This article uses the Azure Marketplace offer for JBoss EAP to accelerate your journey to ARO. The offer automatically provisions resources including an ARO cluster with a built-in OpenShift Container Registry (OCR), the JBoss EAP Operator, and optionally a container image including JBoss EAP and your application using Source-to-Image (S2I). To see the offer, visit the [Azure portal](https://aka.ms/eap-aro-portal). If you prefer manual step-by-step guidance for running JBoss EAP on ARO that doesn't utilize the automation enabled by the offer, see [Deploy a Java application with Red Hat JBoss Enterprise Application Platform (JBoss EAP) on an Azure Red Hat OpenShift 4 cluster](/azure/developer/java/ee/jboss-eap-on-aro).
+This article uses the Azure Marketplace offer for JBoss EAP to accelerate your journey to ARO. The offer automatically provisions resources including an ARO cluster with a built-in OpenShift Container Registry (OCR), the JBoss EAP Operator, and optionally a container image including JBoss EAP and your application using Source-to-Image (S2I). To see the offer, visit the [Azure portal](https://aka.ms/eap-aro-portal). If you prefer manual step-by-step guidance for running JBoss EAP on ARO that doesn't use the automation enabled by the offer, see [Deploy a Java application with Red Hat JBoss Enterprise Application Platform (JBoss EAP) on an Azure Red Hat OpenShift 4 cluster](/azure/developer/java/ee/jboss-eap-on-aro).
If you're interested in providing feedback or working closely on your migration scenarios with the engineering team developing JBoss EAP on Azure solutions, fill out this short [survey on JBoss EAP migration](https://aka.ms/jboss-on-azure-survey) and include your contact information. The team of program managers, architects, and engineers will promptly get in touch with you to initiate close collaboration. ## Prerequisites -- [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
- A Red Hat account with complete profile. If you don't have one, you can sign up for a free developer subscription through the [Red Hat Developer Subscription for Individuals](https://developers.redhat.com/register). - A local developer command line with a UNIX-like command environment - for example, Ubuntu, macOS, or Windows Subsystem for Linux - and Azure CLI installed. To learn how to install the Azure CLI, see [How to install the Azure CLI](/cli/azure/install-azure-cli). -- The `mysql` CLI. You can install the CLI by using the following commands:-
-```azurecli-interactive
-sudo apt update
-sudo apt install mysql-server
-```
-
-> [!NOTE]
-> You can also execute this guidance from the [Azure Cloud Shell](/azure/cloud-shell/quickstart). This approach has all the prerequisite tools pre-installed.
->
-> :::image type="icon" source="~/reusable-content/ce-skilling/azure/media/cloud-shell/launch-cloud-shell-button.png" alt-text="Button to launch the Azure Cloud Shell." border="false" link="https://shell.azure.com":::
--- Ensure the Azure identity you use to sign in has either the [Contributor](/azure/role-based-access-control/built-in-roles#contributor) role and the [User Access Administrator](/azure/role-based-access-control/built-in-roles#user-access-administrator) role or the [Owner](/azure/role-based-access-control/built-in-roles#owner) role in the current subscription. For an overview of Azure roles, see [What is Azure role-based access control (Azure RBAC)?](/azure/role-based-access-control/overview)-
-> [!NOTE]
-> Azure Red Hat OpenShift requires a minimum of 40 cores to create and run an OpenShift cluster. The default Azure resource quota for a new Azure subscription does not meet this requirement. To request an increase in your resource limit, see [Standard quota: Increase limits by VM series](/azure/azure-portal/supportability/per-vm-quota-requests). Note that the free trial subscription isn't eligible for a quota increase, [upgrade to a Pay-As-You-Go subscription](/azure/cost-management-billing/manage/upgrade-azure-subscription) before requesting a quota increase.
-
-## Get a Red Hat pull secret
-
-The Azure Marketplace offer used in this article requires a Red Hat pull secret. This section shows you how to get a Red Hat pull secret for Azure Red Hat OpenShift. To learn about what a Red Hat pull secret is and why you need it, see the [Get a Red Hat pull secret](create-cluster.md#get-a-red-hat-pull-secret-optional) section in [Create an Azure Red Hat OpenShift 4 cluster](/azure/openshift/create-cluster).
-
-Use the following steps to get the pull secret.
+ > [!NOTE]
+ > You can also execute this guidance from the [Azure Cloud Shell](../cloud-shell/get-started/classic.md). This approach has all the prerequisite tools pre-installed.
+ >
+ > :::image type="icon" source="~/reusable-content/ce-skilling/azure/media/cloud-shell/launch-cloud-shell-button.png" alt-text="Button to launch the Azure Cloud Shell." border="false" link="https://shell.azure.com":::
-1. Open the [Red Hat OpenShift Hybrid Cloud Console](https://console.redhat.com/openshift/install/azure/aro-provisioned), then use your Red Hat account to sign in to the OpenShift cluster manager portal. You may need to accept more terms and update your account as shown in the following screenshot. Use the same password as when you created the account.
-
- :::image type="content" source="media/howto-deploy-java-enterprise-application-platform-app/red-hat-account-complete-profile.png" alt-text="Screenshot of Red Hat Update Your Account page." lightbox="media/howto-deploy-java-enterprise-application-platform-app/red-hat-account-complete-profile.png":::
+- The `mysql` CLI. You can install the CLI by using the following commands:
-1. After you sign in, select **OpenShift** then **Downloads**.
-1. Select the **All categories** dropdown list and then select **Tokens**.
-1. Under **Pull secret**, select **Copy** or **Download** to get the value, as shown in the following screenshot.
+ ```azurecli-interactive
+ sudo apt update
+ sudo apt install mysql-server
+ ```
- :::image type="content" source="media/howto-deploy-java-enterprise-application-platform-app/red-hat-console-portal-pull-secret.png" alt-text="Screenshot of Red Hat console portal showing the pull secret." lightbox="media/howto-deploy-java-enterprise-application-platform-app/red-hat-console-portal-pull-secret.png":::
+- An Azure identity that you use to sign in that has either the [Contributor](../role-based-access-control/built-in-roles.md#contributor) role and the [User Access Administrator](../role-based-access-control/built-in-roles.md#user-access-administrator) role or the [Owner](../role-based-access-control/built-in-roles.md#owner) role in the current subscription. For an overview of Azure roles, see [What is Azure role-based access control (Azure RBAC)?](../role-based-access-control/overview.md)
- The following content is an example that was copied from the Red Hat console portal, with the auth codes replaced with `xxxx...xxx`.
- ```json
- {"auths":{"cloud.openshift.com":{"auth":"xxxx...xxx","email":"contoso-user@contoso.com"},"quay.io":{"auth":"xxx...xxx","email":"contoso-user@test.com"},"registry.connect.redhat.com":{"auth":"xxxx...xxx","email":"contoso-user@contoso.com"},"registry.redhat.io":{"auth":"xxxx...xxx","email":"contoso-user@contoso.com"}}}
- ```
-1. Save the secret to a file so you can use it later.
<a name='create-an-azure-active-directory-service-principal-from-the-azure-portal'></a>
Use the following steps to get the pull secret.
The Azure Marketplace offer used in this article requires a Microsoft Entra service principal to deploy your Azure Red Hat OpenShift cluster. The offer assigns the service principal with proper privileges during deployment time, with no role assignment needed. If you have a service principal ready to use, skip this section and move on to the next section, where you create a Red Hat Container Registry service account.
-Use the following steps to deploy a service principal and get its Application (client) ID and secret from the Azure portal. For more information, see [Create and use a service principal to deploy an Azure Red Hat OpenShift cluster](/azure/openshift/howto-create-service-principal?pivots=aro-azureportal).
+Use the following steps to deploy a service principal and get its Application (client) ID and secret from the Azure portal. For more information, see [Create and use a service principal to deploy an Azure Red Hat OpenShift cluster](howto-create-service-principal.md?pivots=aro-azureportal).
> [!NOTE]
-> You must have sufficient permissions to register an application with your Microsoft Entra tenant. If you run into a problem, check the required permissions to make sure your account can create the identity. For more information, see the [Permissions required for registering an app](/azure/active-directory/develop/howto-create-service-principal-portal#permissions-required-for-registering-an-app) section of [Use the portal to create a Microsoft Entra application and service principal that can access resources](/azure/active-directory/develop/howto-create-service-principal-portal).
+> You must have sufficient permissions to register an application with your Microsoft Entra tenant. If you run into a problem, check the required permissions to make sure your account can create the identity. For more information, see [Register a Microsoft Entra app and create a service principal](/entra/identity-platform/howto-create-service-principal-portal).
1. Sign in to your Azure account through the [Azure portal](https://portal.azure.com/). 1. Select **Microsoft Entra ID**.
Use the following steps to deploy a service principal and get its Application (c
1. Select **New registration**. 1. Name the application - for example, `jboss-eap-on-aro-app`. Select a supported account type, which determines who can use the application. After setting the values, select **Register**, as shown in the following screenshot. It takes several seconds to provision the application. Wait for the deployment to complete before proceeding.
- :::image type="content" source="media/howto-deploy-java-enterprise-application-platform-app/create-service-principal.png" alt-text="Screenshot of Azure portal showing the Register an application page." lightbox="media/howto-deploy-java-enterprise-application-platform-app/create-service-principal.png":::
+ :::image type="content" source="media/howto-deploy-java-jboss-enterprise-application-platform-app/create-service-principal.png" alt-text="Screenshot of the Azure portal that shows the Register an application page." lightbox="media/howto-deploy-java-jboss-enterprise-application-platform-app/create-service-principal.png":::
1. Save the Application (client) ID from the overview page, as shown in the following screenshot. Hover the pointer over the value, which is redacted in the screenshot, and select the copy icon that appears. The tooltip says **Copy to clipboard**. Be careful to copy the correct value, since the other values in that section also have copy icons. Save the Application ID to a file so you can use it later.
- :::image type="content" source="media/howto-deploy-java-enterprise-application-platform-app/obtain-service-principal-client-id.png" alt-text="Screenshot of Azure portal showing service principal client ID." lightbox="media/howto-deploy-java-enterprise-application-platform-app/obtain-service-principal-client-id.png":::
+ :::image type="content" source="media/howto-deploy-java-jboss-enterprise-application-platform-app/obtain-service-principal-client-id.png" alt-text="Screenshot of the Azure portal that shows the Overview page with the Application (client) ID highlighted." lightbox="media/howto-deploy-java-jboss-enterprise-application-platform-app/obtain-service-principal-client-id.png":::
1. Create a new client secret by following these steps:
Use the following steps to deploy a service principal and get its Application (c
You created your Microsoft Entra application, service principal, and client secret.
-## Create a Red Hat Container Registry service account
-
-Later, this article shows you how to manually deploy an application to OpenShift using Source-to-Image (S2I). A Red Hat Container Registry service account is necessary to pull the container image for JBoss EAP on which to run your application. If you have a Red Hat Container Registry service account ready to use, skip this section and move on to the next section, where you deploy the offer.
-
-Use the following steps to create a Red Hat Container Registry service account and get its username and password. For more information, see [Creating Registry Service Accounts](https://access.redhat.com/RegistryAuthentication#creating-registry-service-accounts-6) in the Red Hat documentation.
-
-1. Use your Red Hat account to sign in to the [Registry Service Account Management Application](https://access.redhat.com/terms-based-registry/).
-1. From the **Registry Service Accounts** page, select **New Service Account**.
-1. Provide a name for the Service Account. The name is prepended with a fixed, random string.
- - Enter a description.
- - Select **create**.
-1. Navigate back to your Service Accounts.
-1. Select the Service Account you created.
- - Note down the **username**, including the prepended string (that is, `XXXXXXX|username`). Use this username when you sign in to `registry.redhat.io`.
- - Note down the **password**. Use this password when you sign in to `registry.redhat.io`.
-
-You created your Red Hat Container Registry service account.
## Deploy JBoss EAP on Azure Red Hat OpenShift
The following steps show you how to find the offer and fill out the **Basics** p
1. In the search bar at the top of the Azure portal, enter *JBoss EAP*. In the search results, in the **Marketplace** section, select **JBoss EAP on Azure Red Hat OpenShift**, as shown in the following screenshot.
- :::image type="content" source="media/howto-deploy-java-enterprise-application-platform-app/marketplace-search-results.png" alt-text="Screenshot of Azure portal showing JBoss EAP on Azure Red Hat OpenShift in search results." lightbox="media/howto-deploy-java-enterprise-application-platform-app/marketplace-search-results.png":::
+ :::image type="content" source="media/howto-deploy-java-jboss-enterprise-application-platform-app/marketplace-search-results.png" alt-text="Screenshot of the Azure portal that shows JBoss EAP on Azure Red Hat OpenShift in search results." lightbox="media/howto-deploy-java-jboss-enterprise-application-platform-app/marketplace-search-results.png":::
You can also go directly to the [JBoss EAP on Azure Red Hat OpenShift offer](https://aka.ms/eap-aro-portal) on the Azure portal.
The following steps show you how to find the offer and fill out the **Basics** p
The following steps show you how to fill out the **ARO** pane shown in the following screenshot: 1. Under **Create a new cluster**, select **Yes**.
The following steps show you how to fill out the **ARO** pane shown in the follo
The following steps show you how to fill out the **EAP Application** pane shown in the following screenshot, and then start the deployment. 1. Leave the default option of **No** for **Deploy an application to OpenShift using Source-to-Image (S2I)?**.
The following steps show you how to fill out the **EAP Application** pane shown
1. Track the progress of the deployment on the **Deployment is in progress** page.
-Depending on network conditions and other activity in your selected region, the deployment may take up to 35 minutes to complete.
+Depending on network conditions and other activity in your selected region, the deployment might take up to 35 minutes to complete.
While you wait, you can set up the database.
Replace the placeholders with the following values, which are used throughout th
It's a good idea to save the fully filled out name/value pairs in a text file, in case the shell exits before you're done executing the commands. That way, you can paste them into a new instance of the shell and easily continue.
-These name/value pairs are essentially "secrets." For a production-ready way to secure Azure Red Hat OpenShift, including secret management, see [Security for the Azure Red Hat OpenShift landing zone accelerator](/azure/cloud-adoption-framework/scenarios/app-platform/azure-red-hat-openshift/security).
+These name/value pairs are essentially "secrets". For a production-ready way to secure Azure Red Hat OpenShift, including secret management, see [Security for the Azure Red Hat OpenShift landing zone accelerator](/azure/cloud-adoption-framework/scenarios/app-platform/azure-red-hat-openshift/security).
### Create and initialize the database
Next, use the following steps to create an Azure Database for MySQL - Flexible S
--yes ```
- This command may take ten or more minutes to complete. When the command successfully completes, you see output similar to the following example:
+ This command might take ten or more minutes to complete. When the command successfully completes, you see output similar to the following example:
```output {
If you navigated away from the **Deployment is in progress** page, the following
1. Scroll to the oldest entry in this list. This entry corresponds to the deployment you started in the preceding section. Select the oldest deployment, as shown in the following screenshot.
- :::image type="content" source="media/howto-deploy-java-enterprise-application-platform-app/deployments.png" alt-text="Screenshot of Azure portal showing JBoss EAP on Azure Red Hat OpenShift deployments with the oldest deployment highlighted." lightbox="media/howto-deploy-java-enterprise-application-platform-app/deployments.png":::
+ :::image type="content" source="media/howto-deploy-java-jboss-enterprise-application-platform-app/deployments.png" alt-text="Screenshot of the Azure portal that shows JBoss EAP on Azure Red Hat OpenShift deployments with the oldest deployment highlighted." lightbox="media/howto-deploy-java-jboss-enterprise-application-platform-app/deployments.png":::
1. In the navigation pane, select **Outputs**. This list shows the output values from the deployment, which includes some useful information.
If you navigated away from the **Deployment is in progress** page, the following
1. Paste the value from the **consoleUrl** field into an Internet-connected web browser, and then press <kbd>Enter</kbd>. Fill in the admin user name and password, then select **Log in**. In the admin console of Azure Red Hat OpenShift, select **Operators** > **Installed Operators**, where you can find that the **JBoss EAP** operator is successfully installed, as shown in the following screenshot.
- :::image type="content" source="media/howto-deploy-java-enterprise-application-platform-app/red-hat-openshift-cluster-console-portal-operators.png" alt-text="Screenshot of Red Hat OpenShift cluster console portal showing Installed operators page." lightbox="media/howto-deploy-java-enterprise-application-platform-app/red-hat-openshift-cluster-console-portal-operators.png":::
+ :::image type="content" source="media/howto-deploy-java-jboss-enterprise-application-platform-app/red-hat-openshift-cluster-console-portal-operators.png" alt-text="Screenshot of the Red Hat OpenShift cluster console portal that shows the Installed operators page." lightbox="media/howto-deploy-java-jboss-enterprise-application-platform-app/red-hat-openshift-cluster-console-portal-operators.png":::
Next, use the following steps to connect to the OpenShift cluster using the OpenShift CLI:
The steps in this section show you how to deploy an app on the cluster.
Use the following steps to deploy the app to the cluster. The app is hosted in the GitHub repo [rhel-jboss-templates/eap-coffee-app](https://github.com/Azure/rhel-jboss-templates/tree/main/eap-coffee-app).
-1. In the shell, run the following commands. The commands create a project, apply a permission to enable S2I to work, image the pull secret, and link the secret to the relative service accounts in the project to enable the image pull. Disregard the git warning about "'detached HEAD' state."
+1. In the shell, run the following commands. The commands create a project, apply a permission to enable S2I to work, create the image pull secret, and link the secret to the relevant service accounts in the project to enable the image pull. Disregard the Git warning about "'detached HEAD' state".
```azurecli-interactive git clone https://github.com/Azure/rhel-jboss-templates.git
Use the following steps to deploy the app to the cluster. The app is hosted in t
Because the next section uses HEREDOC format, it's best to include and execute it in its own code excerpt. ```azurecli-interactive- cat <<EOF | oc apply -f - apiVersion: v1 kind: Secret
Next, use the following steps to create a secret:
javaee-cafe-0 1/1 Running 0 30s ```
- It may take a few minutes to reach the proper state. You may even see `STATUS` column values including `ErrImagePull` and `ImagePullBackOff` before `Running` is shown.
+ It might take a few minutes to reach the proper state. You might even see `STATUS` column values including `ErrImagePull` and `ImagePullBackOff` before `Running` is shown.
1. Run the following command to return the URL of the application. You can use this URL to access the deployed sample app. Copy the output to the clipboard.
Next, use the following steps to create a secret:
1. Paste the output into an Internet-connected web browser, and then press <kbd>Enter</kbd>. You should see the UI of the **Java EE Cafe** app, similar to the following screenshot:
- :::image type="content" source="media/howto-deploy-java-enterprise-application-platform-app/javaee-cafe-ui.png" alt-text="Screenshot of Java EE Cafe app UI." lightbox="media/howto-deploy-java-enterprise-application-platform-app/javaee-cafe-ui.png":::
-
-1. Add and delete some rows to verify the database connectivity is correctly functioning.
+ :::image type="content" source="media/howto-deploy-java-jboss-enterprise-application-platform-app/javaee-cafe-ui.png" alt-text="Screenshot of the Java EE Cafe sample app UI." lightbox="media/howto-deploy-java-jboss-enterprise-application-platform-app/javaee-cafe-ui.png":::
-## Clean up resources
-
-If you're not going to continue to use the OpenShift cluster, navigate back to your working resource group. At the top of the page, under the text **Resource group**, select the resource group. Then, select **Delete resource group**.
+1. Add and delete some rows to verify that the database connectivity is functioning correctly.
-## Next steps
-For more information about deploying JBoss EAP on Azure, see [Red Hat JBoss EAP on Azure](/azure/developer/java/ee/jboss-on-azure).
openshift Howto Deploy Java Jboss Enterprise Application Platform With Auto Redeploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-deploy-java-jboss-enterprise-application-platform-with-auto-redeploy.md
+
+ Title: Auto-redeploy JBoss EAP with Source-to-Image
+titleExtension: Azure Red Hat OpenShift
+description: Shows you how to quickly set up JBoss EAP on Azure Red Hat OpenShift (ARO) using the Azure portal and deploy an app with the Source-to-Image (S2I) feature.
+++ Last updated : 06/26/2024+
+# customer intent: As a developer, I want to learn how to auto redeploy JBoss EAP on Azure Red Hat OpenShift using Source-to-Image (S2I) so that I can quickly deploy and update my application.
++
+# Quickstart: Auto-redeploy JBoss EAP on Azure Red Hat OpenShift with Source-to-Image (S2I)
+
+This article shows you how to quickly set up JBoss Enterprise Application Platform (EAP) on Azure Red Hat OpenShift (ARO) and deploy an app with the Source-to-Image (S2I) feature. The Source-to-Image feature enables you to build container images from source code without having to write Dockerfiles. The article uses a sample application that you can fork from GitHub and deploy to Azure Red Hat OpenShift. The article also shows you how to set up a webhook in GitHub to trigger a new build in OpenShift every time you push a change to the repository.
+
+This article uses the Azure Marketplace offer for JBoss EAP to accelerate your journey to ARO. The offer automatically provisions resources including an ARO cluster with a built-in OpenShift Container Registry (OCR), the JBoss EAP Operator, and optionally a container image including JBoss EAP and your application using Source-to-Image (S2I). To see the offer, visit the [Azure portal](https://aka.ms/eap-aro-portal). If you prefer manual step-by-step guidance for running JBoss EAP on ARO that doesn't use the automation enabled by the offer, see [Deploy a Java application with Red Hat JBoss Enterprise Application Platform (JBoss EAP) on an Azure Red Hat OpenShift 4 cluster](/azure/developer/java/ee/jboss-eap-on-aro).
+
+## Prerequisites
+
+- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
+
+- A Red Hat account with a complete profile. If you don't have one, you can sign up for a free developer subscription through the [Red Hat Developer Subscription for Individuals](https://developers.redhat.com/register).
+
+- A local developer command line with a UNIX-like command environment - for example, Ubuntu, macOS, or Windows Subsystem for Linux - and Azure CLI installed. To learn how to install the Azure CLI, see [How to install the Azure CLI](/cli/azure/install-azure-cli).
+
+- An Azure identity that you use to sign in that has either the [Contributor](../role-based-access-control/built-in-roles.md#contributor) role and the [User Access Administrator](../role-based-access-control/built-in-roles.md#user-access-administrator) role or the [Owner](../role-based-access-control/built-in-roles.md#owner) role in the current subscription. For an overview of Azure roles, see [What is Azure role-based access control (Azure RBAC)?](../role-based-access-control/overview.md)
++++
+## Create a Microsoft Entra service principal
+
+Use the following steps to create a service principal:
+
+1. Open the Azure portal and navigate to the Azure Cloud Shell.
+1. Create a service principal by using the following command:
+
+ ```azurecli
+ az ad sp create-for-rbac --name "sp-aro-s2i-$(date +%s)"
+ ```
+
+ This command produces output similar to the following example:
+
+ ```output
+ {
+ "appId": <app-ID>,
+ "displayName": <display-Name>,
+ "password": <password>,
+ "tenant": <tenant>
+ }
+ ```
+
+1. Copy the value of the `appId` and `password` fields. You use these values later in the deployment process.
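+
+Optionally, to reduce manual copying, the following sketch captures those values into shell variables when the service principal is created. This is an illustrative convenience rather than part of the offer's required steps; it assumes a Bash shell with the `jq` tool available, as in Azure Cloud Shell:
+
+```azurecli
+# Illustrative only: create the service principal and capture appId and password.
+SP_JSON=$(az ad sp create-for-rbac --name "sp-aro-s2i-$(date +%s)" --output json)
+SP_CLIENT_ID=$(echo "${SP_JSON}" | jq -r '.appId')
+SP_CLIENT_SECRET=$(echo "${SP_JSON}" | jq -r '.password')
+echo "Client ID: ${SP_CLIENT_ID}"
+```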
+
+## Fork the repository on GitHub
+
+Use the following steps to fork the sample repo:
+
+1. Open the repository <https://github.com/redhat-mw-demos/eap-on-aro-helloworld> in your browser.
+1. Fork the repository to your GitHub account.
+1. Copy the URL of the forked repository.
+
+## Deploy JBoss EAP on Azure Red Hat OpenShift
+
+This section shows you how to deploy JBoss EAP on Azure Red Hat OpenShift.
+
+Use the following steps to find the offer and fill out the **Basics** pane:
+
+1. In the search bar at the top of the Azure portal, enter *JBoss EAP*. In the search results, in the **Marketplace** section, select **JBoss EAP on Azure Red Hat OpenShift**, as shown in the following screenshot:
+
+ :::image type="content" source="media/howto-deploy-java-jboss-enterprise-application-platform-app/marketplace-search-results.png" alt-text="Screenshot of the Azure portal that shows JBoss EAP on Azure Red Hat OpenShift in search results." lightbox="media/howto-deploy-java-jboss-enterprise-application-platform-app/marketplace-search-results.png":::
+
+ You can also go directly to the [JBoss EAP on Azure Red Hat OpenShift offer](https://aka.ms/eap-aro-portal) on the Azure portal.
+
+1. On the offer page, select **Create**.
+
+1. On the **Basics** pane, ensure that the value shown in the **Subscription** field is the same one that has the roles listed in the prerequisites section.
+
+1. You must deploy the offer in an empty resource group. In the **Resource group** field, select **Create new** and fill in a value for the resource group. Because resource groups must be unique within a subscription, pick a unique name. An easy way to have unique names is to use a combination of your initials, today's date, and some identifier. For example, *eaparo033123rg*.
+
+1. Under **Instance details**, select the region for the deployment. For a list of Azure regions where OpenShift operates, see [Regions for Red Hat OpenShift 4.x on Azure](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=openshift&regions=all).
+
+1. Select **Next: ARO**.
+
+Use the following steps to fill out the **ARO** pane shown in the following screenshot:
++
+1. Under **Create a new cluster**, select **Yes**.
+
+1. Under **Provide information to create a new cluster**, for **Red Hat pull secret**, use the Red Hat pull secret that you obtained in the [Get a Red Hat pull secret](#get-a-red-hat-pull-secret) section. Use the same value for **Confirm secret**.
+
+1. For **Service principal client ID**, use the `appId` value that you obtained in the [Create a Microsoft Entra service principal](#create-a-microsoft-entra-service-principal) section.
+
+1. For **Service principal client secret**, use the `password` value that you obtained in the [Create a Microsoft Entra service principal](#create-a-microsoft-entra-service-principal) section. Use the same value for **Confirm secret**.
+
+1. Select **Next: EAP Application**.
+
+The following steps show you how to fill out the **EAP Application** pane shown in the following screenshot, and then start the deployment.
++
+1. Select **YES** for **Deploy an application to OpenShift using Source-to-Image (S2I)?**.
+1. For **Deploy your own application or a sample application?**, select **Your own application**.
+1. For **Application source code repository URL**, use the URL of the forked repository that you created in the [Fork repository from GitHub](#fork-the-repository-on-github) section.
+1. For **Red Hat Container Registry Service account username**, use the username of the Red Hat Container Registry service account that you created in the [Create a Red Hat Container Registry service account](#create-a-red-hat-container-registry-service-account) section.
+1. For **Red Hat Container Registry Service account password**, use the password of the Red Hat Container Registry service account that you created in the [Create a Red Hat Container Registry service account](#create-a-red-hat-container-registry-service-account) section.
+1. For **Confirm password**, use the same value as in the previous step.
+1. Leave other fields with default values.
+1. Select **Next: Review + create**.
+1. Select **Review + create**. Ensure that the green **Validation Passed** message appears at the top. If the message doesn't appear, fix any validation problems, and then select **Review + create** again.
+1. Select **Create**.
+1. Track the progress of the deployment on the **Deployment is in progress** page.
+
+Depending on network conditions and other activity in your selected region, the deployment might take up to 40 minutes to complete.
+
+## Verify the functionality of the deployment
+
+This section shows you how to verify that the deployment completed successfully.
+
+If you navigated away from the **Deployment is in progress** page, use the following steps to get back to that page. If you're still on the page that shows **Your deployment is complete**, you can skip to step 5.
+
+1. In the corner of any Azure portal page, select the hamburger menu and then select **Resource groups**.
+
+1. In the box with the text **Filter for any field**, enter the first few characters of the resource group you created previously. If you followed the recommended convention, enter your initials, then select the appropriate resource group.
+
+1. In the navigation pane, in the **Settings** section, select **Deployments**. You see an ordered list of the deployments to this resource group, with the most recent one first.
+
+1. Scroll to the oldest entry in this list. This entry corresponds to the deployment you started in the preceding section. Select the oldest deployment, as shown in the following screenshot.
+
+ :::image type="content" source="media/howto-deploy-java-jboss-enterprise-application-platform-app/deployments.png" alt-text="Screenshot of the Azure portal that shows JBoss EAP on Azure Red Hat OpenShift deployments with the oldest deployment highlighted." lightbox="media/howto-deploy-java-jboss-enterprise-application-platform-app/deployments.png":::
+
+1. In the navigation pane, select **Outputs**. This list shows the output values from the deployment, which includes some useful information like **cmdToGetKubeadminCredentials** and **consoleUrl**.
+
+ :::image type="content" source="media/howto-deploy-java-jboss-enterprise-application-platform-app/deployment-outputs.png" alt-text="Screenshot of the Azure portal that shows JBoss EAP on Azure Red Hat OpenShift deployment outputs." lightbox="media/howto-deploy-java-jboss-enterprise-application-platform-app/deployment-outputs.png":::
+
+1. Open the Azure Cloud Shell, paste the value from the **cmdToGetKubeadminCredentials** field, and execute it. You see the admin account and credentials for signing in to the OpenShift cluster console portal. The following example shows an admin account:
+
+ ```azurecli
+ az aro list-credentials -g eaparo033123rg -n aro-cluster
+ ```
+
+ This command produces output similar to the following example:
+
+ ```output
+ {
+ "kubeadminPassword": "xxxxx-xxxxx-xxxxx-xxxxx",
+ "kubeadminUsername": "kubeadmin"
+ }
+ ```
+
+1. Paste the value from the **consoleUrl** field into an Internet-connected web browser, and then press <kbd>Enter</kbd>.
+1. Fill in the admin user name and password, then select **Log in**.
+1. In the admin console of Azure Red Hat OpenShift, select **Operators** > **Installed Operators**, where you can find that the **JBoss EAP** operator is successfully installed, as shown in the following screenshot:
+
+ :::image type="content" source="media/howto-deploy-java-jboss-enterprise-application-platform-app/red-hat-openshift-cluster-console-portal-operators.png" alt-text="Screenshot of the Red Hat OpenShift cluster console portal that shows the Installed operators page." lightbox="media/howto-deploy-java-jboss-enterprise-application-platform-app/red-hat-openshift-cluster-console-portal-operators.png":::
+
+1. Paste the value from the **appEndpoint** field into an Internet-connected web browser, and then press <kbd>Enter</kbd>. You see the JBoss EAP application running on Azure Red Hat OpenShift, as shown in the following screenshot:
+
+ :::image type="content" source="media/howto-deploy-java-jboss-enterprise-application-platform-app/jboss-eap-application.png" alt-text="Screenshot of the JBoss EAP application running on Azure Red Hat OpenShift." lightbox="media/howto-deploy-java-jboss-enterprise-application-platform-app/jboss-eap-application.png":::
++
+## Set up webhooks with OpenShift
+
+This section shows you how to set up and use GitHub webhooks with OpenShift.
+
+### Get the GitHub webhook URL
+
+Use the following steps to get the webhook URL:
+
+1. Navigate to the **OpenShift Web Console** with the URL provided in the **consoleUrl** field.
+1. Navigate to **Builds** > **BuildConfigs** > **eap-app-build-artifacts**.
+1. Select **Copy URL with Secret**, as shown in the following screenshot:
+
+ :::image type="content" source="media/howto-deploy-java-jboss-enterprise-application-platform-app/github-webhook-url.png" alt-text="Screenshot of the Red Hat OpenShift cluster console portal BuildConfig details page with the Copy URL with Secret link highlighted." lightbox="media/howto-deploy-java-jboss-enterprise-application-platform-app/github-webhook-url.png":::
+
+### Configure GitHub webhooks
+
+Use the following steps to configure webhooks:
+
+1. Open the forked repository in your GitHub account.
+1. Navigate to the **Settings** tab.
+1. Navigate to the **Webhooks** tab.
+1. Select **Add webhook**.
+1. Paste the **URL with Secret** value into the **Payload URL** field.
+1. Change the **Content type** value to **application/json**.
+1. For **Which events would you like to trigger this webhook?**, select **Just the push event**.
+1. Select **Add webhook**.
+
+ :::image type="content" source="media/howto-deploy-java-jboss-enterprise-application-platform-app/github-webhook-settings.png" alt-text="Screenshot of GitHub that shows the Settings tab and Webhooks pane." lightbox="media/howto-deploy-java-jboss-enterprise-application-platform-app/github-webhook-settings.png":::
+
+From now on, every time you push a change to the repository, the webhook triggers a new build in OpenShift.
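+
+If you'd rather confirm the configuration from the command line, the following sketch uses the `oc` CLI. It assumes you're already signed in to the cluster with `oc login` and that `<project-name>` is a placeholder for the project the offer created:
+
+```azurecli
+# Illustrative only: inspect the BuildConfig to confirm its GitHub webhook trigger.
+oc describe bc/eap-app-build-artifacts -n <project-name>
+```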
+
+### Test the GitHub webhooks
+
+Use the following steps to test the webhooks:
+
+1. Select the **Code** tab in the forked repository.
+1. Navigate to the *src/main/webapp/index.html* file.
+1. With the file open on the screen, select the **Edit** button.
+1. Change line 38 from `<h1 class="display-4">JBoss EAP on Azure Red Hat OpenShift</h1>` to `<h1 class="display-4">JBoss EAP on Azure Red Hat OpenShift - Updated - 01 </h1>`.
+1. Select **Commit changes**.
+
+After you commit the changes, the webhook triggers a new build in OpenShift. From the OpenShift Web Console, navigate to **Builds** > **Builds** to see a new build in **Running** status.
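+
+You can also watch the build from the command line. The following sketch again assumes you're signed in with the `oc` CLI and that `<project-name>` is a placeholder:
+
+```azurecli
+# Illustrative only: list builds and watch status transitions (Running -> Complete).
+oc get builds -n <project-name> --watch
+```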
+
+### Verify the update
+
+Use the following steps to verify the update:
+
+1. After the build completes, navigate to **Builds** > **Builds** to see two new builds in **Complete** status.
+1. Open a new browser tab and navigate to the **appEndpoint** URL.
+
+ You should see the updated message on the screen.
+
+ :::image type="content" source="media/howto-deploy-java-jboss-enterprise-application-platform-app/jboss-eap-application-with-updated-info.png" alt-text="Screenshot of the JBoss EAP sample application with updated information." lightbox="media/howto-deploy-java-jboss-enterprise-application-platform-app/jboss-eap-application-with-updated-info.png":::
++
openshift Howto Deploy Java Liberty App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-deploy-java-liberty-app.md
If you're interested in providing feedback or working closely on your migration
## Prerequisites -- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
- A local machine with a Unix-like operating system installed (for example, Ubuntu, macOS, or Windows Subsystem for Linux). - The [Azure CLI](/cli/azure/install-azure-cli). If you're running on Windows or macOS, consider running Azure CLI in a Docker container. For more information, see [How to run the Azure CLI in a Docker container](/cli/azure/run-azure-cli-docker). - Sign in to the Azure CLI by using the [az login](/cli/azure/reference-index#az-login) command. To finish the authentication process, follow the steps displayed in your terminal. For other sign-in options, see [Sign in with the Azure CLI](/cli/azure/authenticate-azure-cli).
The Azure Marketplace offer you're going to use in this article requires a Micro
Use the following steps to deploy a service principal and get its Application (client) ID and secret from the Azure portal. For more information, see [Create and use a service principal to deploy an Azure Red Hat OpenShift cluster](/azure/openshift/howto-create-service-principal?pivots=aro-azureportal). > [!NOTE]
-> You must have sufficient permissions to register an application with your Microsoft Entra tenant. If you run into a problem, check the required permissions to make sure your account can create the identity. For more information, see the [Permissions required for registering an app](/azure/active-directory/develop/howto-create-service-principal-portal#permissions-required-for-registering-an-app) section of [Use the portal to create a Microsoft Entra application and service principal that can access resources](/azure/active-directory/develop/howto-create-service-principal-portal).
+> You must have sufficient permissions to register an application with your Microsoft Entra tenant. If you run into a problem, check the required permissions to make sure your account can create the identity. For more information, see [Register a Microsoft Entra app and create a service principal](/entra/identity-platform/howto-create-service-principal-portal).
1. Sign in to your Azure account through the [Azure portal](https://portal.azure.com/). 1. Select **Microsoft Entra ID**.
openshift Quickstart Openshift Arm Bicep Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/quickstart-openshift-arm-bicep-template.md
zone_pivot_groups: azure-red-hat-openshift
This quickstart describes how to use either Azure Resource Manager template (ARM template) or Bicep to create an Azure Red Hat OpenShift cluster. You can deploy the Azure Red Hat OpenShift cluster with either PowerShell or the Azure command-line interface (Azure CLI). Bicep is a domain-specific language (DSL) that uses declarative syntax to deploy Azure resources. In a Bicep file, you define the infrastructure you want to deploy to Azure, and then use that file throughout the development lifecycle to repeatedly deploy your infrastructure. Your resources are deployed in a consistent manner.
operator-nexus Howto Service Principal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-service-principal.md
+
+ Title: Azure Operator Nexus service principal best practices
+description: Guidance on how to properly use service principals in Operator Nexus.
+++ Last updated : 06/12/2024++++
+# Service principal best practices
+
+Service principals in Azure are identity entities that are used by applications, services, and automation tools to access specific Azure resources. They can be thought of as 'users' for applications, allowing these applications to interact with Azure services. Service principals provide and control permissions to Azure resources within your subscription, allowing you to specify exactly what actions an application can perform in your environment.
+
+For more information on how to create a service principal, see the [service principal fundamentals](/entra/architecture/service-accounts-principal) documentation.
+
+## Service principals in Operator Nexus
+
+Operator Nexus uses a single customer-provided service principal to facilitate connectivity between Azure and the on-premises cluster.
+
+## Creating a service principal
+
+For information on how to create a service principal, see [how to create a service principal](../active-directory/develop/howto-create-service-principal-portal.md).
+
+## Rotating a service principal
+
+For information on how to rotate a service principal, see [how to rotate a service principal](../operator-nexus/howto-service-principal-rotation.md).
+
+## Best practices
+
+The following is a high-level list of recommended security considerations to take into account when managing a new service principal.
+
+- **Least Privilege**: Assign the minimum permissions necessary for the service principal to perform its function. Avoid assigning broad permissions if they aren't needed.
+- **Lifecycle Management**: Regularly review and update service principals. Remove or disable them when not required.
+- **Use Managed Identities**: Where possible, use Azure Managed Identities instead of creating and managing service principals manually.
+- **Secure Secrets**: If a service principal uses a password (client secret), ensure credentials are stored securely. Consider using Azure Key Vault.
+- **Monitor Activity**: Use Azure Monitor and Azure Log Analytics to track the activities of your service principals.
+- **Rotation of Secrets**: Regularly rotate the service principal's secrets; the maximum recommended secret lifetime is 180 days. A rotation sketch follows this list.
+- **Use Azure Policy**: Implement Azure policies to audit and enforce best practices for service principals.
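+
+As a minimal sketch of the rotation practice noted above, assuming you use the Azure CLI and that `<app-ID>` is a placeholder for the service principal's application (client) ID, you can generate a new client secret:
+
+```azurecli
+# Illustrative only: reset (rotate) the client secret of an existing service principal.
+az ad sp credential reset --id <app-ID>
+```
+
+After the reset, update any consumers of the old secret with the new value.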
operator-nexus List Of Metrics Collected https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/list-of-metrics-collected.md
The collection interval for Network Fabric device metrics varies and you can fin
| TemperatureInstant | Temperature Instantaneous | Resource Utilization | NA | NA | The instantaneous value of temperature in degrees Celsius of the component. | NA | Yes | Per minute | | PowerSupplyInputCurrent | Power Supply Input Current | Resource Utilization | Amps | Average | The input current draw of the power supply | NA | Yes | Per minute | | PowerSupplyInputVoltage | Power Supply Input Voltage | Resource Utilization | Volts | Average | The input voltage of the power supply | NA | Yes | Per minute |
-| PowerSupplyMaximumPowerCapacity | Power Supply Maximum Power Capacity | Resource Utilization | Watts | Average | Maximum power capacity of the power supply | NA | Yes | Per minute |
+| PowerSupplyCapacity | Power Supply Maximum Power Capacity | Resource Utilization | Watts | Average | Maximum power capacity of the power supply | NA | Yes | Per minute |
| PowerSupplyOutputCurrent | Power Supply Output Current | Resource Utilization | Amps | Average | The output current supplied by the power supply | NA | Yes | Per minute | | PowerSupplyOutputPower| Power Supply Output Power | Resource Utilization | Watts | Average | The output power supplied by the power supply | NA | Yes | Per minute | | PowerSupplyOutputVoltage | Power Supply Output Voltage | Resource Utilization | Volts | Average | The output voltage supplied the power supply | NA | Yes | Per minute |
The collection interval for Network Fabric device metrics varies and you can fin
| LacpUnknownErrors | LACP Unknown Errors | LACP State Counters | Count | Average | The count of LACPDU packets with unknown errors over a given interval of time | Interface name | Yes | Every 5 mins | | LldpFrameIn | LLDP Frame In | LLDP State Counters | Count | Average | The count of LLDP frames received by an interface over a given interval of time | Interface name | Yes | Every 5 mins | | LldpFrameOut | LLDP Frame Out | LLDP State Counters | Count | Average | The count of LLDP frames transmitted from an interface over a given interval of time | Interface name | Yes | Every 5 mins |
-| LldpTlvUnknown | LLDP Tlv Unknown | LLDP State Counters | Count | Average | The count of LLDP frames received with unknown TLV by an interface over a given interval of time | Interface name | Yes | Every 5 mins |
+| LldpTlvUnknown | LLDP Tlv Unknown | LLDP State Counters | Count | Average | The count of LLDP frames received with unknown TLV by an interface over a given interval of time | Interface name | Yes | Every 5 mins |
operator-nexus Quickstarts Kubernetes Cluster Deployment Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/quickstarts-kubernetes-cluster-deployment-arm.md
Last updated 05/14/2023
This quickstart describes how to use an Azure Resource Manager template (ARM template) to create Azure Nexus Kubernetes cluster. ## Prerequisites
operator-nexus Quickstarts Kubernetes Cluster Deployment Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/quickstarts-kubernetes-cluster-deployment-bicep.md
Last updated 05/13/2023
* Deploy an Azure Nexus Kubernetes cluster using Bicep. ## Prerequisites
oracle Database Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/oracle/oracle-db/database-overview.md
Oracle Database@Azure is available in the following locations. Oracle Database@A
|Azure region|Oracle Exadata Database@Azure|Oracle Autonomous Database@Azure| |-|:-:|:--:| |East US (Virginia)|&check; | &check;|
-|Germany West Central (Frankfurt)| &check;| |
+|Germany West Central (Frankfurt)| &check;|&check; |
|France Central (Paris)|&check; | |
-|UK South (London)|&check; | |
-|Australia East (Sydney)| &check;| |
+|UK South (London)|&check; |&check; |
+ ## Azure Support scope and contact information
partner-solutions Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/apache-kafka-confluent-cloud/create-powershell.md
After you've selected the offer for Apache Kafka & Apache Flink on Confluent Clo
Start by preparing your environment for Azure PowerShell: > [!IMPORTANT] > While the **Az.Confluent** PowerShell module is in preview, you must install it separately using the `Install-Module` cmdlet.
payment-hsm Create Different Ip Addresses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/payment-hsm/create-different-ip-addresses.md
This tutorial describes how to use an Azure Resource Manager template (ARM templ
- [Create a payment HSM with the host and management port in different virtual networks using an ARM template](create-different-vnet-template.md) - [Create HSM resource with host and management port with IP addresses in different virtual networks using ARM template](create-different-ip-addresses.md) ## Prerequisites
payment-hsm Create Different Vnet Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/payment-hsm/create-different-vnet-template.md
This tutorial describes how to create a payment HSM with static host and managem
- [Create HSM resource with host and management port with IP addresses in different virtual networks using ARM template](create-different-ip-addresses.md) ## Prerequisites
payment-hsm Create Different Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/payment-hsm/create-different-vnet.md
This tutorial describes how to create a payment HSM with the host and management
- [Create a payment HSM with the host and management port in different virtual networks using an ARM template](create-different-vnet-template.md) - [Create HSM resource with host and management port with IP addresses in different virtual networks using ARM template](create-different-ip-addresses.md) ## Prerequisites
payment-hsm Create Payment Hsm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/payment-hsm/create-payment-hsm.md
Get-AzProviderFeature -FeatureName "FastPathEnabled" -ProviderNamespace Microsof
- You must have an Azure subscription. You can [create a free account](https://azure.microsoft.com/free/) if you don't have one. - You must install the Az.DedicatedHsm PowerShell module:
payment-hsm Quickstart Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/payment-hsm/quickstart-powershell.md
You can continue with this quick start if the "RegistrationState" of both comman
Set-AzContext -Subscription "<subscription-id>" ``` - You must install the Az.DedicatedHsm PowerShell module:
payment-hsm Quickstart Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/payment-hsm/quickstart-template.md
This quickstart describes how to create a payment HSM with the host and manageme
- [Create a payment HSM with host and management port in different virtual network using an ARM template](create-different-vnet.md) - [Create HSM resource with host and management port with IP addresses in different virtual networks using ARM template](create-different-ip-addresses.md) ## Prerequisites
postgresql Concepts Azure Advisor Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-azure-advisor-recommendations.md
description: Learn about Azure Advisor recommendations for Azure Database for Po
Previously updated : 04/27/2024 Last updated : 06/14/2024
Learn about how Azure Advisor is applied to Azure Database for PostgreSQL flexible server and get answers to common questions. ## What is Azure Advisor for PostgreSQL? The Azure Advisor system uses telemetry to issue performance and reliability recommendations for your Azure Database for PostgreSQL flexible server database.
-Advisor recommendations are split among our Azure Database for PostgreSQL flexible server database offerings:
-* Azure Database for PostgreSQL single server
-* Azure Database for PostgreSQL flexible server
Some recommendations are common to multiple product offerings, while other recommendations are based on product-specific optimizations. ## Where can I view my recommendations?
Recommendations are available from the **Overview** navigation sidebar in the Az
:::image type="content" source="../media/concepts-azure-advisor-recommendations/advisor-example.png" alt-text="Screenshot of the Azure portal showing an Azure Advisor recommendation."::: ## Recommendation types
-Azure Database for PostgreSQL flexible server prioritizes the following types of recommendations:
-* **Performance**: To improve the speed of your Azure Database for PostgreSQL flexible server instance. This includes CPU usage, memory pressure, connection pooling, disk utilization, and product-specific server parameters. For more information, see [Advisor Performance recommendations](../../advisor/advisor-reference-performance-recommendations.md).
-* **Reliability**: To ensure and improve the continuity of your business-critical databases. This includes storage limits, and connection limits. For more information, see [Advisor Reliability recommendations](../../advisor/advisor-reference-reliability-recommendations.md).
-* **Cost**: To optimize and reduce your overall Azure spending. This includes server right-sizing recommendations. For more information, see [Advisor Cost recommendations](../../advisor/advisor-reference-cost-recommendations.md).
+Azure Database for PostgreSQL flexible server prioritizes the following type of recommendation:
+* **Performance**: To enhance the performance of your Azure Database for PostgreSQL flexible server instance, the recommendations proactively identify servers experiencing scenarios that can impact performance. These scenarios include high CPU utilization, frequent checkpoint initiations, performance-impacting log parameter settings, inactive logical replication slots, long-running transactions, orphaned prepared transactions, a high bloat ratio, and transaction wraparound risks. For more information, see [Advisor Performance recommendations](../../advisor/advisor-performance-recommendations.md).
## Understanding your recommendations
-* **Daily schedule**: For Azure Database for PostgreSQL flexible server databases, we check server telemetry and issue recommendations on a twice a day schedule. If you make a change to your server configuration, existing recommendations will remain visible until we re-examine telemetry at either 7PM or 7AM according to PST.
-* **Performance history**: Some of our recommendations are based on performance history. These recommendations will only appear after a server has been operating with the same configuration for 7 days. This allows us to detect patterns of heavy usage (e.g. high CPU activity or high connection volume) over a sustained time period. If you provision a new server or change to a new vCore configuration, these recommendations are paused temporarily. This prevents legacy telemetry from triggering recommendations on a newly reconfigured server. However, this also means that performance history-based recommendations may not be identified immediately.
+* **Daily schedule**: For Azure Database for PostgreSQL flexible server databases, we review server telemetry and issue recommendations daily. If you make changes to your server configuration, the existing recommendations will remain visible until we re-evaluate the recommendation the following day, approximately 24 hours later.
+* **Performance history**: Some of our recommendations are based on performance history. These recommendations will only appear after a server has been operating with the same configuration for 7 days. This allows us to detect patterns of heavy usage (e.g., high CPU activity or high connection volume) over a sustained period. If you provision a new server or change to a new vCore configuration, these recommendations are paused temporarily. This prevents legacy telemetry from triggering recommendations on a newly reconfigured server. However, this also means that performance history-based recommendations might not be identified immediately.
## Next steps For more information, see [Azure Advisor Overview](../../advisor/advisor-overview.md).
postgresql Concepts Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-backup-restore.md
All backups required to perform a PITR within the backup retention period are re
Azure Database for PostgreSQL flexible server provides up to 100 percent of your provisioned server storage as backup storage at no extra cost. Any additional backup storage that you use is charged in gigabytes per month.
-For example, if you have provision a server with 250 gibibytes (GiB) of storage, then you have 250 GiB of backup storage capacity at no additional charge. If the daily backup usage is 25 GiB, then you can have up to 10 days of free backup storage. Backup storage consumption that exceeds 250 GiB is charged as defined in the [pricing model](https://azure.microsoft.com/pricing/details/postgresql/flexible-server/).
+For example, if you have provisioned a server with 250 gibibytes (GiB) of storage, then you have 250 GiB of backup storage capacity at no additional charge. If the daily backup usage is 25 GiB, then you can have up to 10 days of free backup storage. Backup storage consumption that exceeds 250 GiB is charged as defined in the [pricing model](https://azure.microsoft.com/pricing/details/postgresql/flexible-server/).
If you configured your server with geo-redundant backup, the backup data is also copied to the Azure paired region. So, your backup size will be twice the size of the local backup copy. Billing is calculated as *( (2 x local backup size) - provisioned storage size ) x price @ gigabytes per month*. For example, with 250 GiB of provisioned storage and 150 GiB of local backup usage, geo-redundant backup consumes 2 x 150 GiB = 300 GiB, so the 50 GiB that exceeds the 250 GiB of provisioned storage is billed.
Azure Backup and Azure Database for PostgreSQL flexible server services have bui
- Backups are stored in separate security and fault domains. If the source server or subscription is compromised, the backups remain safe in the Backup vault (in Azure Backup managed storage accounts). - Using pg_dump allows greater flexibility in restoring data across different database versions. - Azure backup vaults support immutability and soft delete (preview) features, protecting your data.
+- Long-term retention (LTR) backup support for servers that have customer-managed keys (CMK) enabled
#### Limitations and considerations
postgresql Concepts Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-extensions.md
description: Learn about the available PostgreSQL extensions in Azure Database f
Previously updated : 05/8/2024 Last updated : 06/27/2024
Before installing extensions in Azure Database for PostgreSQL flexible server, y
Using the [Azure portal](https://portal.azure.com): 1. Select your Azure Database for PostgreSQL flexible server instance.
- 1. On the sidebar, select **Server Parameters**.
+ 1. From the resource menu, under the **Settings** section, select **Server parameters**.
1. Search for the `azure.extensions` parameter.
- 1. Select extensions you wish to allowlist.
+ 1. Select the extensions you wish to allowlist.
:::image type="content" source="./media/concepts-extensions/allow-list.png" alt-text="Screenshot showing Azure Database for PostgreSQL flexible server - allow-listing extensions for installation." lightbox="./media/concepts-extensions/allow-list.png"::: Using [Azure CLI](/cli/azure/):
Using [Azure CLI](/cli/azure/):
You can allowlist extensions via CLI parameter set [command](/cli/azure/postgres/flexible-server/parameter?view=azure-cli-latest&preserve-view=true). ```azurecli
-az postgres flexible-server parameter set --resource-group <your resource group> --server-name <your server name> --subscription <your subscription id> --name azure.extensions --value <extension name>,<extension name>
+az postgres flexible-server parameter set --resource-group <resource_group> --server-name <server> --subscription <subscription_id> --name azure.extensions --value <extension_name>,<extension_name>
``` Using [ARM Template](../../azure-resource-manager/templates/index.yml):
- Example shown below allowlists extensions dblink, dict_xsyn, pg_buffercache on the server mypostgreserver
+ The following example allowlists the extensions `dblink`, `dict_xsyn`, and `pg_buffercache` on a server named `postgres-test-server`:
```json {
az postgres flexible-server parameter set --resource-group <your resource group>
"contentVersion": "1.0.0.0", "parameters": { "flexibleServers_name": {
- "defaultValue": "mypostgreserver",
+ "defaultValue": "postgres-test-server",
"type": "String" }, "azure_extensions_set_value": {
az postgres flexible-server parameter set --resource-group <your resource group>
} ```
-`shared_preload_libraries` is a server configuration parameter determining which libraries are to be loaded when Azure Database for PostgreSQL flexible server starts. Any libraries, which use shared memory must be loaded via this parameter. If your extension needs to be added to shared preload libraries this action can be done:
+`shared_preload_libraries` is a server configuration parameter that determines which libraries have to be loaded when Azure Database for PostgreSQL flexible server starts. Any libraries that use shared memory must be loaded via this parameter. If your extension needs to be added to shared preload libraries, follow these steps:
Using the [Azure portal](https://portal.azure.com): 1. Select your Azure Database for PostgreSQL flexible server instance.
- 1. On the sidebar, select **Server Parameters**.
+ 1. From the resource menu, under the **Settings** section, select **Server parameters**.
1. Search for the `shared_preload_libraries` parameter.
- 1. Select extensions you wish to add.
+ 1. Select the libraries you wish to add.
:::image type="content" source="./media/concepts-extensions/shared-libraries.png" alt-text="Screenshot showing Azure Database for PostgreSQL -setting shared preload libraries parameter setting for extensions installation." lightbox="./media/concepts-extensions/shared-libraries.png"::: Using [Azure CLI](/cli/azure/):
- You can set `shared_preload_libraries` via CLI parameter set [command](/cli/azure/postgres/flexible-server/parameter?view=azure-cli-latest&preserve-view=true).
+ You can set `shared_preload_libraries` via CLI [parameter set](/cli/azure/postgres/flexible-server/parameter?view=azure-cli-latest&preserve-view=true) command.
```azurecli
-az postgres flexible-server parameter set --resource-group <your resource group> --server-name <your server name> --subscription <your subscription id> --name shared_preload_libraries --value <extension name>,<extension name>
+az postgres flexible-server parameter set --resource-group <resource_group> --server-name <server> --subscription <subscription_id> --name shared_preload_libraries --value <extension_name>,<extension_name>
```
-After extensions are allow-listed and loaded, these must be installed in your database before you can use them. To install a particular extension, you should run the [CREATE EXTENSION](https://www.postgresql.org/docs/current/sql-createextension.html) command. This command loads the packaged objects into your database.
+After extensions are allowlisted and loaded, they must be installed in each database on which you plan to use them. To install a particular extension, you should run the [CREATE EXTENSION](https://www.postgresql.org/docs/current/sql-createextension.html) command. This command loads the packaged objects into your database.
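+
+For example, assuming the `dblink` extension is already allowlisted, the following statement installs it in the current database:
+
+```sql
+CREATE EXTENSION IF NOT EXISTS dblink;
+```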
> [!NOTE] > Third party extensions offered in Azure Database for PostgreSQL flexible server are open source licensed code. Currently, we don't offer any third party extensions or extension versions with premium or proprietary licensing models.
-Azure Database for PostgreSQL flexible server instance supports a subset of key PostgreSQL extensions as listed below. This information is also available by running `SHOW azure.extensions;`. Extensions not listed in this document aren't supported on Azure Database for PostgreSQL flexible server. You can't create or load your own extension in Azure Database for PostgreSQL flexible server.
+Azure Database for PostgreSQL flexible server instance supports a subset of key PostgreSQL extensions as listed in the following table. This information is also available by running `SHOW azure.extensions;`. Extensions not listed in this document aren't supported on Azure Database for PostgreSQL flexible server. You can't create or load your own extension in Azure Database for PostgreSQL flexible server.
## Extension versions
In-place upgrades of database extensions are allowed through a simple command. T
To update an installed extension to the latest available version supported by Azure, use the following SQL command: ```sql
-ALTER EXTENSION <extension-name> UPDATE;
+ALTER EXTENSION <extension_name> UPDATE;
``` This command simplifies the management of database extensions by allowing users to manually upgrade to the latest version approved by Azure, enhancing both compatibility and security. ### Limitations While updating extensions is straightforward, there are certain limitations:-- **Specific Version Selection**: The command does not support updating to intermediate versions of an extension. It will always update to the [latest available version](#extension-versions).
+- **Selection of a specific version**: The command does not support updating to intermediate versions of an extension. It always updates to the [latest available version](#extension-versions).
- **Downgrading**: Does not support downgrading an extension to a previous version. If a downgrade is necessary, it might require support assistance and depends on the availability of previous version.
-#### Viewing Installed Extensions
+#### Installed extensions
To list the extensions currently installed on your database, use the following SQL command: ```sql SELECT * FROM pg_extension; ```
-#### Available Extension Versions
-To check which versions of an extension are available for your current database installation, execute:
+#### Available extensions and their versions
+To check which versions of an extension are available for your current database installation, query the `pg_available_extensions` system catalog view. For example, to determine the version available for the `azure_ai` extension, execute:
```sql SELECT * FROM pg_available_extensions WHERE name = 'azure_ai';
SELECT * FROM pg_available_extensions WHERE name = 'azure_ai';
These commands provide necessary insights into the extension configurations of your database, helping maintain your systems efficiently and securely. By enabling easy updates to the latest extension versions, Azure Database for PostgreSQL continues to support the robust, secure, and efficient management of your database applications.
-## dblink and postgres_fdw
+## Considerations specific to Azure Database for PostgreSQL flexible server
+The following is an alphabetically sorted list of supported extensions that require specific considerations when used in the Azure Database for PostgreSQL flexible server service.
-[dblink](https://www.postgresql.org/docs/current/contrib-dblink-function.html) and [postgres_fdw](https://www.postgresql.org/docs/current/postgres-fdw.html) allow you to connect from one Azure Database for PostgreSQL flexible server instance to another, or to another database in the same server. Azure Database for PostgreSQL flexible server supports both incoming and outgoing connections to any PostgreSQL server. The sending server needs to allow outbound connections to the receiving server. Similarly, the receiving server needs to allow connections from the sending server.
+### dblink
-We recommend deploying your servers with [virtual network integration](concepts-networking.md) if you plan to use these two extensions. By default virtual network integration allows connections between servers in the virtual network. You can also choose to use [virtual network network security groups](../../virtual-network/manage-network-security-group.md) to customize access.
+[dblink](https://www.postgresql.org/docs/current/contrib-dblink-function.html) allows you to connect from one Azure Database for PostgreSQL flexible server instance to another, or to another database in the same server. Azure Database for PostgreSQL flexible server supports both incoming and outgoing connections to any PostgreSQL server. The sending server needs to allow outbound connections to the receiving server. Similarly, the receiving server needs to allow connections from the sending server.
-## pg_prewarm
+We recommend deploying your servers with [virtual network integration](concepts-networking.md) if you plan to use this extension. By default virtual network integration allows connections between servers in the virtual network. You can also choose to use [virtual network network security groups](../../virtual-network/manage-network-security-group.md) to customize access.
-The `pg_prewarm` extension loads relational data into cache. Prewarming your caches means that your queries have better response times on their first run after a restart. The auto-prewarm functionality isn't currently available in Azure Database for PostgreSQL flexible server.
+### pg_buffercache
+
+`pg_buffercache` can be used to study the contents of *shared_buffers*. Using [this extension](https://www.postgresql.org/docs/current/pgbuffercache.html), you can tell whether a particular relation is cached (in `shared_buffers`) or not. This extension can help you troubleshoot caching-related performance issues.
+
+This extension is integrated with the core installation of PostgreSQL, and it's easy to install:
+
+```sql
+CREATE EXTENSION pg_buffercache;
+```
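+
+Once installed, you can query the view that the extension provides. The following sketch, adapted from the upstream `pg_buffercache` documentation, lists the relations in the current database that occupy the most shared buffers:
+
+```sql
+-- Top 10 relations by number of shared buffers occupied.
+SELECT c.relname, count(*) AS buffers
+FROM pg_buffercache b
+JOIN pg_class c
+  ON b.relfilenode = pg_relation_filenode(c.oid)
+ AND b.reldatabase IN (0, (SELECT oid FROM pg_database WHERE datname = current_database()))
+GROUP BY c.relname
+ORDER BY buffers DESC
+LIMIT 10;
+```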
-## pg_cron
+### pg_cron
[pg_cron](https://github.com/citusdata/pg_cron/) is a simple, cron-based job scheduler for PostgreSQL that runs inside the database as an extension. The `pg_cron` extension can be used to run scheduled maintenance tasks within a PostgreSQL database. For example, you can run periodic vacuum of a table or removing old data jobs.
-`pg_cron` can run multiple jobs in parallel, but it runs at most one instance of a job at a time. If a second run is supposed to start before the first one finishes, then the second run is queued and started as soon as the first run completes. This ensures that jobs run exactly as many times as scheduled and don't run concurrently with themselves.
+`pg_cron` can run multiple jobs in parallel, but it runs at most one instance of a job at a time. If a second run is supposed to start before the first one finishes, then the second run is queued and started as soon as the first run completes. This ensures that jobs run exactly as many times as scheduled and don't run concurrently with themselves.
Some examples:
To update or change the database name for the existing schedule
SELECT cron.alter_job(job_id:=MyJobID,database:='NewDBName'); ```
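
As another illustration, the upstream `cron.schedule` function registers a new job. In the following sketch, the job name, cron expression, and command are placeholders:

```sql
-- Illustrative only: vacuum a table every day at 03:00 GMT.
SELECT cron.schedule('nightly-vacuum', '0 3 * * *', 'VACUUM pgbench_accounts');
```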
-## pg_failover_slots (preview)
+### pg_failover_slots (preview)
The PG Failover Slots extension enhances Azure Database for PostgreSQL flexible server when operating servers with both logical replication and high availability enabled. It effectively addresses the challenge within the standard PostgreSQL engine that doesn't preserve logical replication slots after a failover. Maintaining these slots is critical to prevent replication pauses or data mismatches during primary server role changes, ensuring operational continuity and data integrity.
The extension is supported for PostgreSQL versions 11 to 16.
You can find more information and how to use the PG Failover Slots extension on its [GitHub page](https://github.com/EnterpriseDB/pg_failover_slots).
-### Enable pg_failover_slots
+#### Enable pg_failover_slots
To enable the PG Failover Slots extension for your Azure Database for PostgreSQL flexible server instance, you need to modify the server's configuration by including the extension in the server's shared preload libraries and adjusting a specific server parameter. Here's the process:
To enable the PG Failover Slots extension for your Azure Database for PostgreSQL
Any changes to the `shared_preload_libraries` parameter require a server restart to take effect.
-Follow these steps in the Azure portal:
+Using the [Azure portal](https://portal.azure.com):
-1. Sign in to the [Azure portal](https://portal.azure.com/) and go to your Azure Database for PostgreSQL flexible server instance's page.
-1. In the menu on the left, select **Server parameters**.
-1. Find the `shared_preload_libraries` parameter in the list and edit its value to include `pg_failover_slots`.
+1. Select your Azure Database for PostgreSQL flexible server instance.
+1. From the resource menu, under the **Settings** section, select **Server parameters**.
+1. Search for the `shared_preload_libraries` parameter and edit its value to include `pg_failover_slots`.
1. Search for the `hot_standby_feedback` parameter and set its value to `on`.
-1. Select on **Save** to preserve your changes. Now, you'll have the option to **Save and restart**. Choose this to ensure that the changes take effect since modifying `shared_preload_libraries` requires a server restart.
+1. Select **Save** to preserve your changes. You then have the option to **Save and restart**. Choose this option to ensure that the changes take effect, because modifying `shared_preload_libraries` requires a server restart.
+
+When you select **Save and restart**, the server automatically reboots and applies your changes. Once the server is back online, the PG Failover Slots extension is enabled and operational on your primary Azure Database for PostgreSQL flexible server instance, ready to handle logical replication slots during failovers.
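+
+As a quick sanity check (a minimal sketch, assuming you already created at least one logical replication slot), you can confirm the library is loaded and list the logical slots the extension maintains:
+
+```sql
+-- Confirm that pg_failover_slots was loaded after the restart
+SHOW shared_preload_libraries;
+
+-- List logical replication slots, which the extension preserves across failovers
+SELECT slot_name, plugin, database
+FROM pg_replication_slots
+WHERE slot_type = 'logical';
+```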
+
+### pg_hint_plan
+
+`pg_hint_plan` makes it possible to tweak PostgreSQL execution plans using so-called "hints" in SQL comments, like:
+
+```sql
+/*+ SeqScan(a) */
+```
+`pg_hint_plan` reads hinting phrases from a comment of special form given with the target SQL statement. The special form begins with the character sequence "/\*+" and ends with "\*/". Hint phrases consist of a hint name and its parameters, enclosed in parentheses and delimited by spaces. For readability, each hinting phrase can be placed on its own line.
+
+Example:
+
+```sql
+ /*+
+ HashJoin(a b)
+ SeqScan(a)
+ */
+ SELECT *
+ FROM pgbench_branches b
+ JOIN pgbench_accounts a ON b.bid = a.bid
+ ORDER BY a.aid;
+```
+The previous example causes the planner to use the results of a sequential scan on table `a`, combining them with table `b` by using a hash join.
+
+To install `pg_hint_plan`, in addition to allowlisting it as shown in [how to use PostgreSQL extensions](#how-to-use-postgresql-extensions), you need to include it in the server's shared preload libraries. A change to Postgres's `shared_preload_libraries` parameter requires a **server restart** to take effect. You can change parameters using the [Azure portal](how-to-configure-server-parameters-using-portal.md) or the [Azure CLI](how-to-configure-server-parameters-using-cli.md).
+
+Using the [Azure portal](https://portal.azure.com):
+
+1. Select your Azure Database for PostgreSQL flexible server instance.
+1. From the resource menu, under **Settings** section, select **Server parameters**.
+1. Search for the `shared_preload_libraries` parameter and edit its value to include `pg_hint_plan`.
+1. Select **Save** to preserve your changes. You then have the option to **Save and restart**. Choose this option to ensure that the changes take effect, because modifying `shared_preload_libraries` requires a server restart.
+
+You can now enable `pg_hint_plan` in your Azure Database for PostgreSQL flexible server database. Connect to the database and issue the following command:
+
+```sql
+CREATE EXTENSION pg_hint_plan;
+```
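+
+To verify that a hint takes effect, you can inspect the plan with `EXPLAIN`. The following is a minimal sketch that assumes the pgbench sample tables from the earlier example exist:
+
+```sql
+-- Force a sequential scan on pgbench_accounts and inspect the resulting plan
+/*+ SeqScan(a) */
+EXPLAIN (COSTS false) SELECT * FROM pgbench_accounts a WHERE aid = 1;
+```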
+
+### pg_prewarm
+
+The `pg_prewarm` extension loads relational data into cache. Prewarming your caches means that your queries have better response times on their first run after a restart. The auto-prewarm functionality isn't currently available in Azure Database for PostgreSQL flexible server.
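+
+For illustration, after creating the extension you can load a table into `shared_buffers` manually; the `accounts` table name here is hypothetical:
+
+```sql
+CREATE EXTENSION pg_prewarm;
+
+-- Returns the number of blocks loaded into shared buffers
+SELECT pg_prewarm('accounts');
+```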
+
+### pg_repack
+
+A typical question people ask when they first try to use this extension is: Is pg_repack an extension or a client-side executable like psql or pg_dump?
+
+The answer is that it's both. [pg_repack/lib](https://github.com/reorg/pg_repack/tree/master/lib) holds the code for the extension, including the schema and SQL artifacts it creates, and the C library implementing several of those functions. [pg_repack/bin](https://github.com/reorg/pg_repack/tree/master/bin), on the other hand, holds the code for the client application, which knows how to interact with the programmability artifacts created by the extension. The client application aims to reduce the complexity of interacting with the different interfaces surfaced by the server-side extension by offering command-line options that are easier to understand. Without the extension created on the database it points to, the client application is useless. The server-side extension on its own is fully functional, but it would require the user to understand a complicated interaction pattern that consists of executing queries to retrieve data used as input to functions implemented by the extension.
+
+#### Permission denied for schema repack
+
+As of now, because of the way in which we grant permissions to the repack schema created by this extension, running pg_repack functionality is only supported from the context of a member of `azure_pg_admin`.
+
+If the owner of a table who isn't a member of `azure_pg_admin` tries to run pg_repack, they receive an error like the following:
+
+```
+NOTICE: Setting up workers.conns
+ERROR: pg_repack failed with error: ERROR: permission denied for schema repack
+LINE 1: select repack.version(), repack.version_sql()
+```
-By selecting **Save and restart**, your server will automatically reboot, applying the changes you've made. Once the server is back online, the PG Failover Slots extension is enabled and operational on your primary Azure Database for PostgreSQL flexible server instance, ready to handle logical replication slots during failovers.
+To avoid that error, make sure you run pg_repack from the context of a member of `azure_pg_admin`.
-## pg_stat_statements
+### pg_stat_statements
The [pg_stat_statements extension](https://www.postgresql.org/docs/current/pgstatstatements.html) gives you a view of all the queries that have run on your database. That's useful for understanding how your query workload performs on a production system.
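
For example, a query like the following (an illustrative sketch; on PostgreSQL versions earlier than 13, the columns are named `total_time` and `mean_time`) surfaces the statements that consumed the most cumulative execution time:

```sql
-- Top five statements by cumulative execution time
SELECT query, calls, total_exec_time, mean_exec_time, rows
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 5;
```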
The setting `pg_stat_statements.track`, which controls what statements are count
There's a tradeoff between the query execution information `pg_stat_statements` provides and the impact on server performance as it logs each SQL statement. If you aren't actively using the `pg_stat_statements` extension, we recommend that you set `pg_stat_statements.track` to `none`. Some third-party monitoring services might rely on `pg_stat_statements` to deliver query performance insights, so confirm whether that's the case for you before disabling it.
-## TimescaleDB
+### postgres_fdw
-TimescaleDB is a time-series database that is packaged as an extension for PostgreSQL. TimescaleDB provides time-oriented analytical functions, optimizations, and scales Postgres for time-series workloads.
-[Learn more about TimescaleDB](https://docs.timescale.com/timescaledb/latest/), a registered trademark of Timescale, Inc. Azure Database for PostgreSQL flexible server provides the TimescaleDB [Apache-2 edition](https://www.timescale.com/legal/licenses).
-### Install TimescaleDB
+[postgres_fdw](https://www.postgresql.org/docs/current/postgres-fdw.html) allows you to connect from one Azure Database for PostgreSQL flexible server instance to another, or to another database in the same server. Azure Database for PostgreSQL flexible server supports both incoming and outgoing connections to any PostgreSQL server. The sending server needs to allow outbound connections to the receiving server. Similarly, the receiving server needs to allow connections from the sending server.
-To install TimescaleDB, in addition, to allow listing it, as shown [above](#how-to-use-postgresql-extensions), you need to include it in the server's shared preload libraries. A change to Postgres's `shared_preload_libraries` parameter requires a **server restart** to take effect. You can change parameters using the [Azure portal](how-to-configure-server-parameters-using-portal.md) or the [Azure CLI](how-to-configure-server-parameters-using-cli.md).
-Using the [Azure portal](https://portal.azure.com/):
+We recommend deploying your servers with [virtual network integration](concepts-networking.md) if you plan to use this extension. By default, virtual network integration allows connections between servers in the virtual network. You can also choose to use [network security groups](../../virtual-network/manage-network-security-group.md) to customize access.
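+
+As an illustrative sketch, the typical setup looks like the following; the host name, credentials, and the `inventory` table are placeholders:
+
+```sql
+CREATE EXTENSION postgres_fdw;
+
+-- Register the remote server
+CREATE SERVER remote_server
+    FOREIGN DATA WRAPPER postgres_fdw
+    OPTIONS (host 'other-server.postgres.database.azure.com', port '5432', dbname 'postgres');
+
+-- Map the local user to credentials on the remote server
+CREATE USER MAPPING FOR CURRENT_USER
+    SERVER remote_server
+    OPTIONS (user 'remote_user', password 'remote_password');
+
+-- Expose a remote table locally and query it
+IMPORT FOREIGN SCHEMA public LIMIT TO (inventory)
+    FROM SERVER remote_server INTO public;
+
+SELECT * FROM inventory LIMIT 10;
+```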
-1. Select your Azure Database for PostgreSQL flexible server instance.
+### TimescaleDB
-1. On the sidebar, select **Server Parameters**.
+TimescaleDB is a time-series database that is packaged as an extension for PostgreSQL. TimescaleDB provides time-oriented analytical functions, optimizations, and scales Postgres for time-series workloads.
+[Learn more about TimescaleDB](https://docs.timescale.com/timescaledb/latest/), a registered trademark of Timescale, Inc. Azure Database for PostgreSQL flexible server provides the TimescaleDB [Apache-2 edition](https://www.timescale.com/legal/licenses).
-1. Search for the `shared_preload_libraries` parameter.
+#### Install TimescaleDB
-1. Select **TimescaleDB**.
+To install TimescaleDB, in addition to allowlisting it as shown [above](#how-to-use-postgresql-extensions), you need to include it in the server's shared preload libraries. A change to Postgres's `shared_preload_libraries` parameter requires a **server restart** to take effect. You can change parameters using the [Azure portal](how-to-configure-server-parameters-using-portal.md) or the [Azure CLI](how-to-configure-server-parameters-using-cli.md).
-1. Select **Save** to preserve your changes. You get a notification once the change is saved.
-1. After the notification, **restart** the server to apply these changes.
+Using the [Azure portal](https://portal.azure.com):
+1. Select your Azure Database for PostgreSQL flexible server instance.
+1. From the resource menu, under **Settings** section, select **Server parameters**.
+1. Search for the `shared_preload_libraries` parameter and edit its value to include `TimescaleDB`.
+1. Select **Save** to preserve your changes. You then have the option to **Save and restart**. Choose this option to ensure that the changes take effect, because modifying `shared_preload_libraries` requires a server restart.

You can now enable TimescaleDB in your Azure Database for PostgreSQL flexible server database. Connect to the database and issue the following command:

```sql
CREATE EXTENSION IF NOT EXISTS timescaledb CASCADE;
```

You can now create a TimescaleDB hypertable [from scratch](https://docs.timescale.com/getting-started/creating-hypertables) or migrate [existing time-series data in PostgreSQL](https://docs.timescale.com/getting-started/migrating-data).
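
For example, creating a hypertable from scratch is a two-step operation, shown here as a minimal sketch with a hypothetical `conditions` table:

```sql
-- A regular table for time-series data
CREATE TABLE conditions (
    time        TIMESTAMPTZ      NOT NULL,
    location    TEXT             NOT NULL,
    temperature DOUBLE PRECISION
);

-- Convert it into a hypertable partitioned by the time column
SELECT create_hypertable('conditions', 'time');
```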
-### Restore a Timescale database using pg_dump and pg_restore
+#### Restore a Timescale database using pg_dump and pg_restore
To restore a Timescale database using pg_dump and pg_restore, you must run two helper procedures in the destination database: `timescaledb_pre_restore()` and `timescaledb_post_restore()`. First, prepare the destination database:

```sql
--create the new database where you want to perform the restore
CREATE DATABASE tutorial;

\c tutorial --connect to the database

CREATE EXTENSION timescaledb;
```
SELECT timescaledb_post_restore();
For more details on restore method with Timescale enabled database, see [Timescale documentation](https://docs.timescale.com/timescaledb/latest/how-to-guides/backup-and-restore/pg-dump-and-restore/#restore-your-entire-database-from-backup).
-### Restore a Timescale database using timescaledb-backup
+#### Restore a Timescale database using timescaledb-backup
While running the `SELECT timescaledb_post_restore()` procedure listed above, you might get a permission denied error when updating the `timescaledb.restoring` flag. This is due to limited `ALTER DATABASE` permission in cloud PaaS database services. In this case, you can use the alternative `timescaledb-backup` tool to back up and restore the Timescale database. Timescaledb-backup is a program that makes dumping and restoring a TimescaleDB database simpler, less error-prone, and more performant. To do so, follow these steps:
To do so, you should do following
1. Run [ts-restore](https://github.com/timescale/timescaledb-backup#using-ts-restore) to restore the database.

More details on these utilities can be found [here](https://github.com/timescale/timescaledb-backup).
-> [!NOTE]
-> When using `timescale-backup` utilities to restore to Azure, since database user names for Azure Database for PostgreSQL single server must use the `<user@db-name>` format, you need to replace `@` with `%40` character encoding.
-
-## pg_hint_plan
-
-`pg_hint_plan` makes it possible to tweak PostgreSQL execution plans using so-called "hints" in SQL comments, like:
-
-```sql
-/*+ SeqScan(a) */
-```
-`pg_hint_plan` reads hinting phrases in a comment of special form given with the target SQL statement. The special form is beginning by the character sequence "/\*+" and ends with "\*/". Hint phrases consist of hint name and following parameters enclosed by parentheses and delimited by spaces. New lines for readability can delimit each hinting phrase.
-
-Example:
-
-```sql
- /*+
- HashJoin(a b)
- SeqScan(a)
- */
- SELECT *
- FROM pgbench_branches b
- JOIN pgbench_accounts an ON b.bid = a.bid
- ORDER BY a.aid;
-```
-The above example causes the planner to use the results of a `seq scan` on the table a to be combined with table b as a `hash join`.
-
-To install pg_hint_plan, in addition, to allow listing it, as shown [above](#how-to-use-postgresql-extensions), you need to include it in the server's shared preload libraries. A change to Postgres's `shared_preload_libraries` parameter requires a **server restart** to take effect. You can change parameters using the [Azure portal](how-to-configure-server-parameters-using-portal.md) or the [Azure CLI](how-to-configure-server-parameters-using-cli.md).
-Using the [Azure portal](https://portal.azure.com/):
-
-1. Select your Azure Database for PostgreSQL flexible server instance.
-
-1. On the sidebar, select **Server Parameters**.
-
-1. Search for the `shared_preload_libraries` parameter.
-
-1. Select **pg_hint_plan**.
-
-1. Select **Save** to preserve your changes. You get a notification once the change is saved.
-
-1. After the notification, **restart** the server to apply these changes.
-
-You can now enable pg_hint_plan your Azure Database for PostgreSQL flexible server database. Connect to the database and issue the following command:
-
-```sql
-CREATE EXTENSION pg_hint_plan;
-```
-
-## pg_buffercache
-
-`Pg_buffercache` can be used to study the contents of *shared_buffers*. Using [this extension](https://www.postgresql.org/docs/current/pgbuffercache.html) you can tell if a particular relation is cached or not (in `shared_buffers`). This extension can help you troubleshooting performance issues (caching related performance issues).
-
-This is part of contrib, and it's easy to install this extension.
-
-```sql
-CREATE EXTENSION pg_buffercache;
-```
## Extensions and Major Version Upgrade
postgresql Concepts Maintenance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-maintenance.md
description: This article describes the scheduled maintenance feature in Azure D
Previously updated : 04/27/2024 Last updated : 06/27/2024 -+ # Scheduled maintenance in Azure Database for PostgreSQL - Flexible Server
Azure Database for PostgreSQL flexible server performs periodic maintenance to help keep your managed database secure, stable, and up to date. During maintenance, the server gets new features, updates, and patches.
-> [!IMPORTANT]
+> [!IMPORTANT]
> Avoid all server operations (modifications, configuration changes, starting/stopping the server) during Azure Database for PostgreSQL flexible server maintenance. Engaging in these activities can lead to unpredictable outcomes and possibly affect server performance and stability. Wait until maintenance concludes before you conduct server operations.

## Select a maintenance window
The system sends maintenance notifications 5 days in advance so that you have am
Notifications about upcoming scheduled maintenance can be:
-* Emailed to a specific address.
-* Emailed to an Azure Resource Manager role.
-* Sent in a text message to mobile devices.
-* Pushed as a notification to an Azure app.
-* Delivered as a voice message.
+- Emailed to a specific address.
+- Emailed to an Azure Resource Manager role.
+- Sent in a text message to mobile devices.
+- Pushed as a notification to an Azure app.
+- Delivered as a voice message.
When you're specifying preferences for the maintenance schedule, you can choose a day of the week and a time window. If you don't specify a time window, the system chooses times between 11:00 PM and 7:00 AM in your server region's time. You can define different schedules for each Azure Database for PostgreSQL flexible server instance in your Azure subscription.
You can update schedule settings at any time. If maintenance is scheduled for yo
## System-managed vs. custom maintenance schedules
-You can define a system-managed schedule or a custom schedule for each Azure Database for PostgreSQL flexible server instance in your Azure subscription:
+You can define a system-managed schedule or a custom schedule for each Azure Database for PostgreSQL flexible server instance in your Azure subscription:
-* With a system-managed schedule, the system chooses any 1-hour window between 11:00 PM and 7:00 AM in your server region's time.
-* With a custom schedule, you can specify your maintenance window for the server by choosing the day of the week and a 1-hour time window.
+- With a system-managed schedule, the system chooses any one-hour window between 11:00 PM and 7:00 AM in your server region's time.
+- With a custom schedule, you can specify your maintenance window for the server by choosing the day of the week and a one-hour time window.
-Updates are first applied to servers with system-managed schedules, followed by servers with custom schedules after at least 7 days within a region. To receive early updates for development and test servers, use a system-managed schedule. This choice allows early testing and issue resolution before updates reach production servers with custom schedules.
+Updates are first applied to servers with system-managed schedules, followed by servers with custom schedules after at least seven days within a region. To receive early updates for development and test servers, use a system-managed schedule. This choice allows early testing and issue resolution before updates reach production servers with custom schedules.
-Updates for custom-schedule servers begin 7 days later, during a defined maintenance window. After you're notified, you can't defer updates. We advise that you use custom schedules for production environments only.
+Updates for custom-schedule servers begin seven days later, during a defined maintenance window. After you're notified, you can't defer updates. We advise that you use custom schedules for production environments only.
In rare cases, maintenance events can be canceled by the system or fail to finish successfully. If an update fails, it's reverted, and the previous version of the binaries is restored. The server might still restart during the maintenance window.
-If an update is canceled or failed, the system creates a notification about the canceled or failed maintenance event. The next attempt to perform maintenance is scheduled according to your current schedule settings, and you receive a notification about it 5 days in advance.
+If an update is canceled or failed, the system creates a notification about the canceled or failed maintenance event. The next attempt to perform maintenance is scheduled according to your current schedule settings, and you receive a notification about it five days in advance.
+
+## Considerations and limitations
+
+Keep the following considerations in mind about monthly maintenance:
+
+- Monthly maintenance is impactful and involves some downtime.
+  - Downtime depends on the transactional load on the server at the time of maintenance.
-## Next steps
+## Related content
-* Learn how to [change the maintenance schedule](how-to-maintenance-portal.md).
-* Learn how to [get notifications about upcoming maintenance](../../service-health/service-notifications.md) by using Azure Service Health.
-* Learn how to [set up alerts for upcoming scheduled maintenance events](../../service-health/resource-health-alert-monitor-guide.md).
+- [change the maintenance schedule](how-to-maintenance-portal.md)
+- [get notifications about upcoming maintenance](../../service-health/service-notifications.md)
+- [set up alerts for upcoming scheduled maintenance events](../../service-health/resource-health-alert-monitor-guide.md)
postgresql Concepts Networking Private https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-networking-private.md
Title: Networking overview with private access (VNET)
-description: Learn about connectivity and networking options for Azure Database for PostgreSQL - Flexible Server with private access (VNET).
+ Title: Networking overview with private access (virtual network)
+description: Learn about connectivity and networking options for Azure Database for PostgreSQL - Flexible Server with private access (virtual network).
Previously updated : 04/27/2024 Last updated : 06/27/2024 -+
-# Networking overview for Azure Database for PostgreSQL - Flexible Server with private access (VNET Integration)
+# Network with private access (virtual network integration) for Azure Database for PostgreSQL - Flexible Server
[!INCLUDE [applies-to-postgresql-flexible-server](~/reusable-content/ce-skilling/azure/includes/postgresql/includes/applies-to-postgresql-flexible-server.md)]

This article describes connectivity and networking concepts for Azure Database for PostgreSQL flexible server.
-When you create an Azure Database for PostgreSQL flexible server instance, you must choose one of the following networking options: **Private access (VNet integration)** or **Public access (allowed IP addresses) and Private Endpoint**. This document will describe **Private access (VNet integration)** networking option.
+When you create an Azure Database for PostgreSQL flexible server instance, you must choose one of the following networking options: **Private access (VNet integration)** or **Public access (allowed IP addresses) and Private Endpoint**. This document describes the **Private access (VNet integration)** networking option.
-## Private access (VNet integration)
+## Private access (virtual network integration)
-You can deploy an Azure Database for PostgreSQL flexible server instance into your [Azure virtual network (VNet)](../../virtual-network/virtual-networks-overview.md) using **[VNET injection](../../virtual-network/virtual-network-for-azure-services.md)**. Azure virtual networks provide private and secure network communication. Resources in a virtual network can communicate through **private IP addresses** that were assigned on this network.
+You can deploy an Azure Database for PostgreSQL flexible server instance into your [Azure virtual network](../../virtual-network/virtual-networks-overview.md) using **[VNET injection](../../virtual-network/virtual-network-for-azure-services.md)**. Azure virtual networks provide private and secure network communication. Resources in a virtual network can communicate through **private IP addresses** that were assigned on this network.
Choose this networking option if you want the following capabilities:
-* Connect from Azure resources in the same virtual network to your Azure Database for PostgreSQL flexible server instance by using private IP addresses.
-* Use VPN or Azure ExpressRoute to connect from non-Azure resources to your Azure Database for PostgreSQL flexible server instance.
-* Ensure that the Azure Database for PostgreSQL flexible server instance has no public endpoint that's accessible through the internet.
+- Connect from Azure resources in the same virtual network to your Azure Database for PostgreSQL flexible server instance by using private IP addresses.
+- Use VPN or Azure ExpressRoute to connect from non-Azure resources to your Azure Database for PostgreSQL flexible server instance.
+- Ensure that the Azure Database for PostgreSQL flexible server instance has no public endpoint that's accessible through the internet.
In the preceding diagram:

- Azure Database for PostgreSQL flexible server instances are injected into subnet 10.0.1.0/24 of the VNet-1 virtual network.
In the preceding diagram:
An Azure virtual network contains a private IP address space that's configured for your use. Your virtual network must be in the same Azure region as your Azure Database for PostgreSQL flexible server instance. To learn more about virtual networks, see the [Azure Virtual Network overview](../../virtual-network/virtual-networks-overview.md).
-Here are some concepts to be familiar with when you're using virtual networks where resources are [integrated into virtual network](../../virtual-network/virtual-network-for-azure-services.md) with Azure Database for PostgreSQL flexible server instances:
+Here are some concepts to be familiar with when you're using virtual networks where resources are [integrated into virtual network](../../virtual-network/virtual-network-for-azure-services.md) with Azure Database for PostgreSQL flexible server instances:
-* **Delegated subnet**. A virtual network contains subnets (subnetworks). Subnets enable you to segment your virtual network into smaller address spaces. Azure resources are deployed into specific subnets within a virtual network.
-
- Your VNET integrated Azure Database for PostgreSQL flexible server instance must be in a subnet that's *delegated*. That is, only Azure Database for PostgreSQL flexible server instances can use that subnet. No other Azure resource types can be in the delegated subnet. You delegate a subnet by assigning its delegation property as `Microsoft.DBforPostgreSQL/flexibleServers`.
- The smallest CIDR range you can specify for the subnet is /28, which provides 16 IP addresses, however the first and last address in any network or subnet can't be assigned to any individual host. Azure reserves five IPs to be utilized internally by Azure networking, which include two IPs that can't be assigned to host, mentioned above. This leaves you 11 available IP addresses for /28 CIDR range, whereas a single Azure Database for PostgreSQL flexible server instance with High Availability features utilizes four addresses.
- For Replication and Microsoft Entra connections, please make sure Route Tables don't affect traffic.A common pattern is routed all outbound traffic via an Azure Firewall or a custom on-premises network filtering appliance.
- If the subnet has a Route Table associated with the rule to route all traffic to a virtual appliance:
- * Add a rule with Destination Service Tag ΓÇ£AzureActiveDirectoryΓÇ¥ and next hop ΓÇ£InternetΓÇ¥
- * Add a rule with Destination IP range same as the Azure Database for PostgreSQL flexible server subnet range and next hop ΓÇ£Virtual NetworkΓÇ¥
+- **Delegated subnet**. A virtual network contains subnets (subnetworks). Subnets enable you to segment your virtual network into smaller address spaces. Azure resources are deployed into specific subnets within a virtual network.
+ Your virtual network integrated Azure Database for PostgreSQL flexible server instance must be in a subnet that's *delegated*. That is, only Azure Database for PostgreSQL flexible server instances can use that subnet. No other Azure resource types can be in the delegated subnet. You delegate a subnet by assigning its delegation property as `Microsoft.DBforPostgreSQL/flexibleServers`.
+The smallest CIDR range you can specify for the subnet is /28, which provides 16 IP addresses. However, the first and last address in any network or subnet can't be assigned to any individual host. Azure reserves five IPs for internal use by Azure networking, including the two IPs just mentioned that can't be assigned to a host. This leaves you 11 available IP addresses in a /28 CIDR range, whereas a single Azure Database for PostgreSQL flexible server instance with High Availability features utilizes four addresses.
+For Replication and Microsoft Entra connections, make sure Route Tables don't affect traffic. A common pattern is to route all outbound traffic via an Azure Firewall or a custom on-premises network filtering appliance.
+If the subnet has a Route Table associated with the rule to route all traffic to a virtual appliance:
+ * Add a rule with Destination Service Tag "AzureActiveDirectory" and next hop "Internet"
+ * Add a rule with Destination IP range same as the Azure Database for PostgreSQL flexible server subnet range and next hop "Virtual Network"
> [!IMPORTANT]
> The names `AzureFirewallSubnet`, `AzureFirewallManagementSubnet`, `AzureBastionSubnet`, and `GatewaySubnet` are reserved within Azure. Don't use any of these as your subnet name.
-* **Network security group (NSG)**. Security rules in NSGs enable you to filter the type of network traffic that can flow in and out of virtual network subnets and network interfaces. For more information, see the [NSG overview](../../virtual-network/network-security-groups-overview.md).
+- **Network security group (NSG)**. Security rules in NSGs enable you to filter the type of network traffic that can flow in and out of virtual network subnets and network interfaces. For more information, see the [NSG overview](../../virtual-network/network-security-groups-overview.md).
Application security groups (ASGs) make it easy to control Layer-4 security by using NSGs for flat networks. You can quickly:
Here are some concepts to be familiar with when you're using virtual networks wh
At this time, we don't support NSGs where an ASG is part of the rule with Azure Database for PostgreSQL flexible server. We currently advise using [IP-based source or destination filtering](../../virtual-network/network-security-groups-overview.md#security-rules) in an NSG.
- > [!IMPORTANT]
- > High availability and other Features of Azure Database for PostgreSQL flexible server require ability to send/receive traffic to **destination port 5432** within Azure virtual network subnet where Azure Database for PostgreSQL flexible server is deployed, as well as to **Azure storage** for log archival. If you create **[Network Security Groups (NSG)](../../virtual-network/network-security-groups-overview.md)** to deny traffic flow to or from your Azure Database for PostgreSQL flexible server instance within the subnet where it's deployed, **make sure to allow traffic to destination port 5432** within the subnet, and also to Azure storage by using **[service tag](../../virtual-network/service-tags-overview.md) Azure Storage** as a destination. You can further [filter](../../virtual-network/tutorial-filter-network-traffic.md) this exception rule by adding your Azure region to the label like *us-east.storage*. Also, if you elect to use [Microsoft Entra authentication](concepts-azure-ad-authentication.md) to authenticate logins to your Azure Database for PostgreSQL flexible server instance, allow outbound traffic to Microsoft Entra ID using Microsoft Entra [service tag](../../virtual-network/service-tags-overview.md).
- > When setting up [Read Replicas across Azure regions](./concepts-read-replicas.md), Azure Database for PostgreSQL flexible server requires ability to send/receive traffic to **destination port 5432** for both primary and replica, as well as to **[Azure storage](../../virtual-network/service-tags-overview.md#available-service-tags)** in primary and replica regions from both primary and replica servers.
 High availability and other features of Azure Database for PostgreSQL flexible server require the ability to send and receive traffic on **destination port 5432** within the Azure virtual network subnet where Azure Database for PostgreSQL flexible server is deployed, and to **Azure storage** for log archival. If you create **[Network Security Groups (NSG)](../../virtual-network/network-security-groups-overview.md)** to deny traffic flow to or from your Azure Database for PostgreSQL flexible server instance within the subnet where it's deployed, **make sure to allow traffic to destination port 5432** within the subnet, and also to Azure storage by using **[service tag](../../virtual-network/service-tags-overview.md) Azure Storage** as a destination. You can further [filter](../../virtual-network/tutorial-filter-network-traffic.md) this exception rule by adding your Azure region to the label, like *us-east.storage*. Also, if you elect to use [Microsoft Entra authentication](concepts-azure-ad-authentication.md) to authenticate logins to your Azure Database for PostgreSQL flexible server instance, allow outbound traffic to Microsoft Entra ID using the Microsoft Entra [service tag](../../virtual-network/service-tags-overview.md).
+When setting up [Read Replicas across Azure regions](./concepts-read-replicas.md), Azure Database for PostgreSQL flexible server requires the ability to send and receive traffic on **destination port 5432** for both primary and replica, as well as to **[Azure storage](../../virtual-network/service-tags-overview.md#available-service-tags)** in the primary and replica regions from both primary and replica servers. The required destination TCP port for Azure Storage is 443.
+
+- **Private DNS zone integration**. Azure private DNS zone integration allows you to resolve the private DNS within the current virtual network or any in-region peered virtual network where the private DNS zone is linked.
-* **Private DNS zone integration**. Azure private DNS zone integration allows you to resolve the private DNS within the current virtual network or any in-region peered virtual network where the private DNS zone is linked.
-### Using a private DNS zone
+### Use a private DNS zone
[Azure Private DNS](../../dns/private-dns-overview.md) provides a reliable and secure DNS service for your virtual network. Azure Private DNS manages and resolves domain names in the virtual network without the need to configure a custom DNS solution. When using private network access with Azure virtual network, providing the private DNS zone information is **mandatory** in order to be able to do DNS resolution. For new Azure Database for PostgreSQL flexible server instance creation using private network access, private DNS zones need to be used while configuring Azure Database for PostgreSQL flexible server instances with private access. For new Azure Database for PostgreSQL flexible server instance creation using private network access with API, ARM, or Terraform, create private DNS zones and use them while configuring Azure Database for PostgreSQL flexible server instances with private access. See more information on [REST API specifications for Microsoft Azure](https://github.com/Azure/azure-rest-api-specs/blob/master/specification/postgresql/resource-manager/Microsoft.DBforPostgreSQL/stable/2021-06-01/postgresql.json). If you use the [Azure portal](./how-to-manage-virtual-network-portal.md) or [Azure CLI](./how-to-manage-virtual-network-cli.md) for creating Azure Database for PostgreSQL flexible server instances, you can either provide a private DNS zone name that you had previously created in the same or a different subscription or a default private DNS zone is automatically created in your subscription.
-If you use an Azure API, an Azure Resource Manager template (ARM template), or Terraform, **create private DNS zones that end with `.postgres.database.azure.com`**. Use those zones while configuring Azure Database for PostgreSQL flexible server instances with private access. For example, use the form `[name1].[name2].postgres.database.azure.com` or `[name].postgres.database.azure.com`. If you choose to use the form `[name].postgres.database.azure.com`, the name **can't** be the name you use for one of your Azure Databases for PostgreSQL flexible server instances or an error message will be shown during provisioning. For more information, see the [private DNS zones overview](../../dns/private-dns-overview.md).
+If you use an Azure API, an Azure Resource Manager template (ARM template), or Terraform, **create private DNS zones that end with `.postgres.database.azure.com`**. Use those zones while configuring Azure Database for PostgreSQL flexible server instances with private access. For example, use the form `[name1].[name2].postgres.database.azure.com` or `[name].postgres.database.azure.com`. If you choose to use the form `[name].postgres.database.azure.com`, the name **can't** be the name you use for one of your Azure Database for PostgreSQL flexible server instances, or an error message will be shown during provisioning. For more information, see the [private DNS zones overview](../../dns/private-dns-overview.md).
-
-Using Azure portal, API, CLI or ARM, you can also change private DNS Zone from the one you provided when creating your Azure Database for PostgreSQL flexible server instance to another private DNS zone that exists the same or different subscription.
+Using the Azure portal, API, CLI, or ARM, you can also change the private DNS zone from the one you provided when creating your Azure Database for PostgreSQL flexible server instance to another private DNS zone that exists in the same or a different subscription.
> [!IMPORTANT]
> The ability to change the private DNS zone from the one you provided when creating your Azure Database for PostgreSQL flexible server instance to another private DNS zone is currently disabled for servers with the High Availability feature enabled.

After you create a private DNS zone in Azure, you need to [link](../../dns/private-dns-virtual-network-links.md) a virtual network to it. Once linked, resources hosted in that virtual network can access the private DNS zone.

> [!IMPORTANT]
> We no longer validate virtual network link presence on server creation for Azure Database for PostgreSQL flexible server with private networking. When creating a server through the portal, we give you the choice to create the link on server creation via the checkbox *"Link Private DNS Zone your virtual network"* in the Azure portal.
The custom DNS server should be inside the virtual network or reachable via the
Private DNS zone settings and virtual network peering are independent of each other. If you want to connect to the Azure Database for PostgreSQL flexible server instance from a client that's provisioned in another virtual network from the same region or a different region, you have to **link** the private DNS zone with the virtual network. For more information, see [Link the virtual network](../../dns/private-dns-getstarted-portal.md#link-the-virtual-network). > [!NOTE]
-> Only private DNS zone names that end with **'postgres.database.azure.com'** can be linked. Your DNS zone name cannot be the same as your Azure Database for PostgreSQL flexible server instance(s) otherwise name resolution will fail.
+> Only private DNS zone names that end with **'postgres.database.azure.com'** can be linked. Your DNS zone name can't be the same as your Azure Database for PostgreSQL flexible server instance(s); otherwise, name resolution will fail.
To map a server name to the DNS record, you can run the *nslookup* command in [Azure Cloud Shell](../../cloud-shell/overview.md) using Azure PowerShell or Bash, substituting the name of your server for the <server_name> parameter in the example below:

```bash
nslookup -debug <server_name>.postgres.database.azure.com | grep 'canonical name'
```
-### Using Hub and Spoke private networking design
+### Use Hub and Spoke private networking design
Hub and spoke is a popular networking model for efficiently managing common communication or security requirements.
The spokes are also virtual networks in Azure, used to isolate individual worklo
There are three main patterns for connecting spoke virtual networks to each other:
-* **Spokes directly connected to each other**. Virtual network peerings or VPN tunnels are created between the spoke virtual networks to provide direct connectivity without traversing the hub virtual network.
-* **Spokes communicate over a network appliance**. Each spoke virtual network has a peering to Virtual WAN or to a hub virtual network. An appliance routes traffic from spoke to spoke. The appliance can be managed by Microsoft (as with Virtual WAN) or by you.
-* **Virtual Network Gateway attached to the hub network and make use of User Defined Routes (UDR)**, to enable communication between the spokes.
+- **Spokes directly connected to each other**. Virtual network peerings or VPN tunnels are created between the spoke virtual networks to provide direct connectivity without traversing the hub virtual network.
+- **Spokes communicate over a network appliance**. Each spoke virtual network has a peering to Virtual WAN or to a hub virtual network. An appliance routes traffic from spoke to spoke. The appliance can be managed by Microsoft (as with Virtual WAN) or by you.
+- **Virtual Network Gateway attached to the hub network and make use of User Defined Routes (UDR)**, to enable communication between the spokes.
Use [Azure Virtual Network Manager (AVNM)](../../virtual-network-manager/overview.md) to create new (and onboard existing) hub and spoke virtual network topologies for the central management of connectivity and security controls. ### Communication with privately networked clients in different regions
-Frequently customers have a need to connect to clients different Azure regions. More specifically, this question typically boils down to how to connect two VNETs (one of which has Azure Database for PostgreSQL - Flexible Server and another application client) that are in different regions.
+Customers frequently need to connect to clients in different Azure regions. More specifically, this question typically boils down to how to connect two VNETs (one containing Azure Database for PostgreSQL - Flexible Server and the other an application client) that are in different regions.
There are multiple ways to achieve such connectivity, some of which are:
-* **[Global VNET peering](../../virtual-network/virtual-network-peering-overview.md)**. Most common methodology, as it's the easiest way to connect networks in different regions together. Global VNET peering creates a connection over the Azure backbone directly between the two peered VNETs. This provides best network throughput and lowest latencies for connectivity using this method. When VNETs are peered, Azure will also handle the routing automatically for you, these VNETs can communicate with all resources in the peered VNET, established on a VPN gateway.
-* **[VNET-to-VNET connection](../../vpn-gateway/vpn-gateway-howto-vnet-vnet-resource-manager-portal.md)**. A VNET-to-VNET connection is essentially a VPN between the two different Azure locations. The VNET-to-VNET connection is established on a VPN gateway. This means your traffic incurs two additional traffic hops as compared to global VNET peering. There's also additional latency and lower bandwidth as compared to that method.
-* **[Communication via network appliance in Hub and Spoke architecture](#using-hub-and-spoke-private-networking-design)**.
+- **[Global VNET peering](../../virtual-network/virtual-network-peering-overview.md)**. The most common methodology, as it's the easiest way to connect networks in different regions. Global virtual network peering creates a connection over the Azure backbone directly between the two peered VNETs. This provides the best network throughput and lowest latencies for connectivity using this method. When VNETs are peered, Azure also handles the routing automatically for you; these VNETs can communicate with all resources in the peered virtual network.
+- **[VNET-to-VNET connection](../../vpn-gateway/vpn-gateway-howto-vnet-vnet-resource-manager-portal.md)**. A VNET-to-VNET connection is essentially a VPN between the two different Azure locations. The VNET-to-VNET connection is established on a VPN gateway. This means your traffic incurs two additional traffic hops compared to global virtual network peering. There's also additional latency and lower bandwidth compared to that method.
+- **Communication via a network appliance in a Hub and Spoke architecture**.
Instead of connecting spoke virtual networks directly to each other, you can use network appliances to forward traffic between spokes. Network appliances provide more network services like deep packet inspection and traffic segmentation or monitoring, but they can introduce latency and performance bottlenecks if they're not properly sized.

### Replication across Azure regions and virtual networks with private networking

Database replication is the process of copying data from a central or primary server to multiple servers known as replicas. The primary server accepts read and write operations, whereas the replicas serve read-only transactions. The primary server and replicas collectively form a database cluster. The goal of database replication is to ensure redundancy, consistency, high availability, and accessibility of data, especially in high-traffic, mission-critical applications.
-Azure Database for PostgreSQL flexible server offers two methods for replications: physical (i.e. streaming) via [built -in Read Replica feature](./concepts-read-replicas.md) and [logical replication](./concepts-logical.md). Both are ideal for different use cases, and a user may choose one over the other depending on the end goal.
+Azure Database for PostgreSQL flexible server offers two methods for replication: physical (that is, streaming) via the [built-in Read Replica feature](./concepts-read-replicas.md) and [logical replication](./concepts-logical.md). Both are ideal for different use cases, and a user might choose one over the other depending on the end goal.
-Replication across Azure regions, with separate [virtual networks (VNETs)](../../virtual-network/virtual-networks-overview.md) in each region, **requires connectivity across regional virtual network boundaries** that can be provided via **[virtual network peering](../../virtual-network/virtual-network-peering-overview.md)** or in **[Hub and Spoke architectures](#using-hub-and-spoke-private-networking-design) via network appliance**.
+Replication across Azure regions, with separate [virtual networks (VNETs)](../../virtual-network/virtual-networks-overview.md) in each region, **requires connectivity across regional virtual network boundaries** that can be provided via **[virtual network peering](../../virtual-network/virtual-network-peering-overview.md)** or in **Hub and Spoke architectures via network appliance**.
By default, **DNS name resolution** is **scoped to a virtual network**. This means that any client in one virtual network (VNET1) is unable to resolve the Azure Database for PostgreSQL flexible server FQDN in another virtual network (VNET2). In order to resolve this issue, you must make sure clients in VNET1 can access the Azure Database for PostgreSQL flexible server Private DNS Zone. This can be achieved by adding a **[virtual network link](../../dns/private-dns-virtual-network-links.md)** to the Private DNS Zone of your Azure Database for PostgreSQL flexible server instance.

### Unsupported virtual network scenarios
-Here are some limitations for working with virtual networks created via VNET integration:
-
+Here are some limitations for working with virtual networks created via virtual network integration:
-* After an Azure Database for PostgreSQL flexible server instance is deployed to a virtual network and subnet, you can't move it to another virtual network or subnet. You can't move the virtual network into another resource group or subscription.
-* Subnet size (address spaces) can't be increased after resources exist in the subnet.
-* VNET injected resources can't interact with Private Link by default. If you want to use **[Private Link](../../private-link/private-link-overview.md) for private networking, see [Azure Database for PostgreSQL flexible server networking with Private Link](./concepts-networking-private-link.md)**
+- After an Azure Database for PostgreSQL flexible server instance is deployed to a virtual network and subnet, you can't move it to another virtual network or subnet. You can't move the virtual network into another resource group or subscription.
+- Subnet size (address spaces) can't be increased after resources exist in the subnet.
+- Virtual network injected resources can't interact with Private Link by default. If you want to use **[Private Link](../../private-link/private-link-overview.md)** for private networking, see [Azure Database for PostgreSQL flexible server networking with Private Link](./concepts-networking-private-link.md).
> [!IMPORTANT]
-> Azure Resource Manager supports the ability to **lock** resources, as a security control. Resource locks are applied to the resource, and are effective across all users and roles. There are two types of resource lock: **CanNotDelete** and **ReadOnly**. These lock types can be applied either to a Private DNS zone, or to an individual record set. **Applying a lock of either type against Private DNS Zone or individual record set may interfere with the ability of Azure Database for PostgreSQL flexible server to update DNS records** and cause issues during important operations on DNS, such as High Availability failover from primary to secondary. For these reasons, please make sure you are **not** utilizing DNS private zone or record locks when utilizing High Availability features with Azure Database for PostgreSQL flexible server.
+> Azure Resource Manager supports the ability to **lock** resources, as a security control. Resource locks are applied to the resource, and are effective across all users and roles. There are two types of resource lock: **CanNotDelete** and **ReadOnly**. These lock types can be applied either to a Private DNS zone, or to an individual record set. **Applying a lock of either type against Private DNS Zone or individual record set might interfere with the ability of Azure Database for PostgreSQL flexible server to update DNS records** and cause issues during important operations on DNS, such as High Availability failover from primary to secondary. For these reasons, make sure you are **not** utilizing DNS private zone or record locks when utilizing High Availability features with Azure Database for PostgreSQL flexible server.
## Host name
Regardless of the networking option that you choose, we recommend that you alway
An example that uses an FQDN as a host name is `hostname = servername.postgres.database.azure.com`. Where possible, avoid using `hostname = 10.0.0.4` (a private address) or `hostname = 40.2.45.67` (a public address).
-## Next steps
+## Related content
-* Learn how to create an Azure Database for PostgreSQL flexible server instance by using the **Private access (VNet integration)** option in [the Azure portal](how-to-manage-virtual-network-portal.md) or [the Azure CLI](how-to-manage-virtual-network-cli.md).
+- [Azure portal](how-to-manage-virtual-network-portal.md)
+- [Azure CLI](how-to-manage-virtual-network-cli.md)
postgresql Concepts Networking Ssl Tls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-networking-ssl-tls.md
To update client applications in certificate pinning scenarios, you can download
To import certificates to client certificate stores, you might have to **convert certificate .crt files to .pem format** after downloading the certificate files from the URIs above. You can use the OpenSSL utility to do these file conversions, as shown in the example below:

```powershell
-openssl x509 -in certificate.crt -out certificate.pem -outform PEM
+openssl x509 -inform DER -in certificate.crt -out certificate.pem -outform PEM
```

**Detailed information on updating client application certificate stores with new Root CA certificates is documented in this [how-to document](../flexible-server/how-to-update-client-certificates-java.md)**.
postgresql Concepts Pgbouncer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-pgbouncer.md
Title: PgBouncer in Azure Database for PostgreSQL - Flexible Server
description: This article provides an overview of the built-in PgBouncer feature. - Previously updated : 05/22/2024 Last updated : 06/27/2024
Using an application-side pool together with PgBouncer on the database server ca
* If PgBouncer is deployed as a feature, it becomes a potential single point of failure. If the PgBouncer feature is down, it can disrupt the entire database connection pool and cause downtime for the application. To mitigate the single point of failure, you can set up multiple PgBouncer instances behind a load balancer for high availability on Azure VMs.
+* Token size restriction with Microsoft Entra (formerly Azure AD) authentication - Users with a large number of group memberships won't be able to connect through PgBouncer due to a token size restriction. Applications, services, and users with a small number of group memberships can still connect.
+
* PgBouncer is a lightweight application that uses a single-threaded architecture. This design is great for most application workloads. But in applications that create a large number of short-lived connections, this design might affect PgBouncer performance and limit your ability to scale your application. You might need to try one of these approaches:

  * Distribute the connection load across multiple PgBouncer instances on Azure VMs.
postgresql How To Connect Query Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-connect-query-guide.md
The following document includes links to examples showing how to connect and que
| Quickstart | Description |
|---|---|
|[Pgadmin](https://www.pgadmin.org/)|You can use pgAdmin to connect to the server. It simplifies the creation, maintenance, and use of database objects.|
-|[psql in Azure Cloud Shell](./quickstart-create-server-cli.md#connect-using-postgresql-command-line-client)|This article shows how to run [**psql**](https://www.postgresql.org/docs/current/static/app-psql.html) in [Azure Cloud Shell](../../cloud-shell/overview.md) to connect to your server and then run statements to query, insert, update, and delete data in the database.You can run **psql** if installed on your development environment|
+|[psql in Azure Cloud Shell](./quickstart-create-server-cli.md#connect-using-postgresql-command-line-client)|This article shows how to run [**psql**](https://www.postgresql.org/docs/current/static/app-psql.html) in [Azure Cloud Shell](../../cloud-shell/overview.md) to connect to your server and then run statements to query, insert, update, and delete data in the database. You can run **psql** if installed on your development environment|
|[Python](connect-python.md)|This quickstart demonstrates how to use Python to connect to a database and work with database objects to query data.|
|[Django with App Service](/azure/app-service/tutorial-python-postgresql-app)|This tutorial demonstrates how to use Django to create a program to connect to a database and work with database objects to query data.|
Transport Layer Security (TLS) is used by all drivers that Microsoft supplies or
Azure Database for PostgreSQL flexible server provides the ability to extend the functionality of your database using extensions. Extensions bundle multiple related SQL objects together in a single package that can be loaded or removed from your database with a single command. After being loaded in the database, extensions function like built-in features.

- [Postgres extensions](./concepts-extensions.md#extension-versions)
-- [dblink and postgres_fdw](./concepts-extensions.md#dblink-and-postgres_fdw)
+- [dblink](./concepts-extensions.md#dblink)
+- [postgres_fdw](./concepts-extensions.md#postgres_fdw)
- [pg_prewarm](./concepts-extensions.md#pg_prewarm)
- [pg_stat_statements](./concepts-extensions.md#pg_stat_statements)
postgresql Quickstart Create Server Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/quickstart-create-server-arm-template.md
Azure Database for PostgreSQL flexible server is a managed service that you use to run, manage, and scale highly available PostgreSQL databases in the cloud. You can use an Azure Resource Manager template (ARM template) to provision an Azure Database for PostgreSQL flexible server instance to deploy multiple servers or multiple databases on a server. Azure Resource Manager is the deployment and management service for Azure. It provides a management layer that enables you to create, update, and delete resources in your Azure account. You use management features, like access control, locks, and tags, to secure and organize your resources after deployment. To learn about Azure Resource Manager templates, see [Template deployment overview](../../azure-resource-manager/templates/overview.md).
postgresql Quickstart Create Server Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/quickstart-create-server-bicep.md
In this quickstart, you learn how to use a Bicep file to create an Azure Databas
Azure Database for PostgreSQL flexible server is a managed service that you use to run, manage, and scale highly available PostgreSQL databases in the cloud. You can use Bicep to provision an Azure Database for PostgreSQL flexible server instance to deploy multiple servers or multiple databases on a server. ## Prerequisites
postgresql Tutorial Django Aks Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/tutorial-django-aks-database.md
In this quickstart, you deploy a Django application on Azure Kubernetes Service
## Prerequisites

- Launch [Azure Cloud Shell](https://shell.azure.com) in a new browser window. You can also [install Azure CLI](/cli/azure/install-azure-cli#install) on your local machine. If you're using a local install, sign in with Azure CLI by using the [az login](/cli/azure/reference-index#az-login) command. To finish the authentication process, follow the steps displayed in your terminal.
- Run [az version](/cli/azure/reference-index?#az-version) to find the version and dependent libraries that are installed. To upgrade to the latest version, run [az upgrade](/cli/azure/reference-index?#az-upgrade). This article requires the latest version of Azure CLI. If you're using Azure Cloud Shell, the latest version is already installed.
postgresql Sample Change Server Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/scripts/sample-change-server-configuration.md
Last updated 01/26/2022
This sample CLI script lists all available configuration parameters as well as their allowable values for Azure Database for PostgreSQL flexible server, and sets the *log_retention_days* parameter to a value other than the default. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)] ## Sample script ### Run the script
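A minimal sketch of the two operations the script performs might look like the following; all names are placeholders, and the *log_retention_days* parameter name is taken from this article, so confirm that it appears in the list output for your server.

```azurecli
# List all configuration parameters and their allowed values.
az postgres flexible-server parameter list \
    --resource-group rg-demo \
    --server-name psql-demo \
    --query "[].{name:name, value:value, allowedValues:allowedValues}" \
    --output table

# Set log_retention_days to a non-default value (the range is typically 1-7 days).
az postgres flexible-server parameter set \
    --resource-group rg-demo \
    --server-name psql-demo \
    --name log_retention_days \
    --value 7
```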
This sample CLI script lists all available configuration parameters as well as t
## Clean up deployment ```azurecli az group delete --name $resourceGroup
postgresql Sample Create Server And Firewall Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/scripts/sample-create-server-and-firewall-rule.md
Last updated 01/26/2022
This sample CLI script creates an Azure Database for PostgreSQL flexible server instance and configures a server-level firewall rule. After the script runs successfully, the Azure Database for PostgreSQL flexible server instance can be accessed from all Azure services and the configured IP address. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)] ## Sample script ### Run the script
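The core commands behind such a script might look like this sketch; all names and the IP address are placeholders, and `--public-access 0.0.0.0` is the special value that allows connections from Azure services.

```azurecli
# Create the flexible server and allow access from Azure services
# (0.0.0.0 is a special value here, not a real address range).
az postgres flexible-server create \
    --resource-group rg-demo \
    --name psql-demo \
    --location eastus \
    --admin-user demoadmin \
    --admin-password "<secure-password>" \
    --public-access 0.0.0.0

# Add a server-level firewall rule for a single client IP address.
az postgres flexible-server firewall-rule create \
    --resource-group rg-demo \
    --name psql-demo \
    --rule-name allow-client-ip \
    --start-ip-address 203.0.113.10 \
    --end-ip-address 203.0.113.10
```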
This sample CLI script creates an Azure Database for PostgreSQL flexible server
## Clean up deployment ```azurecli az group delete --name $resourceGroup
postgresql Sample Create Server With Vnet Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/scripts/sample-create-server-with-vnet-rule.md
Last updated 01/26/2022
This sample CLI script creates an Azure Database for PostgreSQL flexible server instance and configures a VNet rule. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)] ## Sample script ### Run the script
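For flexible server, VNet integration is configured at creation time; a hedged sketch with placeholder network names follows (the `--vnet` and `--subnet` parameters create or reuse the named resources and delegate the subnet to the service).

```azurecli
# Create a flexible server with private access inside a virtual network.
# The subnet is delegated to Microsoft.DBforPostgreSQL/flexibleServers.
az postgres flexible-server create \
    --resource-group rg-demo \
    --name psql-demo \
    --location eastus \
    --vnet vnet-demo \
    --subnet subnet-demo \
    --admin-user demoadmin \
    --admin-password "<secure-password>"
```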
This sample CLI script creates an Azure Database for PostgreSQL flexible server
## Clean up resources ```azurecli az group delete --name $resourceGroup
postgresql Sample Point In Time Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/scripts/sample-point-in-time-restore.md
Last updated 02/11/2022
This sample CLI script restores a single Azure Database for PostgreSQL flexible server instance to a previous point in time. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
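A minimal sketch of a point-in-time restore, with placeholder names and an arbitrary timestamp; the restore point must fall within the server's backup retention period.

```azurecli
# Restore the source server to a new server at a specific point in time (UTC).
az postgres flexible-server restore \
    --resource-group rg-demo \
    --name psql-demo-restored \
    --source-server psql-demo \
    --restore-time "2024-06-25T02:00:00Z"
```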
This sample CLI script restores a single Azure Database for PostgreSQL flexible
## Clean up deployment ```azurecli az group delete --name $resourceGroup
postgresql Sample Scale Server Up Or Down https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/scripts/sample-scale-server-up-or-down.md
This sample CLI script scales compute and storage for a single Azure Database fo
> [!IMPORTANT] > Storage can only be scaled up, not down. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)] ## Sample script ### Run the script
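The scale operations typically reduce to update calls like this sketch; names and SKUs are placeholders, and because storage can't be scaled down, the `--storage-size` value must be greater than or equal to the current size.

```azurecli
# Scale compute up or down by changing the tier and SKU.
az postgres flexible-server update \
    --resource-group rg-demo \
    --name psql-demo \
    --tier GeneralPurpose \
    --sku-name Standard_D4ds_v4

# Grow storage (in GiB). Storage can only be increased, never decreased.
az postgres flexible-server update \
    --resource-group rg-demo \
    --name psql-demo \
    --storage-size 256
```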
This sample CLI script scales compute and storage for a single Azure Database fo
## Clean up deployment ```azurecli az group delete --name $resourceGroup
postgresql Sample Server Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/scripts/sample-server-logs.md
Last updated 01/26/2022
This sample CLI script enables and downloads the slow query logs of a single Azure Database for PostgreSQL flexible server instance. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)] ## Sample script ### Run the script
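A hedged sketch of the pattern with placeholder names; the `server-logs` command group is assumed from this sample's scenario, so verify it exists in your CLI version before relying on it.

```azurecli
# Log statements that run longer than 10 seconds (value is in milliseconds).
az postgres flexible-server parameter set \
    --resource-group rg-demo \
    --server-name psql-demo \
    --name log_min_duration_statement \
    --value 10000

# List the available log files, then download one by name.
az postgres flexible-server server-logs list \
    --resource-group rg-demo \
    --server-name psql-demo \
    --output table

az postgres flexible-server server-logs download \
    --resource-group rg-demo \
    --server-name psql-demo \
    --name <log-file-name>
```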
This sample CLI script enables and downloads the slow query logs of a single Azu
## Clean up deployment ```azurecli az group delete --name $resourceGroup
postgresql How To Configure Privatelink Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-configure-privatelink-cli.md
To step through this how-to guide, you need:
- An [Azure Database for PostgreSQL server and database](quickstart-create-server-database-azure-cli.md). If you decide to install and use Azure CLI locally instead, this quickstart requires you to use Azure CLI version 2.0.28 or later. To find your installed version, run `az --version`. See [Install Azure CLI](/cli/azure/install-azure-cli) for install or upgrade info.
postgresql How To Configure Server Parameters Using Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-configure-server-parameters-using-powershell.md
To complete this how-to guide, you need:
If you choose to use PowerShell locally, connect to your Azure account using the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet. ## List server configuration parameters for Azure Database for PostgreSQL server
postgresql How To Manage Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-manage-server-cli.md
az account set --subscription <subscription id>
If you have not already created a server, refer to this [quickstart](quickstart-create-server-database-azure-cli.md) to create one. ## Scale compute and storage
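A hedged sketch of scaling an existing single server, with placeholder names; single server SKUs use the tier_generation_vCores naming shown here, and storage is expressed in megabytes.

```azurecli
# Scale compute on an Azure Database for PostgreSQL single server.
az postgres server update \
    --resource-group rg-demo \
    --name psql-demo \
    --sku-name GP_Gen5_4

# Grow storage (in MB for single server). Storage can only be increased.
az postgres server update \
    --resource-group rg-demo \
    --name psql-demo \
    --storage-size 102400
```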
postgresql How To Manage Vnet Using Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-manage-vnet-using-cli.md
Last updated 06/24/2022
Virtual Network (VNet) service endpoints and rules extend the private address space of a Virtual Network to your Azure Database for PostgreSQL server. Using convenient Azure CLI commands, you can create, update, delete, list, and show VNet service endpoints and rules to manage your server. For an overview of Azure Database for PostgreSQL VNet service endpoints, including limitations, see [Azure Database for PostgreSQL Server VNet service endpoints](concepts-data-access-and-security-vnet.md). VNet service endpoints are available in all supported regions for Azure Database for PostgreSQL. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
VNets and Azure service resources can be in the same or different subscriptions.
## Sample script ### Run the script
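As a sketch of the pattern with placeholder names: the subnet first needs the Microsoft.Sql service endpoint (which covers Azure Database for PostgreSQL), after which a VNet rule can reference it.

```azurecli
# Enable the service endpoint on the subnet.
az network vnet subnet update \
    --resource-group rg-demo \
    --vnet-name vnet-demo \
    --name subnet-demo \
    --service-endpoints Microsoft.Sql

# Create a VNet rule on the single server that allows traffic from the subnet.
az postgres server vnet-rule create \
    --resource-group rg-demo \
    --server-name psql-demo \
    --name allow-subnet-demo \
    --vnet-name vnet-demo \
    --subnet subnet-demo
```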
VNets and Azure service resources can be in the same or different subscriptions.
## Clean up deployment ```azurecli echo "Cleaning up resources by removing the resource group..."
postgresql How To Read Replicas Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-read-replicas-powershell.md
To complete this how-to guide, you need:
If you choose to use PowerShell locally, connect to your Azure account using the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet. > [!IMPORTANT] > The read replica feature is only available for Azure Database for PostgreSQL servers in the General
postgresql How To Restart Server Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-restart-server-powershell.md
To complete this how-to guide, you need:
If you choose to use PowerShell locally, connect to your Azure account using the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet. ## Restart the server
postgresql How To Restore Server Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-restore-server-powershell.md
To complete this how-to guide, you need:
If you choose to use PowerShell locally, connect to your Azure account using the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet. ## Set backup configuration
postgresql Quickstart Create Postgresql Server Database Using Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/quickstart-create-postgresql-server-database-using-arm-template.md
Last updated 06/24/2022
Azure Database for PostgreSQL is a managed service that you use to run, manage, and scale highly available PostgreSQL databases in the cloud. In this quickstart, you use an Azure Resource Manager template (ARM template) to create an Azure Database for PostgreSQL - single server in the Azure portal, PowerShell, or Azure CLI. If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
postgresql Quickstart Create Postgresql Server Database Using Azure Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/quickstart-create-postgresql-server-database-using-azure-powershell.md
If this is your first time using the Azure Database for PostgreSQL service, you
Register-AzResourceProvider -ProviderNamespace Microsoft.DBforPostgreSQL ``` If you have multiple Azure subscriptions, choose the appropriate subscription in which the resources should be billed. Select a specific subscription ID using the [Set-AzContext](/powershell/module/az.accounts/set-azcontext) cmdlet.
postgresql Quickstart Create Postgresql Server Database Using Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/quickstart-create-postgresql-server-database-using-bicep.md
Last updated 06/24/2022
Azure Database for PostgreSQL is a managed service that you use to run, manage, and scale highly available PostgreSQL databases in the cloud. In this quickstart, you use Bicep to create an Azure Database for PostgreSQL - single server in Azure CLI or PowerShell. ## Prerequisites
postgresql Quickstart Create Server Database Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/quickstart-create-server-database-azure-cli.md
This quickstart shows how to use [Azure CLI](/cli/azure/get-started-with-azure-c
> [!TIP] > Consider using the simpler [az postgres up](/cli/azure/postgres#az-postgres-up) Azure CLI command. Try out the [quickstart](./quickstart-create-server-up-azure-cli.md). [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)] ## Set parameter values
postgresql Quickstart Create Server Up Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/quickstart-create-server-up-azure-cli.md
Last updated 06/24/2022
Azure Database for PostgreSQL is a managed service that enables you to run, manage, and scale highly available PostgreSQL databases in the cloud. The Azure CLI is used to create and manage Azure resources from the command line or in scripts. This quickstart shows you how to use the [az postgres up](/cli/azure/postgres#az-postgres-up) command to create an Azure Database for PostgreSQL server using the Azure CLI. In addition to creating the server, the `az postgres up` command creates a sample database and a root user in the database, opens the firewall for Azure services, and creates default firewall rules for the client computer. These defaults help to expedite the development process. ## Create an Azure Database for PostgreSQL server [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)] Install the [db-up](/cli/azure/mysql) extension. If an error is returned, ensure you have installed the latest version of the Azure CLI. See [Install Azure CLI](/cli/azure/install-azure-cli).
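A hedged sketch of the flow with placeholder names; `az postgres up` shipped in the preview db-up extension, so treat the exact parameters as assumptions to verify against your CLI version.

```azurecli
# Install the db-up extension that provides `az postgres up`.
az extension add --name db-up

# Create the server, a sample database, and default firewall rules in one step.
az postgres up \
    --resource-group rg-demo \
    --server-name psql-demo \
    --admin-user demoadmin \
    --admin-password "<secure-password>"
```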
postgresql Tutorial Design Database Using Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/tutorial-design-database-using-azure-cli.md
In this tutorial, you use Azure CLI (command-line interface) and other utilities
> * Update data > * Restore data [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)] ## Set parameter values
postgresql Tutorial Design Database Using Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/tutorial-design-database-using-powershell.md
If this is your first time using the Azure Database for PostgreSQL service, you
Register-AzResourceProvider -ProviderNamespace Microsoft.DBforPostgreSQL ``` If you have multiple Azure subscriptions, choose the appropriate subscription in which the resources should be billed. Select a specific subscription ID using the [Set-AzContext](/powershell/module/az.accounts/set-azcontext) cmdlet.
private-5g-core Configure Service Sim Policy Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/configure-service-sim-policy-arm-template.md
*Services* and *SIM policies* are the key components of Azure Private 5G Core's customizable policy control, which allows you to provide flexible traffic handling. You can determine exactly how your packet core instance applies quality of service (QoS) characteristics to service data flows (SDFs) to meet your deployment's needs. For more information, see [Policy control](policy-control.md). In this how-to guide, you'll learn how to use an Azure Resource Manager template (ARM template) to create a simple service and SIM policy. If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
private-5g-core Create Site Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/create-site-arm-template.md
zone_pivot_groups: ase-pro-version
Azure Private 5G Core private mobile networks include one or more *sites*. Each site represents a physical enterprise location (for example, Contoso Corporation's Chicago factory) containing an Azure Stack Edge device that hosts a packet core instance. In this how-to guide, you'll learn how to create a site in your private mobile network using an Azure Resource Manager template (ARM template). If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
private-5g-core Create Slice Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/create-slice-arm-template.md
In this how-to guide, you'll learn how to create a slice in your private mobile network using an Azure Resource Manager template (ARM template). You can configure a slice/service type (SST) and slice differentiator (SD) for slices associated with SIMs that will be provisioned on a 5G site. If a SIM is provisioned on a 4G site, the slice associated with its SIM policy must contain an empty SD and a value of 1 for the SST. If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
private-5g-core Deploy Private Mobile Network With Site Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/deploy-private-mobile-network-with-site-arm-template.md
Azure Private 5G Core is an Azure cloud service for deploying and managing 5G co
- The default service and allow-all SIM policy (as described in [Default service and allow-all SIM policy](default-service-sim-policy.md)). - Optionally, one or more SIMs, and a SIM group. If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
private-5g-core Deploy Private Mobile Network With Site Command Line https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/deploy-private-mobile-network-with-site-command-line.md
Azure Private 5G Core is an Azure cloud service for deploying and managing 5G co
- [az mobile network sim create](/cli/azure/mobile-network/sim#az-mobile-network-sim-create) - [az mobile-network attached-data-network create](/cli/azure/mobile-network/attached-data-network#az-mobile-network-attached-data-network-create) ## Deploy a private mobile network, site and SIM
private-5g-core Provision Sims Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/provision-sims-arm-template.md
*SIM resources* represent physical SIMs or eSIMs used by user equipment (UEs) served by the private mobile network. In this how-to guide, you'll learn how to provision new SIMs for an existing private mobile network using an Azure Resource Manager template (ARM template). If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
private-5g-core Upgrade Packet Core Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/upgrade-packet-core-arm-template.md
Each Azure Private 5G Core site contains a packet core instance, which is a cloud-native implementation of the 3GPP standards-defined 5G Next Generation Core (5G NGC or 5GC). You'll need to periodically upgrade your packet core instances to get access to the latest Azure Private 5G Core features and maintain support for your private mobile network. In this how-to guide, you'll learn how to upgrade a packet core instance using an Azure Resource Manager template (ARM template). If your deployment contains multiple sites, we recommend upgrading the packet core in a single site first and ensuring the upgrade is successful before upgrading the packet cores in the remaining sites.
private-link Create Private Endpoint Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/create-private-endpoint-bicep.md
In this quickstart, you'll use Bicep to create a private endpoint. You can also create a private endpoint by using the [Azure portal](create-private-endpoint-portal.md), [Azure PowerShell](create-private-endpoint-powershell.md), the [Azure CLI](create-private-endpoint-cli.md), or an [Azure Resource Manager Template](create-private-endpoint-template.md).
private-link Create Private Endpoint Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/create-private-endpoint-cli.md
az group create \
A virtual network and subnet are required to host the private IP address for the private endpoint. You create a bastion host to connect securely to the virtual machine to test the private endpoint. You create the virtual machine in a later section. >[!NOTE]
->[!INCLUDE [Pricing](../../includes/bastion-pricing.md)]
+>[!INCLUDE [Pricing](~/reusable-content/ce-skilling/azure/includes/bastion-pricing.md)]
Create a virtual network with **[az network vnet create](/cli/azure/network/vnet#az-network-vnet-create)**.
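For example, a hedged sketch with placeholder names and documentation address ranges:

```azurecli
# Create a virtual network with one subnet to host the private endpoint's IP.
az network vnet create \
    --resource-group rg-demo \
    --name vnet-1 \
    --address-prefix 10.0.0.0/16 \
    --subnet-name subnet-1 \
    --subnet-prefixes 10.0.0.0/24
```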
az vm create \
>[!NOTE] >Virtual machines in a virtual network with a bastion host don't need public IP addresses. Bastion provides the public IP, and the VMs use private IPs to communicate within the network. You can remove the public IPs from any VMs in bastion hosted virtual networks. For more information, see [Dissociate a public IP address from an Azure VM](../virtual-network/ip-services/remove-public-ip-address-vm.md). ## Test connectivity to the private endpoint
private-link Create Private Endpoint Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/create-private-endpoint-portal.md
You can create private endpoints for various Azure services, such as Azure SQL a
Sign in to the [Azure portal](https://portal.azure.com). ## Create a private endpoint
Next, you create a private endpoint for the web app that you created in the **Pr
1. Select **Create**. ## Test connectivity to the private endpoint
Use the virtual machine that you created earlier to connect to the web app acros
1. Close the connection to **vm-1**. ## Next steps
private-link Create Private Endpoint Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/create-private-endpoint-powershell.md
New-AzResourceGroup @rg
Azure Bastion uses your browser to connect to VMs in your virtual network over secure shell (SSH) or remote desktop protocol (RDP) by using their private IP addresses. The VMs don't need public IP addresses, client software, or special configuration. For more information about Azure Bastion, see [Azure Bastion](/azure/bastion/bastion-overview). >[!NOTE]
->[!INCLUDE [Pricing](../../includes/bastion-pricing.md)]
+>[!INCLUDE [Pricing](~/reusable-content/ce-skilling/azure/includes/bastion-pricing.md)]
1. Configure an Azure Bastion subnet for your virtual network. This subnet is reserved exclusively for Azure Bastion resources and must be named **AzureBastionSubnet**.
New-AzVM -ResourceGroupName 'test-rg' -Location 'eastus2' -VM $vmConfig
>[!NOTE] >Virtual machines in a virtual network with a bastion host don't need public IP addresses. Bastion provides the public IP, and the VMs use private IPs to communicate within the network. You can remove the public IPs from any VMs in bastion hosted virtual networks. For more information, see [Dissociate a public IP address from an Azure VM](../virtual-network/ip-services/remove-public-ip-address-vm.md). ## Test connectivity to the private endpoint
private-link Create Private Endpoint Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/create-private-endpoint-template.md
In this quickstart, you'll use an Azure Resource Manager template (ARM template) to create a private endpoint. You can also create a private endpoint by using the [Azure portal](create-private-endpoint-portal.md), [Azure PowerShell](create-private-endpoint-powershell.md), or the [Azure CLI](create-private-endpoint-cli.md).
private-link Create Private Link Service Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/create-private-link-service-bicep.md
In this quickstart, you use Bicep to create a private link service.
:::image type="content" source="./media/create-private-link-service-portal/private-link-service-qs-resources.png" alt-text="Diagram of resources created in private endpoint quickstart." lightbox="./media/create-private-link-service-portal/private-link-service-qs-resources.png"::: ## Prerequisites
private-link Create Private Link Service Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/create-private-link-service-cli.md
Get started creating a Private Link service that refers to your service. Give P
:::image type="content" source="./media/create-private-link-service-portal/private-link-service-qs-resources.png" alt-text="Diagram of resources created in private endpoint quickstart." lightbox="./media/create-private-link-service-portal/private-link-service-qs-resources.png"::: [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
private-link Create Private Link Service Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/create-private-link-service-portal.md
Get started creating a Private Link service that refers to your service. Give Pr
Sign in to the [Azure portal](https://portal.azure.com) with your Azure account. ### Create load balancer
In this section, you find the IP address of the private endpoint that correspond
1. In the **Overview** page of the private endpoint nic, the IP address of the endpoint is displayed in **Private IP address**. ## Next steps
private-link Create Private Link Service Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/create-private-link-service-template.md
In this quickstart, you use an Azure Resource Manager template (ARM template) to
:::image type="content" source="./media/create-private-link-service-portal/private-link-service-qs-resources.png" alt-text="Diagram of resources created in private endpoint quickstart." lightbox="./media/create-private-link-service-portal/private-link-service-qs-resources.png"::: You can also complete this quickstart by using the [Azure portal](create-private-link-service-portal.md), [Azure PowerShell](create-private-link-service-powershell.md), or the [Azure CLI](create-private-link-service-cli.md).
private-link How To Approve Private Link Cross Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/how-to-approve-private-link-cross-subscription.md
For the private endpoint connection to complete successfully, the `Microsoft.Sto
1. Select **Create**. ## Obtain the storage account resource ID
For the private endpoint connection to complete successfully, the `Microsoft.Sto
1. Repeat the previous steps to register the `Microsoft.Network` resource provider. ## Create private endpoint
private-link Tutorial Dns On Premises Private Resolver https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/tutorial-dns-on-premises-private-resolver.md
The following resources are used in this tutorial to simulate an on-premises and
| Virtual network peer | **vnet-1-to-vnet-2** | Virtual network peer between the simulated on-premises network and cloud virtual network. | | Virtual network peer | **vnet-2-to-vnet-1** | Virtual network peer between the cloud virtual network and simulated on-premises network. | It takes a few minutes for the Bastion host deployment to complete. The Bastion host is used later in the tutorial to connect to the "on-premises" virtual machine to test the private endpoint. You can proceed to the next steps when the virtual network is created.
Repeat the previous steps to create a cloud virtual network for the Azure Web Ap
| Subnet name | **subnet-1** | | Subnet address range | **10.1.0.0/24** | [!INCLUDE [create-webapp.md](../../includes/create-webapp.md)]
In a production environment, these steps aren't needed and are only to simulate
10. Select **Save**. ## Test connectivity to private endpoint
In this section, you use the virtual machine you created in the previous step to
:::image type="content" source="./media/tutorial-dns-on-premises-private-resolver/web-app-ext-403.png" alt-text="Screenshot of web browser showing a blue page with Error 403 for external web app address." border="true"::: ## Next steps
private-link Tutorial Inspect Traffic Azure Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/tutorial-inspect-traffic-azure-firewall.md
If you don't have an Azure subscription, create a [free account](https://azure.m
Sign in to the [Azure portal](https://portal.azure.com). [!INCLUDE [virtual-network-create-private-endpoint.md](../../includes/virtual-network-create-private-endpoint.md)] ## Deploy Azure Firewall
Create an application rule to allow communication from **vnet-1** to the private
1. In the log query output, verify **server-name.database.windows.net** is listed under **FQDN** and **SQLPrivateEndpoint** is listed under **Rule**. ## Next steps
private-link Tutorial Private Endpoint Sql Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/tutorial-private-endpoint-sql-cli.md
az vm create \
--admin-username azureuser ``` ## Create an Azure SQL server
private-link Tutorial Private Endpoint Sql Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/tutorial-private-endpoint-sql-portal.md
If you don't have an Azure subscription, create a [free account](https://azure.m
Sign in to the [Azure portal](https://portal.azure.com). ## <a name ="create-a-private-endpoint"></a>Create an Azure SQL server and private endpoint
In this section, you use the virtual machine you created in the previous steps t
1. A SQL command prompt is displayed on successful sign in. Enter **exit** to exit the **sqlcmd** tool. ## Next steps
private-link Tutorial Private Endpoint Sql Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/tutorial-private-endpoint-sql-powershell.md
New-AzVMConfig @parameters2 | Set-AzVMOperatingSystem -Windows @parameters3 | Se
New-AzVM -ResourceGroupName 'CreateSQLEndpointTutorial-rg' -Location 'eastus' -VM $vmConfig ``` ## Create an Azure SQL server
private-link Tutorial Private Endpoint Storage Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/tutorial-private-endpoint-storage-portal.md
In this tutorial, you learn how to:
Sign in to the [Azure portal](https://portal.azure.com). ## Disable public access to storage account
Before you create the private endpoint, it's recommended to disable public acces
1. Select **Create**. ## Storage access key
In this section, you use the virtual machine you created in the previous steps t
1. Close the connection to **vm-1**. ## Next steps
public-multi-access-edge-compute-mec Quickstart Create Vm Azure Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/public-multi-access-edge-compute-mec/quickstart-create-vm-azure-resource-manager-template.md
In this quickstart, you learn how to use an Azure Resource Manager (ARM) template to deploy an Ubuntu Linux virtual machine (VM) in Azure public multi-access edge compute (MEC). ## Prerequisites
reliability Migrate Workload Aks Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/migrate-workload-aks-mysql.md
Using the Application Gateway Ingress Controller add-on with your AKS cluster is
#### Azure Bastion
-*Regional*: Azure Bastion is deployed within VNets or peered VNets and is associated to an Azure region. For more information, se [Bastion FAQ](../bastion/bastion-faq.md#dr).
+*Regional*: Azure Bastion is deployed within VNets or peered VNets and is associated with an Azure region. For more information, see [Reliability in Azure Bastion](reliability-bastion.md).
#### Azure Container Registry (ACR)
For your application tier, please review the business continuity and disaster re
Learn more about: > [!div class="nextstepaction"]
-> [Azure Services that support Availability Zones](availability-zones-service-support.md#azure-services-with-availability-zone-support))
+> [Azure Services that support Availability Zones](availability-zones-service-support.md#azure-services-with-availability-zone-support)
reliability Overview Reliability Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/overview-reliability-guidance.md
For a more detailed overview of reliability principles in Azure, see [Reliabilit
|Azure App Service|[Azure App Service](./reliability-app-service.md)| [Azure App Service](reliability-app-service.md#cross-region-disaster-recovery-and-business-continuity)| |Azure Application Gateway (V2)|[Autoscaling and High Availability)](../application-gateway/application-gateway-autoscaling-zone-redundant.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|| |Azure Backup|[Reliability in Azure Backup](reliability-backup.md)| [Reliability in Azure Backup](reliability-backup.md) |
-|Azure Bastion||[How do I incorporate Azure Bastion in my Disaster Recovery plan?](../bastion/bastion-faq.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json#dr) |
+|Azure Bastion|[Reliability in Azure Bastion](reliability-bastion.md) |[Reliability in Azure Bastion](reliability-bastion.md) |
|Azure Batch|[Reliability in Azure Batch](reliability-batch.md)| [Reliability in Azure Batch](reliability-batch.md#cross-region-disaster-recovery-and-business-continuity) | |Azure Cache for Redis|[Enable zone redundancy for Azure Cache for Redis](../azure-cache-for-redis/cache-how-to-zone-redundancy.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|[Configure passive geo-replication for Premium Azure Cache for Redis instances](../azure-cache-for-redis/cache-how-to-geo-replication.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json) | |Azure Communications Gateway|[Reliability in Azure Communications Gateway](../communications-gateway/reliability-communications-gateway.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|[Reliability in Azure Communications Gateway](../communications-gateway/reliability-communications-gateway.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|
reliability Reliability Bastion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-bastion.md
+
+ Title: Reliability in Azure Bastion
+description: Find out about reliability in Azure Bastion
+ Last updated : 06/24/2024
+# Reliability in Azure Bastion
+
+This article describes reliability support in Azure Bastion, covering both intra-regional resiliency with [availability zones](#availability-zone-support) and [cross-region disaster recovery and business continuity](#cross-region-disaster-recovery-and-business-continuity).
+
+For a more detailed overview of reliability in Azure, see [Azure reliability](/azure/architecture/framework/resiliency/overview).
+
+## Availability zone support
+++
+Bastion support for availability zones with a [zone-redundant](./availability-zones-overview.md#zonal-and-zone-redundant-services) configuration is currently in preview.
+
+Previously deployed Bastion resources might already be zone-redundant; those deployments are limited to the following regions:
+- Korea Central
+- Southeast Asia
+
+### Prerequisites
+
+For a zone-redundant deployment, your Bastion resource must be in one of the following regions:
+
+- East US
+- Australia East
+- East US 2
+- Central US
+- Qatar Central
+- South Africa North
+- West Europe
+- West US 2
+- North Europe
+- Sweden Central
+- UK South
+- Canada Central
+
+### SLA improvements
+
+There's no change to pricing for availability zone support.
+
+### Create a resource with availability zones enabled
+
+To choose a region for a zone-redundant configuration:
+
+1. Go to the [Azure portal](https://portal.azure.com).
+1. [Create your Bastion resource](/azure/bastion/tutorial-create-host-portal).
+
+ - For **Region**, select one of the regions listed in the [Prerequisites section](#prerequisites).
+ - For **Availability zone**, select the zones.
+
+ :::image type="content" source="./media/reliability-bastion/create-bastion-zonal.png" alt-text="Screenshot showing the Availability zone setting while creating a Bastion resource.":::
+
+>[!NOTE]
+>You can't change the availability zone setting after your Bastion resource is deployed.
++
+### Zone down experience
+
+When a zone goes down, the VM and Bastion should still be accessible. See [Reliability in Virtual Machines: Zone down experience](./reliability-virtual-machines.md#zone-down-experience) for more information on the VM zone down experience.
+
+### Migrate to availability zone support
+
+Migration from non-availability zone support to availability zone support isn't possible. Instead, you need to [create a new Bastion resource](/azure/bastion/tutorial-create-host-portal) in a region that supports availability zones and delete the old one.
+
+## Cross-region disaster recovery and business continuity
++
+Azure Bastion is deployed within virtual networks or peered virtual networks, and is associated with an Azure region. You're responsible for deploying Azure Bastion to a Disaster Recovery (DR) site virtual network.
++
+If there's an Azure region failure:
+
+1. Perform a failover operation for your VMs to the DR region. For more information on disaster recovery failover for VMs, see [Reliability in Azure Virtual Machines](./reliability-virtual-machines.md).
+
+2. Use the Azure Bastion host that's deployed in the DR region to connect to the VMs that are now deployed there.
+
+## Related content
+
+> [!div class="nextstepaction"]
+> [Reliability in Azure](/azure/availability-zones/overview)
role-based-access-control Custom Roles Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/custom-roles-bicep.md
If the [Azure built-in roles](built-in-roles.md) don't meet the specific needs of your organization, you can create your own [custom roles](custom-roles.md). This article describes how to create or update a custom role using Bicep. To create a custom role, you specify a role name, role permissions, and where the role can be used. In this article, you create a role named _Custom Role - RG Reader_ with resource permissions that can be assigned at a subscription scope or lower.
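Once the Bicep file described in this article is saved locally (the file name below is an assumption), deploying it at subscription scope might look like this sketch:

```azurecli
# Deploy a custom role definition at subscription scope.
# main.bicep is a placeholder for the Bicep file from this quickstart.
az deployment sub create \
    --location eastus \
    --template-file main.bicep
```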
role-based-access-control Custom Roles Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/custom-roles-powershell.md
If the [Azure built-in roles](built-in-roles.md) don't meet the specific needs o
For a step-by-step tutorial on how to create a custom role, see [Tutorial: Create an Azure custom role using Azure PowerShell](tutorial-custom-role-powershell.md). ## Prerequisites
role-based-access-control Custom Roles Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/custom-roles-template.md
If the [Azure built-in roles](built-in-roles.md) don't meet the specific needs of your organization, you can create your own [custom roles](custom-roles.md). This article describes how to create or update a custom role using an Azure Resource Manager template (ARM template). To create a custom role, you specify a role name, permissions, and where the role can be used. In this article, you create a role named _Custom Role - RG Reader_ with resource permissions that can be assigned at a subscription scope or lower.
role-based-access-control Elevate Access Global Admin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/elevate-access-global-admin.md
As a Global Administrator in Microsoft Entra ID, you might not have access to all subscriptions and management groups in your directory. This article describes the ways that you can elevate your access to all subscriptions and management groups. ## Why would you need to elevate your access?
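Elevation can also be performed programmatically by calling the `elevateAccess` endpoint; a sketch using `az rest` follows (the API version shown is the one commonly documented for this endpoint, so confirm it before use).

```azurecli
# Elevate the signed-in Global Administrator to User Access Administrator
# at root scope (/).
az rest --method post --url "/providers/Microsoft.Authorization/elevateAccess?api-version=2016-07-01"
```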
role-based-access-control Quickstart Role Assignments Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/quickstart-role-assignments-bicep.md
[Azure role-based access control (Azure RBAC)](overview.md) is the way that you manage access to Azure resources. In this quickstart, you create a resource group and grant a user access to create and manage virtual machines in the resource group. This quickstart uses Bicep to grant the access. ## Prerequisites
role-based-access-control Quickstart Role Assignments Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/quickstart-role-assignments-template.md
[Azure role-based access control (Azure RBAC)](overview.md) is the way that you manage access to Azure resources. In this quickstart, you create a resource group and grant a user access to create and manage virtual machines in the resource group. This quickstart uses an Azure Resource Manager template (ARM template) to grant the access. If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
role-based-access-control Role Assignments List Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/role-assignments-list-rest.md
> [!NOTE] > If your organization has outsourced management functions to a service provider who uses [Azure Lighthouse](../lighthouse/overview.md), role assignments authorized by that service provider won't be shown here. Similarly, users in the service provider tenant won't see role assignments for users in a customer's tenant, regardless of the role they've been assigned. ## Prerequisites
role-based-access-control Role Assignments Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/role-assignments-powershell.md
[!INCLUDE [Azure RBAC definition grant access](../../includes/role-based-access-control/definition-grant.md)] This article describes how to assign roles using Azure PowerShell. ## Prerequisites
role-based-access-control Tutorial Custom Role Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/tutorial-custom-role-powershell.md
In this tutorial, you learn how to:
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. ## Prerequisites
role-based-access-control Tutorial Role Assignments Group Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/tutorial-role-assignments-group-powershell.md
In this tutorial, you learn how to:
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. ## Prerequisites
role-based-access-control Tutorial Role Assignments User Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/tutorial-role-assignments-user-powershell.md
In this tutorial, you learn how to:
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. ## Prerequisites
route-server Quickstart Configure Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/quickstart-configure-template.md
This quickstart helps you learn how to use an Azure Resource Manager template (ARM template) to deploy an Azure Route Server into a new or existing virtual network. If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button to open the template in the Azure portal.
sap Quickstart Create High Availability Namecustom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/center-sap-solutions/quickstart-create-high-availability-namecustom.md
After you deploy infrastructure and [install SAP software](install-software.md)
- A single Database VM or a cluster of Database VMs, which makes up a single Database instance in the VIS. - A single Application Server VM, which makes up a single Application instance in the VIS. Depending on the number of Application Servers being deployed or registered, there can be multiple application instances. ## Right-size the SAP system you want to deploy
sap Quickstart Install High Availability Namecustom Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/center-sap-solutions/quickstart-install-high-availability-namecustom-cli.md
After you [deploy infrastructure](deploy-s4hana.md) and install SAP software wit
- For an example, see the Red Hat documentation for [Creating a Microsoft Entra Application](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/deploying_red_hat_enterprise_linux_7_on_public_cloud_platforms/configuring-rhel-high-availability-on-azure_cloud-content#azure-create-an-azure-directory-application-in-ha_configuring-rhel-high-availability-on-azure). - To avoid frequent password expiry, use the Azure Command-Line Interface (Azure CLI) to create the Service Principal identifier and password instead of the Azure portal. ## Create *json* configuration file
sap Tutorial Create High Availability Name Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/center-sap-solutions/tutorial-create-high-availability-name-custom.md
This tutorial shows you how to use Azure CLI to deploy infrastructure for an SAP
- A single Database VM or a cluster of Database VMs, which makes up a single Database instance in the VIS. - A single Application Server VM, which makes up a single Application instance in the VIS. Depending on the number of Application Servers being deployed or registered, there can be multiple application instances. ## Understand the SAP certified Azure SKUs available for your deployment type
sap Hana Connect Vnet Express Route https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/large-instances/hana-connect-vnet-express-route.md
First, create an Azure ExpressRoute gateway on your virtual network. This gatewa
> [!NOTE] > This step can take up to 30 minutes to complete. You create the new gateway in the designated Azure subscription and then connect it to the specified Azure virtual network. - If a gateway already exists, check whether it's an ExpressRoute gateway. If it isn't an ExpressRoute gateway, delete the gateway and recreate it as an ExpressRoute gateway. If an ExpressRoute gateway is already established, skip to the following section of this article, [Link virtual networks](#link-virtual-networks).
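One way to check the existing gateway's type, sketched with placeholder names:

```azurecli
# Inspect the gateway type; the value should be "ExpressRoute", not "Vpn".
az network vnet-gateway show \
    --resource-group rg-demo \
    --name gw-demo \
    --query gatewayType \
    --output tsv
```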
sap High Availability Guide Windows Netapp Files Smb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-windows-netapp-files-smb.md
Perform the following steps, as preparation for using Azure NetApp Files.
When considering Azure NetApp Files for the SAP NetWeaver architecture, be aware of the following important considerations:

-- The minimum capacity pool is 4 TiB. The capacity pool size can be increased in 1 TiB increments.
-- The minimum volume is 100 GiB
+- For sizing requirements of Azure NetApp Files volumes and capacity pools, see [Azure NetApp Files resource limits](../../azure-netapp-files/azure-netapp-files-resource-limits.md) and [Create a capacity pool for Azure NetApp Files](../../azure-netapp-files/azure-netapp-files-set-up-capacity-pool.md).
- The selected virtual network must have a subnet delegated to Azure NetApp Files. - The throughput and performance characteristics of an Azure NetApp Files volume are a function of the volume quota and service level, as documented in [Service levels for Azure NetApp Files](../../azure-netapp-files/azure-netapp-files-service-levels.md). When sizing the SAP Azure NetApp Files volumes, make sure that the resulting throughput meets the application requirements.
sap Sap Ascs Ha Multi Sid Wsfc Shared Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-ascs-ha-multi-sid-wsfc-shared-disk.md
This article focuses on how to move from a single ASCS/SCS installation to an SA
For more information about load-balancer limits, see the "Private front-end IP per load balancer" section in [Networking limits: Azure Resource Manager][networking-limits-azure-resource-manager]. ## Prerequisites
search Cognitive Search Aml Skill https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-aml-skill.md
- ignite-2023 - build-2024 Previously updated : 05/28/2024 Last updated : 06/26/2024 # AML skill in an Azure AI Search enrichment pipeline
-> [!IMPORTANT]
-> This skill is in public preview under [supplemental terms of use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). The [preview REST API](/rest/api/searchservice/index-preview) supports this skill.
+> [!IMPORTANT]
+> This skill is in public preview under [supplemental terms of use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). Preview REST APIs support this skill.
The **AML** skill allows you to extend AI enrichment with a custom [Azure Machine Learning (AML)](../machine-learning/overview-what-is-azure-machine-learning.md) model. Once an AML model is [trained and deployed](../machine-learning/concept-azure-machine-learning-architecture.md#workspace), an **AML** skill integrates it into AI enrichment. Like other built-in skills, an **AML** skill has inputs and outputs. The inputs are sent to your deployed AML online endpoint as a JSON object, which outputs a JSON payload as a response along with a success status code. Your data is processed in the [Geo](https://azure.microsoft.com/explore/global-infrastructure/data-residency/) where your model is deployed. The response is expected to have the outputs specified by your **AML** skill. Any other response is considered an error and no enrichments are performed.
-If you're using the [Azure AI Studio model catalog vectorizer (preview)](vector-search-vectorizer-azure-machine-learning-ai-studio-catalog.md) for integrated vectorization at query time, you should also use the **AML** skill for integrated vectorization during indexing. See [How to implement integrated vectorization using models from Azure AI Studio](vector-search-integrated-vectorization-ai-studio.md) for instructions. This scenario is supported through the 2024-05-01-preview REST API and the Azure portal.
+The **AML** skill is a preview feature, but depending on the endpoint, you can call it in a skillset that targets a stable API version. For example, a skillset that's created using the 2023-11-01 stable API can include an **AML** skill even though it's a preview feature.
+
+Starting in 2024-05-01-preview REST API and in the Azure portal (which also targets the 2024-05-01-preview), Azure AI Search introduced the [Azure AI Studio model catalog vectorizer](vector-search-vectorizer-azure-machine-learning-ai-studio-catalog.md) for query time connections to the model catalog in Azure AI Studio. If you want to use that vectorizer for queries, the **AML** skill is the *indexing counterpart* for generating embeddings using a model in the Azure AI Studio model catalog.
+
+During indexing, the **AML** skill can connect to the model catalog to generate vectors for the index. At query time, queries can use a vectorizer to connect to the same model to vectorize text strings for a vector query. In this workflow, the **AML** skill and the model catalog vectorizer should be used together so that you're using the same embedding model for both indexing and queries. See [How to implement integrated vectorization using models from Azure AI Studio](vector-search-integrated-vectorization-ai-studio.md) for details on this workflow.
> [!NOTE] > The indexer will retry twice for certain standard HTTP status codes returned from the AML online endpoint. These HTTP status codes are:
search Cognitive Search Attach Cognitive Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-attach-cognitive-services.md
Last updated 01/11/2024
When configuring an optional [AI enrichment pipeline](cognitive-search-concept-intro.md) in Azure AI Search, you can enrich a limited number of documents free of charge. For larger and more frequent workloads, you should attach a billable [**Azure AI multi-service resource**](../ai-services/multi-service-resource.md?pivots=azportal).
-A multi-service resource references a set of Azure AI services as the offering, rather than individual services, with access granted through a single API key. This key is specified in a [**skillset**](/rest/api/searchservice/create-skillset) and allows Microsoft to charge you for using these
+A multi-service resource references a set of Azure AI services as the offering, rather than individual services, with access granted through a single API key. This key is specified in a [**skillset**](/rest/api/searchservice/skillsets/create) and allows Microsoft to charge you for using these
+ [Azure AI Vision](../ai-services/computer-vision/overview.md) for image analysis and optical character recognition (OCR) + [Azure AI Language](../ai-services/language-service/overview.md) for language detection, entity recognition, sentiment analysis, and key phrase extraction
If you leave the property unspecified, your search service attempts to use the f
1. Create an [Azure AI multi-service resource](../ai-services/multi-service-resource.md?pivots=azportal) in the [same region](#same-region-requirement) as your search service.
-1. Create or update a skillset, specifying `cognitiveServices` section in the body of the [skillset request](/rest/api/searchservice/create-skillset):
+1. Create or update a skillset, specifying `cognitiveServices` section in the body of the [skillset request](/rest/api/searchservice/skillsets/create):
```http PUT https://[servicename].search.windows.net/skillsets/[skillset name]?api-version=2023-11-01
Putting it all together, you'd pay about $57.00 to ingest 1,000 PDF documents of
+ [Azure AI Search pricing page](https://azure.microsoft.com/pricing/details/search/) + [How to define a skillset](cognitive-search-defining-skillset.md)
-+ [Create Skillset (REST)](/rest/api/searchservice/create-skillset)
++ [Create Skillset (REST)](/rest/api/searchservice/skillsets/create) + [How to map enriched fields](cognitive-search-output-field-mapping.md)
search Cognitive Search Concept Annotations Syntax https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-concept-annotations-syntax.md
If you're having trouble with specifying skill inputs, these tips might help you
+ [Skill context and input annotation language](cognitive-search-skill-annotation-language.md) + [How to integrate a custom skill into an enrichment pipeline](cognitive-search-custom-skill-interface.md) + [How to define a skillset](cognitive-search-defining-skillset.md)
-+ [Create Skillset (REST)](/rest/api/searchservice/create-skillset)
++ [Create Skillset (REST)](/rest/api/searchservice/skillsets/create) + [How to map enriched fields to an index](cognitive-search-output-field-mapping.md)
search Cognitive Search Concept Image Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-concept-image-scenarios.md
When `imageAction` is set to a value other than "none", the new *normalized_imag
This section supplements the [skill reference](cognitive-search-predefined-skills.md) articles by providing context for working with skill inputs, outputs, and patterns, as they relate to image processing.
-1. [Create or update a skillset](/rest/api/searchservice/create-skillset) to add skills.
+1. [Create or update a skillset](/rest/api/searchservice/skillsets/create) to add skills.
1. Add templates for OCR and Image Analysis from the portal, or copy the definitions from the [skill reference](cognitive-search-predefined-skills.md) documentation. Insert them into the skills array of your skillset definition.
search Cognitive Search Create Custom Skill Example https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-create-custom-skill-example.md
Congratulations! You've created your first custom skill. Now you can follow the
+ [Power Skills: a repository of custom skills](https://github.com/Azure-Samples/azure-search-power-skills) + [Add a custom skill to an AI enrichment pipeline](cognitive-search-custom-skill-interface.md) + [How to define a skillset](cognitive-search-defining-skillset.md)
-+ [Create Skillset (REST)](/rest/api/searchservice/create-skillset)
++ [Create Skillset (REST)](/rest/api/searchservice/skillsets/create) + [How to map enriched fields](cognitive-search-output-field-mapping.md)
search Cognitive Search Custom Skill Interface https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-custom-skill-interface.md
This article covered the interface requirements necessary for integrating a cust
+ [Power Skills: a repository of custom skills](https://github.com/Azure-Samples/azure-search-power-skills) + [Example: Creating a custom skill for AI enrichment](cognitive-search-create-custom-skill-example.md) + [How to define a skillset](cognitive-search-defining-skillset.md)
-+ [Create Skillset (REST)](/rest/api/searchservice/create-skillset)
++ [Create Skillset (REST)](/rest/api/searchservice/skillsets/create)
+ [How to map enriched fields](cognitive-search-output-field-mapping.md)
search Cognitive Search Defining Skillset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-defining-skillset.md
Last updated 01/10/2024
A skillset defines operations that generate textual content and structure from documents that contain images or unstructured text. Examples are OCR for images, entity recognition for undifferentiated text, and text translation. A skillset executes after text and images are extracted from an external data source, and after [field mappings](search-indexer-field-mappings.md) are processed.
-This article explains how to create a skillset using [REST APIs](/rest/api/searchservice/create-skillset), but the same concepts and steps apply to other programming languages.
+This article explains how to create a skillset using [REST APIs](/rest/api/searchservice/skillsets/create), but the same concepts and steps apply to other programming languages.
Rules for skillset definition include:
Indexers drive skillset execution. You need an [indexer](search-howto-create-ind
## Add a skillset definition
-Start with the basic structure. In the [Create Skillset REST API](/rest/api/searchservice/create-skillset), the body of the request is authored in JSON and has the following sections:
+Start with the basic structure. In the [Create Skillset REST API](/rest/api/searchservice/skillsets/create), the body of the request is authored in JSON and has the following sections:
```json
{
  "name": "skillset-name",
  "description": "optional description",
  "skills": [],
  "cognitiveServices": {},
  "knowledgeStore": {},
  "encryptionKey": {}
}
```
search Cognitive Search Skill Textsplit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-skill-textsplit.md
Parameters are case-sensitive.
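For orientation, a minimal Split skill definition that uses the parameters described in the following table might look like this sketch (values are illustrative):

```json
{
  "@odata.type": "#Microsoft.Skills.Text.SplitSkill",
  "textSplitMode": "pages",
  "maximumPageLength": 2000,
  "pageOverlapLength": 500,
  "defaultLanguageCode": "en",
  "inputs": [
    { "name": "text", "source": "/document/content" }
  ],
  "outputs": [
    { "name": "textItems", "targetName": "pages" }
  ]
}
```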
| Parameter | Description |
|--|--|
| `textSplitMode` | Either `pages` or `sentences`. Pages have a configurable maximum length, but the skill attempts to avoid truncating a sentence so the actual length might be smaller. Sentences are a string that terminates at sentence-ending punctuation, such as a period, question mark, or exclamation point, assuming the language has sentence-ending punctuation. |
| `maximumPageLength` | Only applies if `textSplitMode` is set to `pages`. This parameter refers to the maximum page length in characters as measured by `String.Length`. The minimum value is 300, the maximum is 50000, and the default value is 5000. The algorithm does its best to break the text on sentence boundaries, so the size of each chunk might be slightly less than `maximumPageLength`. |
-| `pageOverlapLength` | Only applies if `textSplitMode` is set to `pages`. Each page starts with this number of characters from the end of the previous page. If this parameter is set to 0, there's no overlapping text on successive pages. This parameter is supported in [2023-10-01-Preview](/rest/api/searchservice/skillsets/create-or-update?view=rest-searchservice-2023-10-01-preview&tabs=HTTP#splitskill&preserve-view=true) REST API and in Azure SDK beta packages that have been updated to support integrated vectorization. This [example](#example-for-chunking-and-vectorization) includes the parameter. |
-| `maximumPagesToTake` | Only applies if `textSplitMode` is set to `pages`. Number of pages to return. The default is 0, which means to return all pages. You should set this value if only a subset of pages are needed. This parameter is supported in [2023-10-01-Preview](/rest/api/searchservice/skillsets/create-or-update?view=rest-searchservice-2023-10-01-preview&tabs=HTTP#splitskill&preserve-view=true) REST API and in Azure SDK beta packages that have been updated to support integrated vectorization. This [example](#example-for-chunking-and-vectorization) includes the parameter.|
+| `pageOverlapLength` | Only applies if `textSplitMode` is set to `pages`. Each page starts with this number of characters from the end of the previous page. If this parameter is set to 0, there's no overlapping text on successive pages. This parameter is supported in [2023-10-01-Preview](/rest/api/searchservice/skillsets/create-or-update?view=rest-searchservice-2023-10-01-preview&tabs=HTTP#splitskill&preserve-view=true) and newer preview REST APIs, and in Azure SDK beta packages that have been updated to support integrated vectorization. This [example](#example-for-chunking-and-vectorization) includes the parameter. |
+| `maximumPagesToTake` | Only applies if `textSplitMode` is set to `pages`. Number of pages to return. The default is 0, which means to return all pages. You should set this value if only a subset of pages are needed. This parameter is supported in [2023-10-01-Preview](/rest/api/searchservice/skillsets/create-or-update?view=rest-searchservice-2023-10-01-preview&tabs=HTTP#splitskill&preserve-view=true) and newer preview REST APIs, and in Azure SDK beta packages that have been updated to support integrated vectorization. This [example](#example-for-chunking-and-vectorization) includes the parameter.|
| `defaultLanguageCode` | (optional) One of the following language codes: `am, bs, cs, da, de, en, es, et, fr, he, hi, hr, hu, fi, id, is, it, ja, ko, lv, no, nl, pl, pt-PT, pt-BR, ru, sk, sl, sr, sv, tr, ur, zh-Hans`. Default is English (en). A few things to consider: <ul><li>Providing a language code is useful to avoid cutting a word in half for nonwhitespace languages such as Chinese, Japanese, and Korean.</li><li>If you don't know the language in advance (for example, if you're using the [LanguageDetectionSkill](cognitive-search-skill-language-detection.md) to detect language), we recommend the `en` default.</li></ul> |

## Skill Inputs
search Cognitive Search Tutorial Blob https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-tutorial-blob.md
POST {{baseUrl}}/datasources?api-version=2023-11-01 HTTP/1.1
### Step 2: Create a skillset
-Call [Create Skillset](/rest/api/searchservice/create-skillset) to specify which enrichment steps are applied to your content. Skills execute in parallel unless there's a dependency.
+Call [Create Skillset](/rest/api/searchservice/skillsets/create) to specify which enrichment steps are applied to your content. Skills execute in parallel unless there's a dependency.
```http ### Create a skillset
search Index Projections Concept Intro https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/index-projections-concept-intro.md
- ignite-2023 Previously updated : 10/26/2023 Last updated : 06/25/2024
# Index projections in Azure AI Search
> [!Important]
-> Index projections are in public preview under [supplemental terms of use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). It's available through the Azure portal, [2023-10-01-Preview](/rest/api/searchservice/skillsets/create-or-update?view=rest-searchservice-2023-10-01-preview&preserve-view=true) REST APIs, Azure portal, and beta client libraries that have been updated to include the feature.
+> Index projections are in public preview under [supplemental terms of use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). They're available through the Azure portal, preview REST APIs, and beta client libraries that have been updated to include the feature.
*Index projections* are a component of a skillset definition that defines the shape of a secondary index, supporting a one-to-many index pattern, where content from an enrichment pipeline can target multiple indexes.
Because index projections effectively generate "child" documents for each "paren
### [**REST**](#tab/kstore-rest)
-REST API version `2023-10-01-Preview` can be used to create index projections through additions to a skillset.
+You can use `2023-10-01-Preview` or newer preview REST APIs to create index projections through additions to a skillset. We recommend the latest preview API.
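In a skillset, the addition is an `indexProjections` section; a minimal sketch (index, field, and mapping names are placeholders):

```json
"indexProjections": {
  "selectors": [
    {
      "targetIndexName": "my-chunk-index",
      "parentKeyFieldName": "parent_id",
      "sourceContext": "/document/pages/*",
      "mappings": [
        { "name": "chunk", "source": "/document/pages/*" }
      ]
    }
  ],
  "parameters": {
    "projectionMode": "skipIndexingParentDocuments"
  }
}
```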
-+ [Create Skillset (api-version=2023-10-01-Preview)](/rest/api/searchservice/skillsets/create?view=rest-searchservice-2023-10-01-preview&preserve-view=true)
-+ [Create or Update Skillset (api-version=2023-10-01-Preview)](/rest/api/searchservice/skillsets/create-or-update?view=rest-searchservice-2023-10-01-preview&preserve-view=true)
++ [Create Skillset (api-version=2024-05-01-preview)](/rest/api/searchservice/skillsets/create?view=rest-searchservice-2024-05-01-preview&preserve-view=true)
++ [Create or Update Skillset (api-version=2024-05-01-preview)](/rest/api/searchservice/skillsets/create-or-update?view=rest-searchservice-2024-05-01-preview&preserve-view=true)

### [**.NET**](#tab/kstore-csharp)
search Keyless Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/keyless-connections.md
+
+ Title: Use keyless connections with Azure AI Search
+description: Use keyless connections with an Azure Identity library for authentication and authorization with Azure AI Search.
+ Last updated : 06/05/2024
+#customer intent: As a developer, I want to use keyless connections so that I don't leak secrets.
++
+# Use Azure AI Search without keys
+
+In your application code, you can set up a keyless connection to Azure AI Search that uses Microsoft Entra ID and roles for authentication and authorization. Application requests to most Azure services must be authenticated with keys or keyless connections. Developers must be diligent to never expose a key in an insecure location. Anyone who gains access to the key can authenticate to the service. Keyless authentication offers improved management and security benefits over the account key because there's no key (or connection string) to store.
+
+Keyless connections are enabled with the following steps:
+
+* Configure your authentication.
+* Set environment variables, as needed.
+* Use an Azure Identity library credential type to create an Azure AI Search client object.
+
+## Prerequisites
+
+The following steps need to be completed for both local development and production workloads:
+
+* [Create an AI Search resource](#create-an-ai-search-resource)
+* [Enable role-based access on your search service](search-security-enable-roles.md)
+* [Install Azure Identity client library](#install-azure-identity-client-library)
+
+### Create an AI Search resource
+
+Before continuing with this article, you need an Azure AI Search resource to work with. If you don't have a resource, [create your resource](search-create-service-portal.md) now. [Enable role-based access control (RBAC)](search-security-enable-roles.md) for the resource.
+
+### Install Azure Identity client library
+
+Before working locally without keys, update your AI Search-enabled code with the Azure Identity client library.
+
+#### [.NET](#tab/csharp)
+
+Install the [Azure Identity client library for .NET](https://www.nuget.org/packages/Azure.Identity):
+
+```dotnetcli
+dotnet add package Azure.Identity
+```
+
+#### [Java](#tab/java)
+
+Install the [Azure Identity client library for Java](https://mvnrepository.com/artifact/com.azure/azure-identity) with the following POM file:
+
+```xml
+<dependencyManagement>
+ <dependencies>
+ <dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-identity</artifactId>
+ <version>1.10.0</version>
+ <type>pom</type>
+ <scope>import</scope>
+ </dependency>
+ </dependencies>
+</dependencyManagement>
+```
+
+#### [JavaScript](#tab/javascript)
+
+Install the [Azure Identity client library for JavaScript](https://www.npmjs.com/package/@azure/identity):
+
+```console
+npm install --save @azure/identity
+```
+
+#### [Python](#tab/python)
+
+Install the [Azure Identity client library for Python](https://pypi.org/project/azure-identity/):
+
+```console
+pip install azure-identity
+```
+++
+## Update source code to use DefaultAzureCredential
+
+The Azure Identity library's `DefaultAzureCredential` allows you to run the same code in the local development environment and in the Azure cloud. Create a single credential and reuse the credential instance as needed to take advantage of token caching.
+
+#### [.NET](#tab/csharp)
+
+For more information on `DefaultAzureCredential` for .NET, see [Azure Identity client library for .NET](/dotnet/api/overview/azure/identity-readme#defaultazurecredential).
+
+```csharp
+using Azure;
+using Azure.Search.Documents;
+using Azure.Search.Documents.Indexes;
+using Azure.Search.Documents.Indexes.Models;
+using Azure.Search.Documents.Models;
+using Azure.Identity;
+using System;
+using static System.Environment;
+
+string endpoint = GetEnvironmentVariable("AZURE_SEARCH_ENDPOINT");
+string indexName = "my-search-index";
+
+DefaultAzureCredential credential = new();
+SearchClient searchClient = new(new Uri(endpoint), indexName, credential);
+SearchIndexClient searchIndexClient = new(new Uri(endpoint), credential);
+```
+
+#### [Java](#tab/java)
+
+For more information on `DefaultAzureCredential` for Java, see [Azure Identity client library for Java](/java/api/overview/azure/identity-readme#defaultazurecredential).
+
+```java
+import com.azure.identity.DefaultAzureCredential;
+import com.azure.identity.DefaultAzureCredentialBuilder;
+import com.azure.search.documents.SearchAsyncClient;
+import com.azure.search.documents.SearchClient;
+import com.azure.search.documents.SearchClientBuilder;
+import com.azure.search.documents.indexes.SearchIndexAsyncClient;
+import com.azure.search.documents.indexes.SearchIndexClient;
+import com.azure.search.documents.indexes.SearchIndexClientBuilder;
+
+String ENDPOINT = System.getenv("AZURE_SEARCH_ENDPOINT");
+String INDEX_NAME = "my-index";
+
+DefaultAzureCredential credential = new DefaultAzureCredentialBuilder().build();
+
+// Sync SearchClient
+SearchClient searchClient = new SearchClientBuilder()
+ .endpoint(ENDPOINT)
+ .credential(credential)
+ .indexName(INDEX_NAME)
+ .buildClient();
+
+// Sync IndexClient
+SearchIndexClient searchIndexClient = new SearchIndexClientBuilder()
+ .endpoint(ENDPOINT)
+ .credential(credential)
+ .buildClient();
+
+// Async SearchClient
+SearchAsyncClient searchAsyncClient = new SearchClientBuilder()
+ .endpoint(ENDPOINT)
+ .credential(credential)
+ .indexName(INDEX_NAME)
+ .buildAsyncClient();
+
+// Async IndexClient
+SearchIndexAsyncClient searchIndexAsyncClient = new SearchIndexClientBuilder()
+ .endpoint(ENDPOINT)
+ .credential(credential)
+ .buildAsyncClient();
+```
+
+#### [JavaScript](#tab/javascript)
+
+For more information on `DefaultAzureCredential` for JavaScript, see [Azure Identity client library for JavaScript](/javascript/api/overview/azure/identity-readme#defaultazurecredential).
++
+```javascript
+import { DefaultAzureCredential } from "@azure/identity";
+import {
+ SearchClient,
+ SearchIndexClient
+} from "@azure/search-documents";
+
+const AZURE_SEARCH_ENDPOINT = process.env.AZURE_SEARCH_ENDPOINT;
+const index = "my-index";
+const credential = new DefaultAzureCredential();
+
+// To query and manipulate documents
+const searchClient = new SearchClient(
+ AZURE_SEARCH_ENDPOINT,
+ index,
+ credential
+);
+
+// To manage indexes and synonymmaps
+const indexClient = new SearchIndexClient(
+ AZURE_SEARCH_ENDPOINT,
+ credential
+);
+```
+
+#### [Python](#tab/python)
+
+For more information on `DefaultAzureCredential` for Python, see [Azure Identity client library for Python](/python/api/overview/azure/identity-readme#defaultazurecredential).
+
+```python
+import os
+from azure.search.documents import SearchClient
+from azure.search.documents.indexes import SearchIndexClient
+from azure.identity import DefaultAzureCredential, AzureAuthorityHosts
+
+# Azure Public Cloud
+audience = "https://search.windows.net"
+authority = AzureAuthorityHosts.AZURE_PUBLIC_CLOUD
+
+service_endpoint = os.environ["AZURE_SEARCH_ENDPOINT"]
+index_name = os.environ["AZURE_SEARCH_INDEX_NAME"]
+credential = DefaultAzureCredential(authority=authority)
+
+search_client = SearchClient(
+ endpoint=service_endpoint,
+    index_name=index_name,
+ credential=credential,
+ audience=audience)
+
+search_index_client = SearchIndexClient(
+ endpoint=service_endpoint,
+ credential=credential,
+ audience=audience)
+```
++++
+## Local development
+
+Local development without keys includes these steps:
+
+- Assign your personal identity with RBAC roles on the specific resource.
+- Use a tool to authenticate with Azure.
+- Establish environment variables for your resource.
+
+### Roles for local development
+
+As a local developer, your Azure identity needs full control of your service. This control is provided with RBAC roles. To manage your resource during development, these are the suggested roles:
+
+- Search Service Contributor
+- Search Index Data Contributor
+- Search Index Data Reader
+
+Find your personal identity with one of the following tools. Use that identity as the `<identity-id>` value.
+
+#### [Azure CLI](#tab/azure-cli)
+
+1. Sign in to Azure CLI.
+
+ ```azurecli
+ az login
+ ```
+
+2. Get your personal identity.
+
+ ```azurecli
+ az ad signed-in-user show \
+ --query id -o tsv
+ ```
+
+3. Assign the role-based access control (RBAC) role to the identity for the resource group.
+
+ ```azurecli
+ az role assignment create \
+ --role "<role-name>" \
+ --assignee "<identity-id>" \
+ --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>"
+ ```
+
+#### [Azure PowerShell](#tab/azure-powershell)
+
+1. Sign in with PowerShell.
+
+ ```azurepowershell
+ Connect-AzAccount
+ ```
+
+2. Get your personal identity.
+
+ ```azurepowershell
+ (Get-AzContext).Account.ExtendedProperties.HomeAccountId.Split('.')[0]
+ ```
+
+3. Assign the role-based access control (RBAC) role to the identity for the resource group.
+
+ ```azurepowershell
+ New-AzRoleAssignment -ObjectId "<identity-id>" -RoleDefinitionName "<role-name>" -Scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>"
+ ```
+
+#### [Azure portal](#tab/portal)
+
+1. Follow the steps in [Find the user object ID](/partner-center/find-ids-and-domain-names#find-the-user-object-id) in the Azure portal.
+
+2. Follow the steps in [Open the Add role assignment page](search-security-rbac.md) in the Azure portal.
+
+
+
+Where applicable, replace `<identity-id>`, `<subscription-id>`, and `<resource-group-name>` with your actual values.
++
+### Authentication for local development
+
+Use a tool in your local development environment to authenticate to Azure. Once you're authenticated, the `DefaultAzureCredential` instance in your source code finds and uses that authentication.
+
+#### [.NET](#tab/csharp)
+
+Select a tool for [authentication during local development](/dotnet/api/overview/azure/identity-readme#authenticate-the-client).
+
+#### [Java](#tab/java)
+
+Select a tool for [authentication during local development](/java/api/overview/azure/identity-readme#authenticate-the-client).
+
+#### [JavaScript](#tab/javascript)
+
+Select a tool for [authentication during local development](/javascript/api/overview/azure/identity-readme#authenticate-the-client-in-development-environment).
+
+#### [Python](#tab/python)
+
+Select a tool for [authentication during local development](/python/api/overview/azure/identity-readme#authenticate-during-local-development).
+++
+### Configure environment variables for local development
+
+To connect to Azure AI Search, your code needs to know your resource endpoint.
+
+Create an environment variable named `AZURE_SEARCH_ENDPOINT` for your Azure AI Search endpoint. This URL generally has the format `https://<YOUR-RESOURCE-NAME>.search.windows.net/`.
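+For example, in a bash shell (on Windows, use `setx` or system settings; the endpoint value is a placeholder):
+
+```console
+export AZURE_SEARCH_ENDPOINT="https://<YOUR-RESOURCE-NAME>.search.windows.net/"
+```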
+
+## Production workloads
+
+Deploying production workloads includes these steps:
+
+- Choose RBAC roles that adhere to the principle of least privilege.
+- Assign RBAC roles to your production identity on the specific resource.
+- Set up environment variables for your resource.
+
+### Roles for production workloads
+
+To create your production resources, you need to create a user-assigned [managed identity](/entra/identity/managed-identities-azure-resources/how-manage-user-assigned-managed-identities?pivots=identity-mi-methods-azp#create-a-user-assigned-managed-identity) and then assign that identity to your resources with the correct roles.
+
+The following role is suggested for a production application:
+
+|Role name|Id|
+|--|--|
+|Search Index Data Reader|1407120a-92aa-4202-b7e9-c0e197c71c8f|
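+
+For example, the following Azure CLI sketch assigns this role to the managed identity (all values are placeholders):
+
+```azurecli
+az role assignment create \
+    --role "1407120a-92aa-4202-b7e9-c0e197c71c8f" \
+    --assignee "<managed-identity-principal-id>" \
+    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>"
+```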
+
+### Authentication for production workloads
+
+Use the following Azure AI Search **Bicep template** to create the resource and set the authentication for the `identityId`. Bicep requires the role ID. The `name` shown in this Bicep snippet isn't the Azure role; it's specific to the Bicep deployment.
+
+```bicep
+// main.bicep
+// Assumes principalId, userAssignedManagedIdentity, and aiSearchResourceGroup
+// are defined elsewhere in this deployment.
+param environment string = 'production'
+param roleGuid string = ''
+
+module aiSearchRoleUser 'core/security/role.bicep' = {
+ scope: aiSearchResourceGroup
+ name: 'aiSearch-role-user'
+ params: {
+ principalId: (environment == 'development') ? principalId : userAssignedManagedIdentity.properties.principalId
+ principalType: (environment == 'development') ? 'User' : 'ServicePrincipal'
+ roleDefinitionId: roleGuid
+ }
+}
+```
+
+The `main.bicep` file calls the following generic Bicep code to create any role assignment. You can create multiple role assignments, such as one for a development user and another for the production identity. This allows you to enable both development and production environments within the same Bicep deployment.
+
+```bicep
+// core/security/role.bicep
+metadata description = 'Creates a role assignment for an identity.'
+param principalId string // passed in from main.bicep
+
+@allowed([
+ 'Device'
+ 'ForeignGroup'
+ 'Group'
+ 'ServicePrincipal'
+ 'User'
+])
+param principalType string = 'ServicePrincipal'
+param roleDefinitionId string // Role ID
+
+resource role 'Microsoft.Authorization/roleAssignments@2022-04-01' = {
+ name: guid(subscription().id, resourceGroup().id, principalId, roleDefinitionId)
+ properties: {
+ principalId: principalId
+ principalType: principalType
+ roleDefinitionId: resourceId('Microsoft.Authorization/roleDefinitions', roleDefinitionId)
+ }
+}
+```
+
+### Configure environment variables for production workloads
+
+To connect to Azure AI Search, your code needs to know your resource endpoint and the ID of the managed identity.
+
+Create environment variables for your deployed and keyless Azure AI Search resource:
+
+- `AZURE_SEARCH_ENDPOINT`: This URL is the access point for your Azure AI Search resource. This URL generally has the format `https://<YOUR-RESOURCE-NAME>.search.windows.net/`.
+- `AZURE_CLIENT_ID`: The client ID of the managed identity to authenticate as.
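+
+For example, in a bash shell (values are placeholders):
+
+```console
+export AZURE_SEARCH_ENDPOINT="https://<YOUR-RESOURCE-NAME>.search.windows.net/"
+export AZURE_CLIENT_ID="<YOUR-MANAGED-IDENTITY-CLIENT-ID>"
+```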
+
+## Related content
+
+* [Keyless connections developer guide](/azure/developer/intro/passwordless-overview)
+* [Azure built-in roles](/azure/role-based-access-control/built-in-roles)
search Knowledge Store Projection Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/knowledge-store-projection-overview.md
A knowledge store is a logical construction that's physically expressed as a loo
## Projection definition
-Projections are specified under the "knowledgeStore" property of a [skillset](/rest/api/searchservice/create-skillset). Projection definitions are used during indexer invocation to create and load objects in Azure Storage with enriched content. If you're unfamiliar with these concepts, start with [AI enrichment](cognitive-search-concept-intro.md) for an introduction.
+Projections are specified under the "knowledgeStore" property of a [skillset](/rest/api/searchservice/skillsets/create). Projection definitions are used during indexer invocation to create and load objects in Azure Storage with enriched content. If you're unfamiliar with these concepts, start with [AI enrichment](cognitive-search-concept-intro.md) for an introduction.
The following example illustrates the placement of projections under knowledgeStore, and the basic construction. The name, type, and content source make up a projection definition.
search Search Api Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-api-migration.md
Use this article to migrate data plane calls to newer versions of the [**Search
+ [`2023-11-01`](/rest/api/searchservice/search-service-api-versions#2023-11-01) is the most recent stable version.
-+ [`2024-05-01-preview`](/rest/api/searchservice/search-service-api-versions#2023-10-01-preview) is the most recent preview API version.
++ [`2024-05-01-preview`](/rest/api/searchservice/search-service-api-versions#2024-05-01-preview) is the most recent preview API version.

Upgrade instructions focus on code changes that get you through breaking changes from previous versions so that existing code runs the same as before, but on the newer API version. Once your code is in working order, you can decide whether to adopt newer features. To learn more about preview features, see [vector code samples](https://github.com/Azure/azure-search-vector-samples) and [What's New](whats-new.md).
search Search Api Preview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-api-preview.md
Last updated 06/25/2024
This article identifies all data plane and control plane features in public preview. This list is helpful for checking feature status. It also provides usage guidance and reminders to always upgrade to newer preview API versions as they roll out.
+Preview API versions are cumulative and roll up to newer Preview versions. We recommend always using the latest preview APIs for full access to all preview features.
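+For example, targeting the newest preview is a matter of setting the `api-version` query parameter (service name and key are placeholders):
+
+```http
+GET https://[service-name].search.windows.net/indexes?api-version=2024-05-01-preview
+api-key: [admin-key]
+```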
+ Preview features are removed from this list if they're retired or transition to general availability. For announcements regarding general availability and retirement, see [Service Updates](https://azure.microsoft.com/updates/?product=search) or [What's New](whats-new.md).

## Data plane preview features

|Feature&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | Category | Description | Availability |
|---|---|---|---|
-| [**Scalar quantization**](vector-search-how-to-configure-compression-storage.md#option-1-configure-scalar-quantization) | Index | Compress vector index size in memory and on disk using built-in scalar quantization. | [Create or Update Index (preview)](/rest/api/searchservice/indexes/create-or-update?view=rest-searchservice-2024-03-01-preview&preserve-view=true) to add a `compressions` section to a vector profile. |
-| [**Narrow data types**](vector-search-how-to-configure-compression-storage.md#option-2-assign-narrow-data-types-to-vector-fields) | Index | Assign a smaller data type on vector fields, assuming incoming data is of that data type. | [Create or Update Index (preview)](/rest/api/searchservice/indexes/create-or-update?view=rest-searchservice-2024-03-01-preview&preserve-view=true) to specify a vector field definition. [Binary vector support](vector-search-how-to-index-binary-data.md) is added in 2024-05-01-preview.|
-| [**stored property**](vector-search-how-to-configure-compression-storage.md#option-3-set-the-stored-property-to-remove-retrievable-storage) | Index | Boolean that reduces storage of vector indexes by *not* storing retrievable vectors. | [Create or Update Index (preview)](/rest/api/searchservice/indexes/create-or-update?view=rest-searchservice-2024-03-01-preview&preserve-view=true) to set `stored` on a vector field. |
-| [**Vectorizers**](vector-search-integrated-vectorization.md) | Queries | Text-to-vector conversion during query execution. | [Create or Update Index (preview)](/rest/api/searchservice/indexes/create-or-update?view=rest-searchservice-2023-10-01-preview&preserve-view=true) to define a `vectorizer`. [Search POST (preview)](/rest/api/searchservice/documents/search-post?view=rest-searchservice-2023-10-01-preview&preserve-view=true) for `vectorQueries`, 2023-10-01-Preview or later. |
-| [**Integrated vectorization**](vector-search-integrated-vectorization.md) | Index, skillset | Skills-driven data chunking and embedding during indexing. | [Create or Update Skillset (preview)](/rest/api/searchservice/skillsets/create-or-update?view=rest-searchservice-2023-10-01-preview&preserve-view=true) for AzureOpenAIEmbedding skill and the data chunking properties of the Text Split skill. |
+| [**Scalar quantization**](vector-search-how-to-configure-compression-storage.md#option-1-configure-scalar-quantization) | Index | Compress vector index size in memory and on disk using built-in scalar quantization. | [Create or Update Index (preview)](/rest/api/searchservice/indexes/create-or-update?view=rest-searchservice-2024-05-01-preview&preserve-view=true) to add a `compressions` section to a vector profile. |
+| [**Narrow data types**](vector-search-how-to-configure-compression-storage.md#option-2-assign-narrow-data-types-to-vector-fields) | Index | Assign a smaller data type on vector fields, assuming incoming data is of that data type. | [Create or Update Index (preview)](/rest/api/searchservice/indexes/create-or-update?view=rest-searchservice-2024-05-01-preview&preserve-view=true) to specify a vector field definition. [Binary vector support](vector-search-how-to-index-binary-data.md) is added in 2024-05-01-preview.|
+| [**stored property**](vector-search-how-to-configure-compression-storage.md#option-3-set-the-stored-property-to-remove-retrievable-storage) | Index | Boolean that reduces storage of vector indexes by *not* storing retrievable vectors. | [Create or Update Index (preview)](/rest/api/searchservice/indexes/create-or-update?view=rest-searchservice-2024-05-01-preview&preserve-view=true) to set `stored` on a vector field. |
+| [**Vectorizers**](vector-search-integrated-vectorization.md) | Queries | Text-to-vector conversion during query execution. | [Create or Update Index (preview)](/rest/api/searchservice/indexes/create-or-update?view=rest-searchservice-2024-05-01-preview&preserve-view=true) to define a `vectorizer`. [Search POST (preview)](/rest/api/searchservice/documents/search-post?view=rest-searchservice-2024-05-01-preview&preserve-view=true) for `vectorQueries`. Vectorizers should be paired with an equivalent skill that supports integrated vectorization during indexing. Skills used for embeddings during indexing include AzureOpenAIEmbedding, Azure AI Vision multimodal, and AML (for models in the Azure AI Studio model catalog). There are vectorizers that correspond to each of these embedding skills. Always use the same embedding model for both queries and indexing. |
+| [**Integrated vectorization**](vector-search-integrated-vectorization.md) | Index, skillset | Skills-driven data chunking and embedding during indexing. | [Create or Update Skillset (preview)](/rest/api/searchservice/skillsets/create-or-update?view=rest-searchservice-2024-05-01-preview&preserve-view=true) for AzureOpenAIEmbedding skill and the data chunking properties of the Text Split skill. |
| [**Import and vectorize data**](search-get-started-portal-import-vectors.md) | Azure portal | A wizard that creates a full indexing pipeline that includes data chunking and vectorization. The wizard creates all of the objects and configuration settings. | Available on all search services, in all regions. |
-| [**AzureOpenAIEmbedding skill**](cognitive-search-skill-azure-openai-embedding.md) | Applied AI (skills) | A new skill type that calls Azure OpenAI embedding model to generate embeddings during queries and indexing. | [Create or Update Skillset (preview)](/rest/api/searchservice/preview-api/create-or-update-skillset), 2023-10-01-Preview or later. Also available in the portal through the [Import and vectorize data wizard](search-get-started-portal-import-vectors.md). |
-| [**Azure AI Vision multimodal embedding skill**](cognitive-search-skill-vision-vectorize.md) | Applied AI (skills) | A new skill type that calls Azure AI Vision multimodal API to generate embeddings for text or images during indexing. | [Create or Update Skillset (preview)](/rest/api/searchservice/skillsets/create-or-update?view=rest-searchservice-2024-05-01-preview&preserve-view=true), 2024-05-01-Preview or later. |
-| [**Text Split skill**](cognitive-search-skill-textsplit.md) | Applied AI (skills) | Text Split has two new chunking-related properties in preview: `maximumPagesToTake`, `pageOverlapLength`. | [Create or Update Skillset (preview)](/rest/api/searchservice/preview-api/create-or-update-skillset), 2023-10-01-Preview or later. Also available in the portal through the [Import and vectorize data wizard](search-get-started-portal-import-vectors.md). |
-| [**Azure Machine Learning (AML) skill**](cognitive-search-aml-skill.md) | Applied AI (skills) | A new skill type to integrate an inferencing endpoint from Azure Machine Learning. | [Create or Update Skillset (preview)](/rest/api/searchservice/preview-api/create-or-update-skillset), 2019-05-06-preview or later. Using 2024-05-01-preview, you can use this skill to connect to a model in the Azure AI Studio model catalog. It's also available in the portal, in skillset design, assuming Azure AI Search and Azure Machine Learning services are deployed in the same subscription. |
-| [**Incremental enrichment**](cognitive-search-incremental-indexing-conceptual.md) | Applied AI (skills) | Adds caching to an enrichment pipeline, allowing you to reuse existing output if a targeted modification, such as an update to a skillset or another object, doesn't change the content. Caching applies only to enriched documents produced by a skillset.| [Create or Update Indexer (preview)](/rest/api/searchservice/preview-api/create-or-update-indexer), API versions 2021-04-30-Preview, 2020-06-30-Preview, or 2019-05-06-Preview. |
-| [**Index projections**](index-projections-concept-intro.md) | Applied AI (skills) | A component of a skillset definition that defines the shape of a secondary index, supporting a one-to-many index pattern, where content from an enrichment pipeline can target multiple indexes.| [Create or Update Skillset (preview)](/rest/api/searchservice/preview-api/create-or-update-skillset), 2023-10-01-Preview or later. Also available in the portal through the [Import and vectorize data wizard](search-get-started-portal-import-vectors.md). |
-| [**OneLake files indexer**](search-how-to-index-onelake-files.md) | Indexer data source | New data source for extracting searchable data and metadata data from a [lakehouse](/fabric/onelake/create-lakehouse-onelake) on top of [OneLake](/fabric/onelake/onelake-overview) | [Create or Update Data Source (preview)](/rest/api/searchservice/data-sources/create-or-update?view=rest-searchservice-2024-05-01-preview&preserve-view=true), 2024-05-01-preview or later. |
-| [**Azure Files indexer**](search-file-storage-integration.md) | Indexer data source | New data source for indexer-based indexing from [Azure Files](https://azure.microsoft.com/services/storage/files/) | [Create or Update Data Source (preview)](/rest/api/searchservice/preview-api/create-or-update-data-source), 2021-04-30-Preview or later. |
-| [**SharePoint Online indexer**](search-howto-index-sharepoint-online.md) | Indexer data source | New data source for indexer-based indexing of SharePoint content. | [Sign up](https://aka.ms/azure-cognitive-search/indexer-preview) to enable the feature. Use [Create or Update Data Source (preview)](/rest/api/searchservice/preview-api/create-or-update-data-source), 2020-06-30-Preview or later, or the Azure portal. |
-| [**MySQL indexer**](search-howto-index-mysql.md) | Indexer data source | New data source for indexer-based indexing of Azure MySQL data sources.| [Sign up](https://aka.ms/azure-cognitive-search/indexer-preview) to enable the feature. Use [Create or Update Data Source (preview)](/rest/api/searchservice/preview-api/create-or-update-data-source), 2020-06-30-Preview or later, [.NET SDK 11.2.1](/dotnet/api/azure.search.documents.indexes.models.searchindexerdatasourcetype.mysql), and Azure portal. |
-| [**Azure Cosmos DB for MongoDB indexer**](search-howto-index-cosmosdb.md) | Indexer data source | New data source for indexer-based indexing through the MongoDB APIs in Azure Cosmos DB. | [Sign up](https://aka.ms/azure-cognitive-search/indexer-preview) to enable the feature. Use [Create or Update Data Source (preview)](/rest/api/searchservice/preview-api/create-or-update-data-source), 2020-06-30-Preview or later, or the Azure portal.|
-| [**Azure Cosmos DB for Apache Gremlin indexer**](search-howto-index-cosmosdb.md) | Indexer data source | New data source for indexer-based indexing through the Apache Gremlin APIs in Azure Cosmos DB. | [Sign up](https://aka.ms/azure-cognitive-search/indexer-preview) to enable the feature. Use [Create or Update Data Source (preview)](/rest/api/searchservice/preview-api/create-or-update-data-source), 2020-06-30-Preview or later.|
-| [**Native blob soft delete**](search-howto-index-changed-deleted-blobs.md) | Indexer data source | Applies to the Azure Blob Storage indexer. Recognizes blobs that are in a soft-deleted state, and removes the corresponding search document during indexing. | [Create or Update Data Source (preview)](/rest/api/searchservice/preview-api/create-or-update-data-source), 2020-06-30-Preview or later. |
-| [**Reset Documents**](search-howto-run-reset-indexers.md) | Indexer | Reprocesses individually selected search documents in indexer workloads. | [Reset Documents (preview)](/rest/api/searchservice/preview-api/reset-documents), 2020-06-30-Preview or later. |
-| [**speller**](speller-how-to-add.md) | Query | Optional spelling correction on query term inputs for simple, full, and semantic queries. | [Search Documents (preview)](/rest/api/searchservice/preview-api/search-documents), 2020-06-30-Preview or later, and Search Explorer (portal). |
-| [**Normalizers**](search-normalizers.md) | Query | Normalizers provide simple text preprocessing: consistent casing, accent removal, and ASCII folding, without invoking the full text analysis chain.| [Search Documents (preview)](/rest/api/searchservice/preview-api/search-documents), 2020-06-30-Preview or later.|
-| [**featuresMode parameter**](/rest/api/searchservice/preview-api/search-documents#query-parameters) | Relevance (scoring) | Relevance score expansion to include details: per field similarity score, per field term frequency, and per field number of unique tokens matched. You can consume these data points in [custom scoring solutions](https://github.com/Azure-Samples/search-ranking-tutorial). | [Search Documents (preview)](/rest/api/searchservice/preview-api/search-documents), 2019-05-06-Preview or later.|
-| [**moreLikeThis**](search-more-like-this.md) | Query | Finds documents that are relevant to a specific document. This feature has been in earlier previews. | [Search Documents (preview)](/rest/api/searchservice/preview-api/search-documents) calls, in all supported API versions: 2023-10-10-Preview, 2023-07-01-Preview, 2021-04-30-Preview, 2020-06-30-Preview, 2019-05-06-Preview, 2016-09-01-Preview, 2017-11-11-Preview. |
+| [**AzureOpenAIEmbedding skill**](cognitive-search-skill-azure-openai-embedding.md) | Applied AI (skills) | A new skill type that calls Azure OpenAI embedding model to generate embeddings during queries and indexing. | [Create or Update Skillset (preview)](/rest/api/searchservice/skillsets/create-or-update?view=rest-searchservice-2024-05-01-preview&preserve-view=true). Also available in the portal through the [Import and vectorize data wizard](search-get-started-portal-import-vectors.md). |
+| [**Azure AI Vision multimodal embedding skill**](cognitive-search-skill-vision-vectorize.md) | Applied AI (skills) | A new skill type that calls Azure AI Vision multimodal API to generate embeddings for text or images during indexing. | [Create or Update Skillset (preview)](/rest/api/searchservice/skillsets/create-or-update?view=rest-searchservice-2024-05-01-preview&preserve-view=true). |
+| [**Text Split skill**](cognitive-search-skill-textsplit.md) | Applied AI (skills) | Text Split has two new chunking-related properties in preview: `maximumPagesToTake`, `pageOverlapLength`. | [Create or Update Skillset (preview)](/rest/api/searchservice/skillsets/create-or-update?view=rest-searchservice-2024-05-01-preview&preserve-view=true) adds support for the preview properties. These properties are also used in the portal through the [Import and vectorize data wizard](search-get-started-portal-import-vectors.md). |
+| [**Azure Machine Learning (AML) skill**](cognitive-search-aml-skill.md) | Applied AI (skills) | AML skill integrates an inferencing endpoint from Azure Machine Learning. | [Create or Update Skillset (preview)](/rest/api/searchservice/skillsets/create-or-update?view=rest-searchservice-2024-05-01-preview&preserve-view=true). In previous preview APIs, it supports connections to deployed custom models in an AML workspace. Starting in the 2024-05-01-preview, you can use this skill in workflows that connect to embedding models in the Azure AI Studio model catalog. It's also available in the portal, in skillset design, assuming Azure AI Search and Azure Machine Learning services are deployed in the same subscription. |
+| [**Incremental enrichment**](cognitive-search-incremental-indexing-conceptual.md) | Applied AI (skills) | Adds caching to an enrichment pipeline, allowing you to reuse existing output if a targeted modification, such as an update to a skillset or another object, doesn't change the content. Caching applies only to enriched documents produced by a skillset.| [Create or Update Indexer (preview)](/rest/api/searchservice/indexers/create-or-update?view=rest-searchservice-2024-05-01-preview&preserve-view=true). |
+| [**Index projections**](index-projections-concept-intro.md) | Applied AI (skills) | A component of a skillset definition that defines the shape of a secondary index, supporting a one-to-many index pattern, where content from an enrichment pipeline can target multiple indexes.| [Create or Update Skillset (preview)](/rest/api/searchservice/skillsets/create-or-update?view=rest-searchservice-2024-05-01-preview&preserve-view=true). Also available in the portal through the [Import and vectorize data wizard](search-get-started-portal-import-vectors.md). |
+| [**OneLake files indexer**](search-how-to-index-onelake-files.md) | Indexer data source | New data source for extracting searchable data and metadata data from a [lakehouse](/fabric/onelake/create-lakehouse-onelake) on top of [OneLake](/fabric/onelake/onelake-overview) | [Create or Update Data Source (preview)](/rest/api/searchservice/data-sources/create-or-update?view=rest-searchservice-2024-05-01-preview&preserve-view=true). |
+| [**Azure Files indexer**](search-file-storage-integration.md) | Indexer data source | New data source for indexer-based indexing from [Azure Files](https://azure.microsoft.com/services/storage/files/) | [Create or Update Data Source (preview)](/rest/api/searchservice/data-sources/create-or-update?view=rest-searchservice-2024-05-01-preview&preserve-view=true). |
+| [**SharePoint Online indexer**](search-howto-index-sharepoint-online.md) | Indexer data source | New data source for indexer-based indexing of SharePoint content. | [Sign up](https://aka.ms/azure-cognitive-search/indexer-preview) to enable the feature. [Create or Update Data Source (preview)](/rest/api/searchservice/data-sources/create-or-update?view=rest-searchservice-2024-05-01-preview&preserve-view=true) or the Azure portal. |
+| [**MySQL indexer**](search-howto-index-mysql.md) | Indexer data source | New data source for indexer-based indexing of Azure MySQL data sources.| [Sign up](https://aka.ms/azure-cognitive-search/indexer-preview) to enable the feature. [Create or Update Data Source (preview)](/rest/api/searchservice/data-sources/create-or-update?view=rest-searchservice-2024-05-01-preview&preserve-view=true), [.NET SDK 11.2.1](/dotnet/api/azure.search.documents.indexes.models.searchindexerdatasourcetype.mysql), and Azure portal. |
+| [**Azure Cosmos DB for MongoDB indexer**](search-howto-index-cosmosdb.md) | Indexer data source | New data source for indexer-based indexing through the MongoDB APIs in Azure Cosmos DB. | [Sign up](https://aka.ms/azure-cognitive-search/indexer-preview) to enable the feature. [Create or Update Data Source (preview)](/rest/api/searchservice/data-sources/create-or-update?view=rest-searchservice-2024-05-01-preview&preserve-view=true) or the Azure portal. |
+| [**Azure Cosmos DB for Apache Gremlin indexer**](search-howto-index-cosmosdb.md) | Indexer data source | New data source for indexer-based indexing through the Apache Gremlin APIs in Azure Cosmos DB. | [Sign up](https://aka.ms/azure-cognitive-search/indexer-preview) to enable the feature. [Create or Update Data Source (preview)](/rest/api/searchservice/data-sources/create-or-update?view=rest-searchservice-2024-05-01-preview&preserve-view=true). |
+| [**Native blob soft delete**](search-howto-index-changed-deleted-blobs.md) | Indexer data source | Applies to the Azure Blob Storage indexer. Recognizes blobs that are in a soft-deleted state, and removes the corresponding search document during indexing. | [Create or Update Data Source (preview)](/rest/api/searchservice/data-sources/create-or-update?view=rest-searchservice-2024-05-01-preview&preserve-view=true). |
+| [**Reset Documents**](search-howto-run-reset-indexers.md) | Indexer | Reprocesses individually selected search documents in indexer workloads. | [Reset Documents (preview)](/rest/api/searchservice/indexers/reset-docs?view=rest-searchservice-2024-05-01-preview&preserve-view=true). |
+| [**speller**](speller-how-to-add.md) | Query | Optional spelling correction on query term inputs for simple, full, and semantic queries. | [Search Documents (preview)](/rest/api/searchservice/documents/search-post?view=rest-searchservice-2024-05-01-preview&preserve-view=true). |
+| [**Normalizers**](search-normalizers.md) | Query | Normalizers provide simple text preprocessing: consistent casing, accent removal, and ASCII folding, without invoking the full text analysis chain.| [Search Documents (preview)](/rest/api/searchservice/documents/search-post?view=rest-searchservice-2024-05-01-preview&preserve-view=true). |
+| [**featuresMode parameter**](/rest/api/searchservice/documents/search-post?view=rest-searchservice-2024-05-01-preview&preserve-view=true) | Relevance (scoring) | Relevance score expansion to include details: per field similarity score, per field term frequency, and per field number of unique tokens matched. You can consume these data points in [custom scoring solutions](https://github.com/Azure-Samples/search-ranking-tutorial). | [Search Documents (preview)](/rest/api/searchservice/documents/search-post?view=rest-searchservice-2024-05-01-preview&preserve-view=true).|
+| [**moreLikeThis**](search-more-like-this.md) | Query | Finds documents that are relevant to a specific document. This feature has been in earlier previews. | [Search Documents (preview)](/rest/api/searchservice/documents/search-post?view=rest-searchservice-2024-05-01-preview&preserve-view=true). |
## Control plane preview features
search Search Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-explorer.md
If you're using a free service, remember that you're limited to three indexes, i
## Next steps
-To learn more about query structures and syntax, use a REST client to create query expressions that use more parts of the API. The [Search POST REST API](/rest/api/searchservice/documents/search-post?view=rest-searchservice-2023-10-01-preview&preserve-view=true) is especially helpful for learning and exploration.
+To learn more about query structures and syntax, use a REST client to create query expressions that use more parts of the API. The [Search POST REST API](/rest/api/searchservice/documents/search-post?view=rest-searchservice-2024-05-01-preview&preserve-view=true) is especially helpful for learning and exploration.
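For example, a minimal query that matches all documents and returns a count (service, index, and key are placeholders):

```http
POST https://[service-name].search.windows.net/indexes/[index-name]/docs/search?api-version=2024-05-01-preview
Content-Type: application/json
api-key: [query-or-admin-key]

{
  "search": "*",
  "count": true
}
```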
> [!div class="nextstepaction"] > [Create a basic query in REST](search-get-started-rest.md)
search Search Get Started Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-get-started-arm.md
Last updated 04/24/2024
This article walks you through the process for using an Azure Resource Manager (ARM) template to deploy an Azure AI Search resource in the Azure portal. Only those properties included in the template are used in the deployment. If more customization is required, such as [setting up network security](search-security-overview.md#network-security), you can update the service as a post-deployment task. To customize an existing service with the fewest steps, use [Azure CLI](search-manage-azure-cli.md) or [Azure PowerShell](search-manage-powershell.md). If you're evaluating preview features, use the [Management REST API](search-manage-rest.md).
search Search Get Started Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-get-started-bicep.md
Last updated 02/26/2024
This article walks you through the process for using a Bicep file to deploy an Azure AI Search resource in the Azure portal. Only those properties included in the template are used in the deployment. If more customization is required, such as [setting up network security](search-security-overview.md#network-security), you can update the service as a post-deployment task. To customize an existing service with the fewest steps, use [Azure CLI](search-manage-azure-cli.md) or [Azure PowerShell](search-manage-powershell.md). If you're evaluating preview features, use the [Management REST API](search-manage-rest.md).
search Search How To Alias https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-how-to-alias.md
- ignite-2023 Previously updated : 01/18/2024 Last updated : 06/25/2024
# Create an index alias in Azure AI Search
You can create an alias using the preview REST API, the preview SDKs, or through
### [**REST API**](#tab/rest)
-You can use the [Create or Update Alias (REST preview)](/rest/api/searchservice/preview-api/create-or-update-alias) to create an index alias.
+You can use the [Create or Update Alias (REST preview)](/rest/api/searchservice/aliases/create-or-update?view=rest-searchservice-2024-05-01-preview&preserve-view=true) to create an index alias.
```http
-POST /aliases?api-version=2023-10-01-preview
+POST /aliases?api-version=2024-05-01-preview
{ "name": "my-alias", "indexes": ["hotel-samples-index"]
Once you've created your alias, you're ready to start using it. Aliases can be u
In the query below, instead of sending the request to `hotel-samples-index`, you can send it to `my-alias` and it will be routed accordingly.

```http
-POST /indexes/my-alias/docs/search?api-version=2023-10-01-preview
+POST /indexes/my-alias/docs/search?api-version=2024-05-01-preview
{ "search": "pool spa +airport", "searchMode": any,
If you expect to make updates to a production index, specify an alias rather tha
## Swap indexes
-Now, whenever you need to update your application to point to a new index, all you need to do is update the mapping in your alias. PUT is required for updates as described in [Create or Update Alias (REST preview)](/rest/api/searchservice/preview-api/create-or-update-alias).
+Now, whenever you need to update your application to point to a new index, all you need to do is update the mapping in your alias. PUT is required for updates as described in [Create or Update Alias (REST preview)](/rest/api/searchservice/aliases/create-or-update?view=rest-searchservice-2024-05-01-preview&preserve-view=true).
```http
-PUT /aliases/my-alias?api-version=2023-10-01-preview
+PUT /aliases/my-alias?api-version=2024-05-01-preview
{ "name": "my-alias", "indexes": ["hotel-samples-index2"]
search Search Howto Index Sharepoint Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-index-sharepoint-online.md
We recommend app-based permissions. See [limitations](#limitations-and-considera
+ Application permissions (recommended), where the indexer runs under the [identity of the SharePoint tenant](/sharepoint/dev/solution-guidance/security-apponly-azureacs) with access to all sites and files. The indexer requires a [client secret](../active-directory/develop/v2-oauth2-client-creds-grant-flow.md). The indexer will also require [tenant admin approval](../active-directory/manage-apps/grant-admin-consent.md) before it can index any content.
-+ Delegated permissions, where the indexer runs under the identity of the user or app sending the request. Data access is limited to the sites and files to which the caller has access. To support delegated permissions, the indexer requires a [device code prompt](../active-directory/develop/v2-oauth2-device-code.md) to sign in on behalf of the user. User-delegated permissions enforces token expiration every 75 minutes, per the most recent security libraries used to implement this authentication type. This is not a behavior that can be adjusted. An expired token requires manual indexing using [Run Indexer (preview)](/rest/api/searchservice/indexers/run?view=rest-searchservice-2023-10-01-preview&tabs=HTTP&preserve-view=true). For this reason, you might want app-based permissions instead.
++ Delegated permissions, where the indexer runs under the identity of the user or app sending the request. Data access is limited to the sites and files to which the caller has access. To support delegated permissions, the indexer requires a [device code prompt](../active-directory/develop/v2-oauth2-device-code.md) to sign in on behalf of the user. User-delegated permissions enforce token expiration every 75 minutes, per the most recent security libraries used to implement this authentication type. This behavior can't be adjusted. An expired token requires manual indexing using [Run Indexer (preview)](/rest/api/searchservice/indexers/run?view=rest-searchservice-2024-05-01-preview&tabs=HTTP&preserve-view=true). For this reason, you might want app-based permissions instead.

If your Microsoft Entra organization has [conditional access enabled](../active-directory/conditional-access/overview.md) and your administrator isn't able to grant any device access for delegated permissions, consider app-based permissions instead. For more information, see [Microsoft Entra Conditional Access policies](./search-indexer-troubleshooting.md#azure-active-directory-conditional-access-policies).
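For reference, an on-demand indexer run is a POST to the indexer's run endpoint (a sketch; the service, indexer name, and key are placeholders):

```http
POST https://[service-name].search.windows.net/indexers/[indexer-name]/run?api-version=2024-05-01-preview
api-key: [admin-key]
```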
search Search Import Data Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-import-data-portal.md
The wizard will output the objects in the following table. After the objects are
| [Indexer](/rest/api/searchservice/create-indexer) | A configuration object specifying a data source, target index, an optional skillset, optional schedule, and optional configuration settings for error handling and base-64 encoding. |
| [Data Source](/rest/api/searchservice/create-data-source) | Persists connection information to a [supported data source](search-indexer-overview.md#supported-data-sources) on Azure. A data source object is used exclusively with indexers. |
| [Index](/rest/api/searchservice/create-index) | Physical data structure used for full text search and other queries. |
-| [Skillset](/rest/api/searchservice/create-skillset) | Optional. A complete set of instructions for manipulating, transforming, and shaping content, including analyzing and extracting information from image files. Unless the volume of work fall under the limit of 20 transactions per indexer per day, the skillset must include a reference to an Azure AI multi-service resource that provides enrichment. |
+| [Skillset](/rest/api/searchservice/skillsets/create) | Optional. A complete set of instructions for manipulating, transforming, and shaping content, including analyzing and extracting information from image files. Unless the volume of work falls under the limit of 20 transactions per indexer per day, the skillset must include a reference to an Azure AI multi-service resource that provides enrichment. |
| [Knowledge store](knowledge-store-concept-intro.md) | Optional. Stores output from an [AI enrichment pipeline](cognitive-search-concept-intro.md) in tables and blobs in Azure Storage for independent analysis or downstream processing. |

## Benefits and limitations
search Search Limits Quotas Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-limits-quotas-capacity.md
Maximum limits on storage, workloads, and quantities of indexes and other object
+ **Storage Optimized** runs on dedicated machines with more total storage, storage bandwidth, and memory than **Standard**. This tier targets large, slow-changing indexes. Storage Optimized comes in two levels: L1 and L2.

## Subscription limits

## Service limits

<a name="index-limits"></a>
search Search Security Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-security-rbac.md
The following roles are built in. If these roles are insufficient, [create a cus
| [Search Index Data Contributor](../role-based-access-control/built-in-roles.md#search-index-data-contributor) | Data | Read-write access to content in indexes. This role is for developers or index owners who need to import, refresh, or query the documents collection of an index. This role doesn't support index creation or management. By default, this role is for all indexes on a search service. See [Grant access to a single index](#grant-access-to-a-single-index) to narrow the scope. | | [Search Index Data Reader](../role-based-access-control/built-in-roles.md#search-index-data-reader) | Data | Read-only access for querying search indexes. This role is for apps and users who run queries. This role doesn't support read access to object definitions. For example, you can't read a search index definition or get search service statistics. By default, this role is for all indexes on a search service. See [Grant access to a single index](#grant-access-to-a-single-index) to narrow the scope. |
+Combine these roles to get sufficient permissions for your use case.
++ > [!NOTE] > If you disable Azure role-based access, built-in roles for the control plane (Owner, Contributor, Reader) continue to be available. Disabling role-based access removes just the data-related permissions associated with those roles. If data plane roles are disabled, Search Service Contributor is equivalent to control-plane Contributor.
The following roles are built in. If these roles are insufficient, [create a cus
In this section, assign roles for: + [Service administration](#assign-roles-for-service-administration)+
+ | Role | ID|
+ | | |
+ |`Owner`|8e3af657-a8ff-443c-a75c-2fe8c4bcb635|
+ |`Contributor`|b24988ac-6180-42a0-ab88-20f7382dd24c|
+ |`Reader`|acdd72a7-3385-48ef-bd42-f606fba81ae7|
+
+ + [Development or write-access to a search service](#assign-roles-for-development)+
+ | Task | Role | ID|
+ | | | |
+ | CRUD operations | `Search Service Contributor`|7ca78c08-252a-4471-8644-bb5ff32d4ba0|
+ | Load documents, run indexing jobs | `Search Index Data Contributor`|8ebe5a00-799e-43f5-93ac-243d3dce84a7|
+ | Query an index | `Search Index Data Reader`|1407120a-92aa-4202-b7e9-c0e197c71c8f|
+ + [Read-only access for queries](#assign-roles-for-read-only-queries)
+ | Role | ID|
+ | | |
+ | `Search Index Data Reader` [with PowerShell](search-security-rbac.md?tabs=roles-portal-admin%2Croles-portal%2Croles-portal-query%2Ctest-portal%2Ccustom-role-portal#grant-access-to-a-single-index)|1407120a-92aa-4202-b7e9-c0e197c71c8f|
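As an illustration of how these IDs are used, the following Management REST API sketch assigns `Search Index Data Reader` at service scope; `{roleAssignmentId}` is a new GUID you generate, and `{principalId}` is the object ID of the user, group, or managed identity (all values are placeholders):

```rest
PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Search/searchServices/{searchServiceName}/providers/Microsoft.Authorization/roleAssignments/{roleAssignmentId}?api-version=2022-04-01
Content-Type: application/json
Authorization: Bearer {token}

{
  "properties": {
    "roleDefinitionId": "/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/1407120a-92aa-4202-b7e9-c0e197c71c8f",
    "principalId": "{principalId}"
  }
}
```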
+ ### Assign roles for service administration As a service administrator, you can create and configure a search service, and perform all control plane operations described in the [Management REST API](/rest/api/searchmanagement/) or equivalent client libraries. Depending on the role, you can also perform most data plane [Search REST API](/rest/api/searchservice/) tasks. + #### [**Azure portal**](#tab/roles-portal-admin) 1. Sign in to the [Azure portal](https://portal.azure.com).
New-AzRoleAssignment -SignInName <email> `
Role assignments are global across the search service. To [scope permissions to a single index](#rbac-single-index), use PowerShell or the Azure CLI to create a custom role.
-> [!NOTE]
-> If you make a request, such as a REST call, that includes an API key, and you're also part of a role assignment, the API key takes precedence. For instance, if you have a read-only role assignment but make a request with an admin API key, the permissions granted by the API key override the role assignment. This behavior only applies if both keys and roles are enabled for your search service.
+Another combination of roles that provides full access is Contributor or Owner, plus Search Index Data Reader.
+
+> [!IMPORTANT]
+> If you configure role-based access for a service or index and you also provide an API key on the request, the search service uses the API key to authenticate.
#### [**Azure portal**](#tab/roles-portal)
search Vector Search Index Size https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-index-size.md
Quotas for both storage and vector index size increase or decrease as you add or
### [**REST**](#tab/rest-vector-quota)
-Use the following data plane REST APIs (version 2023-10-01-preview, 2023-11-01, and later) for vector usage statistics:
+Data plane REST APIs (version 2023-10-01-preview, 2023-11-01, and all newer APIs) provide vector usage statistics:
+ [GET Service Statistics](/rest/api/searchservice/get-service-statistics/get-service-statistics) returns quota and usage for the search service all-up. + [GET Index Statistics](/rest/api/searchservice/indexes/get-statistics) returns usage for a given index.
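For example, with `{service}` and `{index}` as placeholders; the service-level response reports vector index usage and quota alongside the standard storage counters:

```rest
GET https://{service}.search.windows.net/servicestats?api-version=2023-11-01
api-key: {admin-api-key}

GET https://{service}.search.windows.net/indexes/{index}/stats?api-version=2023-11-01
api-key: {admin-api-key}
```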
search Vector Search Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-overview.md
Within an index definition, you can specify one or more algorithms, and then for
+ [Create a vector store](vector-search-how-to-create-index.md) to specify an algorithm in the index and on fields.
-+ For exhaustive KNN, use [2023-11-01](/rest/api/searchservice/indexes/create-or-update), [2023-10-01-Preview](/rest/api/searchservice/indexes/create-or-update?view=rest-searchservice-2023-10-01-preview&preserve-view=true), or Azure SDK beta libraries that target either REST API version.
++ For exhaustive KNN, use [2023-11-01](/rest/api/searchservice/indexes/create-or-update), [2023-10-01-Preview](/rest/api/searchservice/indexes/create-or-update?view=rest-searchservice-2023-10-01-preview&preserve-view=true) and all newer preview APIs, or Azure SDK beta libraries that target either REST API version. Algorithm parameters that are used to initialize the index during index creation are immutable and can't be changed after the index is built. However, parameters that affect the query-time characteristics (`efSearch`) can be modified.
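A condensed index-definition sketch showing both algorithm kinds under the 2023-11-01 schema; the names, dimension count, and parameter values are illustrative placeholders:

```rest
PUT https://{service}.search.windows.net/indexes/{index}?api-version=2023-11-01
Content-Type: application/json
api-key: {admin-api-key}

{
  "name": "{index}",
  "fields": [
    { "name": "id", "type": "Edm.String", "key": true },
    { "name": "contentVector", "type": "Collection(Edm.Single)", "searchable": true,
      "dimensions": 1536, "vectorSearchProfile": "my-profile" }
  ],
  "vectorSearch": {
    "profiles": [ { "name": "my-profile", "algorithm": "my-hnsw" } ],
    "algorithms": [
      { "name": "my-hnsw", "kind": "hnsw",
        "hnswParameters": { "m": 4, "efConstruction": 400, "efSearch": 500, "metric": "cosine" } },
      { "name": "my-eknn", "kind": "exhaustiveKnn",
        "exhaustiveKnnParameters": { "metric": "cosine" } }
    ]
  }
}
```

Of these values, only `efSearch` can be changed once the index exists; `m`, `efConstruction`, and `metric` are fixed at creation.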
search Vector Search Vectorizer Azure Open Ai https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-vectorizer-azure-open-ai.md
Last updated 05/28/2024
# Azure OpenAI vectorizer > [!IMPORTANT]
-> This feature is in public preview under [Supplemental Terms of Use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). The [2023-10-01-Preview REST API](/rest/api/searchservice/indexes/create-or-update?view=rest-searchservice-2023-10-01-preview&preserve-view=true) and later preview REST APIs support this feature.
+> This feature is in public preview under [Supplemental Terms of Use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). The [2023-10-01-Preview REST API](/rest/api/searchservice/indexes/create-or-update?view=rest-searchservice-2023-10-01-preview&preserve-view=true) and all newer preview REST APIs support this feature.
The **Azure OpenAI** vectorizer connects to a deployed embedding model on your [Azure OpenAI](/azure/ai-services/openai/overview) resource to generate embeddings at query time. Your data is processed in the [Geo](https://azure.microsoft.com/explore/global-infrastructure/data-residency/) where your model is deployed.
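A hedged sketch of the relevant `vectorSearch` fragment of a preview index definition, assuming a deployed `text-embedding-ada-002` model; the resource URI, deployment name, and key are placeholders:

```rest
"vectorSearch": {
  "profiles": [
    { "name": "my-profile", "algorithm": "my-hnsw", "vectorizer": "my-openai-vectorizer" }
  ],
  "vectorizers": [
    {
      "name": "my-openai-vectorizer",
      "kind": "azureOpenAI",
      "azureOpenAIParameters": {
        "resourceUri": "https://{your-resource}.openai.azure.com",
        "deploymentId": "text-embedding-ada-002",
        "apiKey": "{api-key}"
      }
    }
  ]
}
```

At query time, the vectorizer named in a field's profile converts the text query into an embedding with the same model that produced the indexed vectors.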
search Vector Search Vectorizer Custom Web Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-vectorizer-custom-web-api.md
Last updated 05/28/2024
# Custom Web API vectorizer > [!IMPORTANT]
-> This feature is in public preview under [Supplemental Terms of Use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). The [2023-10-01-Preview REST API](/rest/api/searchservice/indexes/create-or-update?view=rest-searchservice-2023-10-01-preview&preserve-view=true) and later preview REST APIs support this feature.
+> This feature is in public preview under [Supplemental Terms of Use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). The [2023-10-01-Preview REST API](/rest/api/searchservice/indexes/create-or-update?view=rest-searchservice-2023-10-01-preview&preserve-view=true) and all newer preview REST APIs support this feature.
The **custom web API** vectorizer lets you configure your search queries to call a Web API endpoint that generates embeddings at query time. The structure of the JSON payload that your endpoint must implement is described further down in this document. Your data is processed in the [Geo](https://azure.microsoft.com/explore/global-infrastructure/data-residency/) where your model is deployed.
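As a sketch, the vectorizer definition in the index's `vectorSearch.vectorizers` array points at your endpoint; the URI, header, and key shown here are placeholders for whatever your deployment exposes:

```rest
{
  "name": "my-custom-vectorizer",
  "kind": "customWebApi",
  "customWebApiParameters": {
    "uri": "https://{your-app}.azurewebsites.net/api/embed",
    "httpMethod": "POST",
    "httpHeaders": { "x-functions-key": "{function-key}" },
    "timeout": "PT30S"
  }
}
```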
security Azure CA Details https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/azure-CA-details.md
description: Certificate Authority details for Azure services that utilize x509
- Previously updated : 04/19/2024 Last updated : 06/23/2024
Any entity trying to access Microsoft Entra identity services via the TLS/SSL pr
|- |- | | [DigiCert Basic RSA CN CA G2](https://crt.sh/?d=2545289014) | 0x02f7e1f982bad009aff47dc95741b2f6<br>4D1FA5D1FB1AC3917C08E43F65015E6AEA571179 | | [DigiCert Cloud Services CA-1](https://crt.sh/?d=12624881) | 0x019ec1c6bd3f597bb20c3338e551d877<br>81B68D6CD2F221F8F534E677523BB236BBA1DC56 |
+| [DigiCert Cloud Services CA-1](https://crt.sh/?q=B3F6B64A07BB9611F47174407841F564FB991F29) | 0f171a48c6f223809218cd2ed6ddc0e8<br>b3f6b64a07bb9611f47174407841f564fb991f29 |
| [DigiCert SHA2 Secure Server CA](https://crt.sh/?d=3422153451) | 0x02742eaa17ca8e21c717bb1ffcfd0ca0<br>626D44E704D1CEABE3BF0D53397464AC8080142C | | [DigiCert TLS Hybrid ECC SHA384 2020 CA1](https://crt.sh/?d=3422153452) | 0x0a275fe704d6eecb23d5cd5b4b1a4e04<br>51E39A8BDB08878C52D6186588A0FA266A69CF28 | | [DigiCert TLS RSA SHA256 2020 CA1](https://crt.sh/?d=4385364571) | 0x06d8d904d5584346f68a2fa754227ec4<br>1C58A3A8518E8759BF075B76B750D4F2DF264FCD |
+| [DigiCert TLS RSA SHA256 2020 CA1](https://crt.sh/?q=6938FD4D98BAB03FAADB97B34396831E3780AEA1) | 0a3508d55c292b017df8ad65c00ff7e4<br>6938fd4d98bab03faadb97b34396831e3780aea1 |
| [GeoTrust Global TLS RSA4096 SHA256 2022 CA1](https://crt.sh/?d=6670931375) | 0x0f622f6f21c2ff5d521f723a1d47d62d<br>7E6DB7B7584D8CF2003E0931E6CFC41A3A62D3DF |
-| [Microsoft Azure ECC TLS Issuing CA 01](https://www.microsoft.com/pki/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2001.cer) | 0x09dc42a5f574ff3a389ee06d5d4de440<br>92503D0D74A7D3708197B6EE13082D52117A6AB0 |
-| [Microsoft Azure ECC TLS Issuing CA 01](https://crt.sh/?d=2616305805) | 0x330000001aa9564f44321c54b900000000001a<br>CDA57423EC5E7192901CA1BF6169DBE48E8D1268 |
-| [Microsoft Azure ECC TLS Issuing CA 02](https://www.microsoft.com/pki/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2002.cer) | 0x0e8dbe5ea610e6cbb569c736f6d7004b<br>1E981CCDDC69102A45C6693EE84389C3CF2329F1 |
-| [Microsoft Azure ECC TLS Issuing CA 02](https://crt.sh/?d=2616326233) | 0x330000001b498d6736ed5612c200000000001b<br>489FF5765030EB28342477693EB183A4DED4D2A6 |
| [Microsoft Azure ECC TLS Issuing CA 03](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2003%20-%20xsign.crt) | 0x01529ee8368f0b5d72ba433e2d8ea62d<br>56D955C849887874AA1767810366D90ADF6C8536 | | [Microsoft Azure ECC TLS Issuing CA 03](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2003.crt) | 0x330000003322a2579b5e698bcc000000000033<br>91503BE7BF74E2A10AA078B48B71C3477175FEC3 | | [Microsoft Azure ECC TLS Issuing CA 04](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2004%20-%20xsign.crt) | 0x02393d48d702425a7cb41c000b0ed7ca<br>FB73FDC24F06998E070A06B6AFC78FDF2A155B25 | | [Microsoft Azure ECC TLS Issuing CA 04](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2004.crt) | 0x33000000322164aedab61f509d000000000032<br>406E3B38EFF35A727F276FE993590B70F8224AED |
-| [Microsoft Azure ECC TLS Issuing CA 05](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2005.cer) | 0x0ce59c30fd7a83532e2d0146b332f965<br>C6363570AF8303CDF31C1D5AD81E19DBFE172531 |
-| [Microsoft Azure ECC TLS Issuing CA 05](https://crt.sh/?d=2616326161) | 0x330000001cc0d2a3cd78cf2c1000000000001c<br>4C15BC8D7AA5089A84F2AC4750F040D064040CD4 |
-| [Microsoft Azure ECC TLS Issuing CA 06](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2006.cer) | 0x066e79cd7624c63130c77abeb6a8bb94<br>7365ADAEDFEA4909C1BAADBAB68719AD0C381163 |
-| [Microsoft Azure ECC TLS Issuing CA 06](https://crt.sh/?d=2616326228) | 0x330000001d0913c309da3f05a600000000001d<br>DFEB65E575D03D0CC59FD60066C6D39421E65483 |
| [Microsoft Azure ECC TLS Issuing CA 07](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2007%20-%20xsign.crt) | 0x0f1f157582cdcd33734bdc5fcd941a33<br>3BE6CA5856E3B9709056DA51F32CBC8970A83E28 | | [Microsoft Azure ECC TLS Issuing CA 07](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2007.crt) | 0x3300000034c732435db22a0a2b000000000034<br>AB3490B7E37B3A8A1E715036522AB42652C3CFFE | | [Microsoft Azure ECC TLS Issuing CA 08](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2008%20-%20xsign.crt) | 0x0ef2e5d83681520255e92c608fbc2ff4<br>716DF84638AC8E6EEBE64416C8DD38C2A25F6630 |
Any entity trying to access Microsoft Entra identity services via the TLS/SSL pr
| [Microsoft Azure RSA TLS Issuing CA 07](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20RSA%20TLS%20Issuing%20CA%2007.crt) | 0x330000003bf980b0c83783431700000000003b<br>0E5F41B697DAADD808BF55AD080350A2A5DFCA93 | | [Microsoft Azure RSA TLS Issuing CA 08](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20RSA%20TLS%20Issuing%20CA%2008%20-%20xsign.crt) | 0x0efb7e547edf0ff1069aee57696d7ba0<br>31600991ED5FEC63D355A5484A6DCC787EAD89BC | | [Microsoft Azure RSA TLS Issuing CA 08](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20RSA%20TLS%20Issuing%20CA%2008.crt) | 0x330000003a5dc2ffc321c16d9b00000000003a<br>512C8F3FB71EDACF7ADA490402E710B10C73026E |
-| [Microsoft Azure TLS Issuing CA 01](https://www.microsoft.com/pki/certs/Microsoft%20Azure%20TLS%20Issuing%20CA%2001.cer) | 0x0aafa6c5ca63c45141ea3be1f7c75317<br>2F2877C5D778C31E0F29C7E371DF5471BD673173 |
-| [Microsoft Azure TLS Issuing CA 01](https://crt.sh/?d=2616326024) | 0x1dbe9496f3db8b8de700000000001d<br>B9ED88EB05C15C79639493016200FDAB08137AF3 |
-| [Microsoft Azure TLS Issuing CA 02](https://www.microsoft.com/pki/certs/Microsoft%20Azure%20TLS%20Issuing%20CA%2002.cer) | 0x0c6ae97cced599838690a00a9ea53214<br>E7EEA674CA718E3BEFD90858E09F8372AD0AE2AA |
-| [Microsoft Azure TLS Issuing CA 02](https://crt.sh/?d=2616326032) | 0x330000001ec6749f058517b4d000000000001e<br>C5FB956A0E7672E9857B402008E7CCAD031F9B08 |
-| [Microsoft Azure TLS Issuing CA 05](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20TLS%20Issuing%20CA%2005.cer) | 0x0d7bede97d8209967a52631b8bdd18bd<br>6C3AF02E7F269AA73AFD0EFF2A88A4A1F04ED1E5 |
-| [Microsoft Azure TLS Issuing CA 05](https://crt.sh/?d=2616326057) | 0x330000001f9f1fa2043bc28db900000000001f<br>56F1CA470BB94E274B516A330494C792C419CF87 |
-| [Microsoft Azure TLS Issuing CA 06](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20TLS%20Issuing%20CA%2006.cer) | 0x02e79171fb8021e93fe2d983834c50c0<br>30E01761AB97E59A06B41EF20AF6F2DE7EF4F7B0 |
-| [Microsoft Azure TLS Issuing CA 06](https://crt.sh/?d=2616330106) | 0x3300000020a2f1491a37fbd31f000000000020<br>8F1FD57F27C828D7BE29743B4D02CD7E6E5F43E6 |
| [Microsoft ECC TLS Issuing AOC CA 01](https://crt.sh/?d=4789656467) | 0x33000000282bfd23e7d1add707000000000028<br>30ab5c33eb4b77d4cbff00a11ee0a7507d9dd316 | | [Microsoft ECC TLS Issuing AOC CA 02](https://crt.sh/?d=4814787086) | 0x33000000290f8a6222ef6a5695000000000029<br>3709cd92105d074349d00ea8327f7d5303d729c8 | | [Microsoft ECC TLS Issuing EOC CA 01](https://crt.sh/?d=4814787088) | 0x330000002a2d006485fdacbfeb00000000002a<br>5fa13b879b2ad1b12e69d476e6cad90d01013b46 |
Any entity trying to access Microsoft Entra identity services via the TLS/SSL pr
| └ [DigiCert TLS RSA SHA256 2020 CA1](https://crt.sh/?d=4385364571) | 0x06d8d904d5584346f68a2fa754227ec4<br>1C58A3A8518E8759BF075B76B750D4F2DF264FCD | | └ [GeoTrust Global TLS RSA4096 SHA256 2022 CA1](https://crt.sh/?d=6670931375) | 0x0f622f6f21c2ff5d521f723a1d47d62d<br>7E6DB7B7584D8CF2003E0931E6CFC41A3A62D3DF | | [**DigiCert Global Root G2**](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt) | 0x033af1e6a711a9a0bb2864b11d09fae5<br>DF3C24F9BFD666761B268073FE06D1CC8D4F82A4 |
-| └ [Microsoft Azure TLS Issuing CA 01](https://www.microsoft.com/pki/certs/Microsoft%20Azure%20TLS%20Issuing%20CA%2001.cer) | 0x0aafa6c5ca63c45141ea3be1f7c75317<br>2F2877C5D778C31E0F29C7E371DF5471BD673173 |
-| └ [Microsoft Azure TLS Issuing CA 02](https://www.microsoft.com/pki/certs/Microsoft%20Azure%20TLS%20Issuing%20CA%2002.cer) | 0x0c6ae97cced599838690a00a9ea53214<br>E7EEA674CA718E3BEFD90858E09F8372AD0AE2AA |
| └ [Microsoft Azure RSA TLS Issuing CA 03](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20RSA%20TLS%20Issuing%20CA%2003%20-%20xsign.crt) | 0x05196526449a5e3d1a38748f5dcfebcc<br>F9388EA2C9B7D632B66A2B0B406DF1D37D3901F6 | | └ [Microsoft Azure RSA TLS Issuing CA 04](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20RSA%20TLS%20Issuing%20CA%2004%20-%20xsign.crt) | 0x09f96ec295555f24749eaf1e5dced49d<br>BE68D0ADAA2345B48E507320B695D386080E5B25 | | └ [Microsoft Azure RSA TLS Issuing CA 07](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20RSA%20TLS%20Issuing%20CA%2007%20-%20xsign.crt) | 0x0a43a9509b01352f899579ec7208ba50<br>3382517058A0C20228D598EE7501B61256A76442 | | └ [Microsoft Azure RSA TLS Issuing CA 08](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20RSA%20TLS%20Issuing%20CA%2008%20-%20xsign.crt) | 0x0efb7e547edf0ff1069aee57696d7ba0<br>31600991ED5FEC63D355A5484A6DCC787EAD89BC |
-| └ [Microsoft Azure TLS Issuing CA 05](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20TLS%20Issuing%20CA%2005.cer) | 0x0d7bede97d8209967a52631b8bdd18bd<br>6C3AF02E7F269AA73AFD0EFF2A88A4A1F04ED1E5 |
-| └ [Microsoft Azure TLS Issuing CA 06](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20TLS%20Issuing%20CA%2006.cer) | 0x02e79171fb8021e93fe2d983834c50c0<br>30E01761AB97E59A06B41EF20AF6F2DE7EF4F7B0 |
| [**DigiCert Global Root G3**](https://cacerts.digicert.com/DigiCertGlobalRootG3.crt) | 0x055556bcf25ea43535c3a40fd5ab4572<br>7E04DE896A3E666D00E687D33FFAD93BE83D349E |
-| └ [Microsoft Azure ECC TLS Issuing CA 01](https://www.microsoft.com/pki/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2001.cer) | 0x09dc42a5f574ff3a389ee06d5d4de440<br>92503D0D74A7D3708197B6EE13082D52117A6AB0 |
-| └ [Microsoft Azure ECC TLS Issuing CA 02](https://www.microsoft.com/pki/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2002.cer) | 0x0e8dbe5ea610e6cbb569c736f6d7004b<br>1E981CCDDC69102A45C6693EE84389C3CF2329F1 |
| └ [Microsoft Azure ECC TLS Issuing CA 03](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2003%20-%20xsign.crt) | 0x01529ee8368f0b5d72ba433e2d8ea62d<br>56D955C849887874AA1767810366D90ADF6C8536 | | └ [Microsoft Azure ECC TLS Issuing CA 04](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2004%20-%20xsign.crt) | 0x02393d48d702425a7cb41c000b0ed7ca<br>FB73FDC24F06998E070A06B6AFC78FDF2A155B25 |
-| └ [Microsoft Azure ECC TLS Issuing CA 05](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2005.cer) | 0x0ce59c30fd7a83532e2d0146b332f965<br>C6363570AF8303CDF31C1D5AD81E19DBFE172531 |
-| └ [Microsoft Azure ECC TLS Issuing CA 06](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2006.cer) | 0x066e79cd7624c63130c77abeb6a8bb94<br>7365ADAEDFEA4909C1BAADBAB68719AD0C381163 |
| └ [Microsoft Azure ECC TLS Issuing CA 07](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2007%20-%20xsign.crt) | 0x0f1f157582cdcd33734bdc5fcd941a33<br>3BE6CA5856E3B9709056DA51F32CBC8970A83E28 | | └ [Microsoft Azure ECC TLS Issuing CA 08](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2008%20-%20xsign.crt) | 0x0ef2e5d83681520255e92c608fbc2ff4<br>716DF84638AC8E6EEBE64416C8DD38C2A25F6630 | | [**Microsoft ECC Root Certificate Authority 2017**](https://www.microsoft.com/pkiops/certs/Microsoft%20ECC%20Root%20Certificate%20Authority%202017.crt) | 0x66f23daf87de8bb14aea0c573101c2ec<br>999A64C37FF47D9FAB95F14769891460EEC4C3C5 |
-| └ [Microsoft Azure ECC TLS Issuing CA 01](https://crt.sh/?d=2616305805) | 0x330000001aa9564f44321c54b900000000001a<br>CDA57423EC5E7192901CA1BF6169DBE48E8D1268 |
-| └ [Microsoft Azure ECC TLS Issuing CA 02](https://crt.sh/?d=2616326233) | 0x330000001b498d6736ed5612c200000000001b<br>489FF5765030EB28342477693EB183A4DED4D2A6 |
| └ [Microsoft Azure ECC TLS Issuing CA 03](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2003.crt) | 0x330000003322a2579b5e698bcc000000000033<br>91503BE7BF74E2A10AA078B48B71C3477175FEC3 | | └ [Microsoft Azure ECC TLS Issuing CA 04](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2004.crt) | 0x33000000322164aedab61f509d000000000032<br>406E3B38EFF35A727F276FE993590B70F8224AED |
-| └ [Microsoft Azure ECC TLS Issuing CA 05](https://crt.sh/?d=2616326161) | 0x330000001cc0d2a3cd78cf2c1000000000001c<br>4C15BC8D7AA5089A84F2AC4750F040D064040CD4 |
-| └ [Microsoft Azure ECC TLS Issuing CA 06](https://crt.sh/?d=2616326228) | 0x330000001d0913c309da3f05a600000000001d<br>DFEB65E575D03D0CC59FD60066C6D39421E65483 |
| └ [Microsoft Azure ECC TLS Issuing CA 07](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2007.crt) | 0x3300000034c732435db22a0a2b000000000034<br>AB3490B7E37B3A8A1E715036522AB42652C3CFFE | | └ [Microsoft Azure ECC TLS Issuing CA 08](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2008.crt) | 0x3300000031526979844798bbb8000000000031<br>CF33D5A1C2F0355B207FCE940026E6C1580067FD | | └ [Microsoft ECC TLS Issuing AOC CA 01](https://crt.sh/?d=4789656467) |33000000282bfd23e7d1add707000000000028<br>30ab5c33eb4b77d4cbff00a11ee0a7507d9dd316 |
Any entity trying to access Microsoft Entra identity services via the TLS/SSL pr
| └ [Microsoft Azure RSA TLS Issuing CA 04](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20RSA%20TLS%20Issuing%20CA%2004.crt) | 0x330000003cd7cb44ee579961d000000000003c<br>7304022CA8A9FF7E3E0C1242E0110E643822C45E | | └ [Microsoft Azure RSA TLS Issuing CA 07](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20RSA%20TLS%20Issuing%20CA%2007.crt) | 0x330000003bf980b0c83783431700000000003b<br>0E5F41B697DAADD808BF55AD080350A2A5DFCA93 | | └ [Microsoft Azure RSA TLS Issuing CA 08](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20RSA%20TLS%20Issuing%20CA%2008.crt) | 0x330000003a5dc2ffc321c16d9b00000000003a<br>512C8F3FB71EDACF7ADA490402E710B10C73026E |
-| └ [Microsoft Azure TLS Issuing CA 01](https://crt.sh/?d=2616326024) | 0x1dbe9496f3db8b8de700000000001d<br>B9ED88EB05C15C79639493016200FDAB08137AF3 |
-| └ [Microsoft Azure TLS Issuing CA 02](https://crt.sh/?d=2616326032) | 0x330000001ec6749f058517b4d000000000001e<br>C5FB956A0E7672E9857B402008E7CCAD031F9B08 |
-| └ [Microsoft Azure TLS Issuing CA 05](https://crt.sh/?d=2616326057) | 0x330000001f9f1fa2043bc28db900000000001f<br>56F1CA470BB94E274B516A330494C792C419CF87 |
-| └ [Microsoft Azure TLS Issuing CA 06](https://crt.sh/?d=2616330106) | 0x3300000020a2f1491a37fbd31f000000000020<br>8F1FD57F27C828D7BE29743B4D02CD7E6E5F43E6 |
| └ [Microsoft RSA TLS Issuing AOC CA 01](https://crt.sh/?d=4789678141) |330000002ffaf06f6697e2469c00000000002f<br>4697fdbed95739b457b347056f8f16a975baf8ee | | └ [Microsoft RSA TLS Issuing AOC CA 02](https://crt.sh/?d=4814787092) |3300000030c756cc88f5c1e7eb000000000030<br>90ed2e9cb40d0cb49a20651033086b1ea2f76e0e | | └ [Microsoft RSA TLS Issuing EOC CA 01](https://crt.sh/?d=4814787098) |33000000310c4914b18c8f339a000000000031<br>a04d3750debfccf1259d553dbec33162c6b42737 |
Microsoft updated Azure services to use TLS certificates from a different set of
### Article change log
+- June 27, 2024: Removed the following CAs, which were superseded by both versions of Microsoft Azure ECC TLS Issuing CAs 03, 04, 07, 08.
+
+ | Certificate Authority | Serial Number<br>Thumbprint |
+ |- |- |
+ | [Microsoft Azure ECC TLS Issuing CA 01](https://www.microsoft.com/pki/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2001.cer)|0x09dc42a5f574ff3a389ee06d5d4de440<br>92503D0D74A7D3708197B6EE13082D52117A6AB0|
+ |[Microsoft Azure ECC TLS Issuing CA 01](https://crt.sh/?d=2616305805)|0x330000001aa9564f44321c54b900000000001a<br>CDA57423EC5E7192901CA1BF6169DBE48E8D1268|
+ |[Microsoft Azure ECC TLS Issuing CA 02](https://www.microsoft.com/pki/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2002.cer)|0x0e8dbe5ea610e6cbb569c736f6d7004b<br>1E981CCDDC69102A45C6693EE84389C3CF2329F1|
+ |[Microsoft Azure ECC TLS Issuing CA 02](https://crt.sh/?d=2616326233)|0x330000001b498d6736ed5612c200000000001b<br>489FF5765030EB28342477693EB183A4DED4D2A6|
+ |[Microsoft Azure ECC TLS Issuing CA 05](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2005.cer)|0x0ce59c30fd7a83532e2d0146b332f965<br>C6363570AF8303CDF31C1D5AD81E19DBFE172531|
+ |[Microsoft Azure ECC TLS Issuing CA 05](https://crt.sh/?d=2616326161)| 0x330000001cc0d2a3cd78cf2c1000000000001c<br>4C15BC8D7AA5089A84F2AC4750F040D064040CD4|
+ |[Microsoft Azure ECC TLS Issuing CA 06](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2006.cer)|0x066e79cd7624c63130c77abeb6a8bb94<br>7365ADAEDFEA4909C1BAADBAB68719AD0C381163|
+ |[Microsoft Azure ECC TLS Issuing CA 06](https://crt.sh/?d=2616326228)|0x330000001d0913c309da3f05a600000000001d<br>DFEB65E575D03D0CC59FD60066C6D39421E65483|
+ |[Microsoft Azure TLS Issuing CA 01](https://www.microsoft.com/pki/certs/Microsoft%20Azure%20TLS%20Issuing%20CA%2001.cer)| 0x0aafa6c5ca63c45141ea3be1f7c75317<br>2F2877C5D778C31E0F29C7E371DF5471BD673173|
+ |[Microsoft Azure TLS Issuing CA 01](https://crt.sh/?d=2616326024)| 0x1dbe9496f3db8b8de700000000001d<br>B9ED88EB05C15C79639493016200FDAB08137AF3|
+ |[Microsoft Azure TLS Issuing CA 02](https://www.microsoft.com/pki/certs/Microsoft%20Azure%20TLS%20Issuing%20CA%2002.cer)| 0x0c6ae97cced599838690a00a9ea53214<br>E7EEA674CA718E3BEFD90858E09F8372AD0AE2AA|
+ |[Microsoft Azure TLS Issuing CA 02](https://crt.sh/?d=2616326032)| 0x330000001ec6749f058517b4d000000000001e<br>C5FB956A0E7672E9857B402008E7CCAD031F9B08|
+ |[Microsoft Azure TLS Issuing CA 05](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20TLS%20Issuing%20CA%2005.cer)| 0x0d7bede97d8209967a52631b8bdd18bd<br>6C3AF02E7F269AA73AFD0EFF2A88A4A1F04ED1E5|
+ |[Microsoft Azure TLS Issuing CA 05](https://crt.sh/?d=2616326057)|0x330000001f9f1fa2043bc28db900000000001f<br>56F1CA470BB94E274B516A330494C792C419CF87|
+ |[Microsoft Azure TLS Issuing CA 06](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20TLS%20Issuing%20CA%2006.cer)| 0x02e79171fb8021e93fe2d983834c50c0<br>30E01761AB97E59A06B41EF20AF6F2DE7EF4F7B0|
+ |[Microsoft Azure TLS Issuing CA 06](https://crt.sh/?d=2616330106)|0x3300000020a2f1491a37fbd31f000000000020<br>8F1FD57F27C828D7BE29743B4D02CD7E6E5F43E6|
- July 17, 2023: Added 16 new subordinate Certificate Authorities - February 7, 2023: Added eight new subordinate Certificate Authorities
sentinel Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/automation/automation.md
After onboarding your Microsoft Sentinel workspace to the unified security opera
| **Microsoft incident creation rules** | Microsoft incident creation rules aren't supported in the unified security operations platform. <br><br>For more information, see [Microsoft Defender XDR incidents and Microsoft incident creation rules](../microsoft-365-defender-sentinel-integration.md#microsoft-defender-xdr-incidents-and-microsoft-incident-creation-rules). | | **Running automation rules from the Defender portal** | It might take up to 10 minutes from the time that an alert is triggered and an incident is created or updated in the Defender portal to when an automation rule is run. This time lag is because the incident is created in the Defender portal and then forwarded to Microsoft Sentinel for the automation rule. | | **Active playbooks tab** | After onboarding to the unified security operations platform, by default the **Active playbooks** tab shows a predefined filter with the onboarded workspace's subscription. In the Azure portal, add data for other subscriptions using the subscription filter. <br><br>For more information, see [Create and customize Microsoft Sentinel playbooks from content templates](use-playbook-templates.md). |
-| **Running playbooks manually on demand** | The following procedures aren't currently supported in the unified security operations platform: <br><li>[Run a playbook manually on an alert](run-playbooks.md#run-a-playbook-manually-on-an-alert)<br><li>[Run a playbook manually on an entity (Preview)](run-playbooks.md#run-a-playbook-manually-on-an-entity-preview) |
+| **Running playbooks manually on demand** | The following procedures aren't currently supported in the unified security operations platform: <br><li>[Run a playbook manually on an alert](run-playbooks.md#run-a-playbook-manually-on-an-alert)<br><li>[Run a playbook manually on an entity](run-playbooks.md#run-a-playbook-manually-on-an-entity) |
| **Running playbooks on incidents requires Microsoft Sentinel sync** | If you try to run a playbook on an incident from the unified security operations platform and see the message *"Can't access data related to this action. Refresh the screen in a few minutes."*, the incident isn't yet synchronized to Microsoft Sentinel. <br><br>Refresh the incident page after the incident is synchronized to run the playbook successfully. |
After onboarding your Microsoft Sentinel workspace to the unified security opera
- [Automate threat response in Microsoft Sentinel with automation rules](../automate-incident-handling-with-automation-rules.md) - [Automate threat response with playbooks in Microsoft Sentinel](automate-responses-with-playbooks.md)-- [Create and use Microsoft Sentinel automation rules to manage response](../create-manage-use-automation-rules.md)
+- [Create and use Microsoft Sentinel automation rules to manage response](../create-manage-use-automation-rules.md)
sentinel Run Playbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/automation/run-playbooks.md
Beginning **June 2023**, you can no longer add playbooks to analytics rules in t
## Run a playbook manually, on demand
-You can also manually run a playbook on demand, whether in response to alerts, incidents (in preview), or entities (also in preview). This can be useful in situations where you want more human input into and control over orchestration and response processes.
+You can also manually run a playbook on demand, whether in response to alerts, incidents, or entities. This can be useful in situations where you want more human input into and control over orchestration and response processes.
### Run a playbook manually on an alert
In the Azure portal, select one of the following tabs as needed for your environ
You can see the run history for playbooks on an alert by selecting the **Runs** tab on the **Alert playbooks** pane. It might take a few seconds for any just-completed run to appear in the list. Selecting a specific run opens the full run log in Logic Apps.
-### Run a playbook manually on an incident (preview)
+### Run a playbook manually on an incident
This procedure differs, depending on if you're working in Microsoft Sentinel or in the unified security operations platform. Select the relevant tab for your environment:
This procedure differs, depending on if you're working in Microsoft Sentinel or
1. In the **Incidents** page, select an incident.
-1. From the incident details pane that appears on the side, select **Actions > Run playbook (Preview)**.
+1. From the incident details pane that appears on the side, select **Actions > Run playbook**.
Selecting the three dots at the end of the incident's line on the grid or right-clicking the incident displays the same list as the **Action** button.
This procedure differs, depending on if you're working in Microsoft Sentinel or
1. In the **Incidents** page, select an incident.
-1. From the incident details pane that appears on the side, select **Run Playbook (Preview)**.
+1. From the incident details pane that appears on the side, select **Run Playbook**.
1. The **Run playbook on incident** panel opens on the side, with all related playbooks for the selected incident. In the **Action** column, select **Run playbook** for the playbook you want to run immediately.
The **Actions** column might also show one of the following statuses:
View the run history for playbooks on an incident by selecting the **Runs** tab on the **Run playbook on incident** panel. It might take a few seconds for any just-completed run to appear in the list. Selecting a specific run opens the full run log in Logic Apps.
-### Run a playbook manually on an entity (preview)
+### Run a playbook manually on an entity
This procedure isn't supported in the unified security operations platform.
Select an entity in one of the following ways, depending on your originating con
In the **Entities** widget in the **Overview** tab, locate your entity, and do one of the following: -- Don't select the entity. Instead, select the three dots to the right of the entity, and then select **Run playbook (Preview)**. Locate the playbook you want to run, and select **Run** in that playbook's row.
+- Don't select the entity. Instead, select the three dots to the right of the entity, and then select **Run playbook**. Locate the playbook you want to run, and select **Run** in that playbook's row.
- Select the entity to open the **Entities tab** of the incident details page. Locate your entity on the list, and select the three dots to the right. Locate the playbook you want to run, and select **Run** in that playbook's row. -- Select an entity and drill down to the entity details page. Then, select the **Run playbook (Preview)** button in the left-hand panel. Locate the playbook you want to run, and select **Run** in that playbook's row.
+- Select an entity and drill down to the entity details page. Then, select the **Run playbook** button in the left-hand panel. Locate the playbook you want to run, and select **Run** in that playbook's row.
#### [Incident details page (legacy)](#tab/incident-details-legacy)
In the **Entities** widget in the **Overview** tab, locate your entity, and do o
1. Do one of the following:
- - Select the **Run playbook (Preview)** link at the end of the entity line in the list.
- - Select the entity to drill down to the entity details page and select the **Run playbook (Preview)** button in the left-hand panel.
+ - Select the **Run playbook** link at the end of the entity line in the list.
+ - Select the entity to drill down to the entity details page and select the **Run playbook** button in the left-hand panel.
1. Locate the playbook you want to run, and select **Run** in that playbook's row.
In the **Entities** widget in the **Overview** tab, locate your entity, and do o
**If you're in the Investigation graph:**
-1. Select an entity in the graph and then select the **Run playbook (Preview)** button in the entity side panel.
+1. Select an entity in the graph and then select the **Run playbook** button in the entity side panel.
- For some entity types, you might have to select the **Entity actions** button and from the resulting menu select **Run playbook (Preview)**.
+ For some entity types, you might have to select the **Entity actions** button and from the resulting menu select **Run playbook**.
1. Locate the playbook you want to run, and select **Run** in that playbook's row.
In the **Entities** widget in the **Overview** tab, locate your entity, and do o
**If you're proactively hunting for threats:** 1. From the **Entity behavior** screen, select an entity from the lists on the page, or search for and select another entity.
-1. In the [entity page](../entity-pages.md), select the **Run playbook (Preview)** button in the left-hand panel.
+1. In the [entity page](../entity-pages.md), select the **Run playbook** button in the left-hand panel.
1. Locate the playbook you want to run, and select **Run** in that playbook's row.
On the **Run playbook on *\<entity type>** pane, select the **Runs** tab to see
For more information, see: - [Create and manage Microsoft Sentinel playbooks](create-playbooks.md)-- [Automate threat response in Microsoft Sentinel with automation rules](../automate-incident-handling-with-automation-rules.md)
+- [Automate threat response in Microsoft Sentinel with automation rules](../automate-incident-handling-with-automation-rules.md)
sentinel Create Codeless Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/create-codeless-connector.md
description: Learn how to create a codeless connector in Microsoft Sentinel usin
Previously updated : 03/06/2024 Last updated : 06/26/2024
-# Create a codeless connector for Microsoft Sentinel (Public preview)
+# Create a codeless connector for Microsoft Sentinel
The Codeless Connector Platform (CCP) provides partners, advanced users, and developers the ability to create custom connectors for ingesting data to Microsoft Sentinel. Connectors created using the CCP are fully SaaS, with no requirements for service installations. They also include [health monitoring](monitor-data-connector-health.md) and full support from Microsoft Sentinel.
-> [!IMPORTANT]
-> The Codeless Connector Platform (CCP) is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
->
- **Use the following steps to create your CCP connector and connect your data source to Microsoft Sentinel** > [!div class="checklist"]
sentinel Customize Alert Details https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/customize-alert-details.md
Follow the procedure detailed below to use the alert details feature. These step
1. To override more default properties, select **+ Add new** and repeat the previous step. The following properties can be overridden:
- |Name |Description |
- |||
- |**AlertName** | String |
- |**Description** | String |
- |**AlertSeverity** | One of the following values: <br>- **Informational**<br>- **Low**<br>- **Medium**<br>- **High** |
- |**Tactics** | One of the following values: <br>- **Reconnaissance**<br>- **ResourceDevelopment**<br>- **InitialAccess**<br>- **Execution**<br> - **Persistence**<br>- **PrivilegeEscalation**<br>- **DefenseEvasion**<br>- **CredentialAccess** <br>- **Discovery**<br> - **LateralMovement**<br>- **Collection**<br>- **Exfiltration**<br>- **CommandAndControl**<br>- **Impact**<br> - **PreAttack**<br>- **ImpairProcessControl**<br>- **InhibitResponseFunction** |
- |**Techniques** (Preview) | A string that matches the following regular expression: `^T(?<Digits>\d{4})$`. <br>For example: **T1234** |
- |**AlertLink** (Preview) | String |
- |**ConfidenceLevel** (Preview) | One of the following values: <br>- **Low**<br>- **High**<br>- **Unknown** |
- |**ConfidenceScore** (Preview) | Integer, between **0**-**1** (inclusive) |
- |**ExtendedLinks** (Preview) | String |
- |**ProductComponentName** (Preview) | String |
- |**ProductName** (Preview) | String |
- |**ProviderName** (Preview) | String |
- |**RemediationSteps** (Preview) | String |
+ | Name | Value |
+ | - | -- |
+ | **AlertName** | String |
+ | **Description** | String |
+ | **AlertSeverity** | One of the following values: <br>- **Informational**<br>- **Low**<br>- **Medium**<br>- **High** |
+ | **Tactics** | One of the following values: <br>- **Reconnaissance**<br>- **ResourceDevelopment**<br>- **InitialAccess**<br>- **Execution**<br>- **Persistence**<br>- **PrivilegeEscalation**<br>- **DefenseEvasion**<br>- **CredentialAccess**<br>- **Discovery**<br>- **LateralMovement**<br>- **Collection**<br>- **Exfiltration**<br>- **CommandAndControl**<br>- **Impact**<br>- **PreAttack**<br>- **ImpairProcessControl**<br>- **InhibitResponseFunction** |
+ | **Techniques** (Preview) | A string that matches the following regular expression: `^T(?<Digits>\d{4})$`. <br>For example: **T1234** |
+ | **AlertLink** (Preview) | String |
+ | **ConfidenceLevel** (Preview) | One of the following values: <br>- **Low**<br>- **High**<br>- **Unknown** |
+ | **ConfidenceScore** (Preview) | Integer, between **0**-**1** (inclusive) |
+ | **ExtendedLinks** (Preview) | String |
+ | **ProductComponentName** (Preview) | String |
+ | **ProductName** (Preview) | String |
+ | **ProviderName** (Preview) | String |
+ | **RemediationSteps** (Preview) | String |
If you change your mind, or if you made a mistake, you can remove an alert detail by clicking the trash can icon next to the **Alert property/Value** pair, or delete the free text from the **Alert Name/Description Format** fields.
Follow the procedure detailed below to use the alert details feature. These step
> [!NOTE] > > **Service limits**
- > - The combined size limit for all alert details and [custom details](surface-custom-details-in-alerts.md), collectively, is **64 KB**.
+ > - You can override a field with **up to 50 values**. Values past the 50th are dropped.
+ > - The size limit for the AlertName field, and any other non-collection properties, is **256 bytes**.
+ > - The size limit for the Description field, and any other collection properties, is **5 KB**.
+ > - Values exceeding the size limits are dropped.
## Next steps
sentinel Microsoft 365 Defender Sentinel Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/microsoft-365-defender-sentinel-integration.md
Install the **Microsoft Defender XDR** solution for Microsoft Sentinel from the
To onboard Microsoft Sentinel to the unified security operations platform in the Defender portal, see [Connect Microsoft Sentinel to Microsoft Defender XDR](/defender-xdr/microsoft-sentinel-onboard).
-After you configure the Defender XDR data connector, Defender XDR incidents appear in the Microsoft Sentinel incidents queue, with **Microsoft Defender XDR** (or one of the component services' names) in the **Alert product name** field, shortly after they're generated in Defender XDR.
-
+After you enable alert and incident collection in the Defender XDR data connector, Defender XDR incidents appear in the Microsoft Sentinel incidents queue shortly after they're generated in Defender XDR. In these incidents, the **Alert product name** field contains **Microsoft Defender XDR** or one of the component Defender services' names.
- It can take up to 10 minutes from the time an incident is generated in Defender XDR to the time it appears in Microsoft Sentinel. - Alerts and incidents from Defender XDR (those items that populate the *SecurityAlert* and *SecurityIncident* tables) are ingested into and synchronized with Microsoft Sentinel at no charge. For all other data types from individual Defender components (such as the *Advanced hunting* tables *DeviceInfo*, *DeviceFileEvents*, *EmailEvents*, and so on), ingestion is charged.
After you configure the Defender XDR data connector, Defender XDR incidents appe
The exception to this process is Microsoft Defender for Cloud. Although its integration with Defender XDR means that you receive Defender for Cloud *incidents* through Defender XDR, you need to also have a Microsoft Defender for Cloud connector enabled in order to receive Defender for Cloud *alerts*. For the available options and more information, see the following articles: - [Microsoft Defender for Cloud in the Microsoft Defender portal](/microsoft-365/security/defender/microsoft-365-security-center-defender-cloud)
- - [Ingest Microsoft Defender for Cloud incidents with Microsoft Defender XDR integration](ingest-defender-for-cloud-incidents.md)
+ - [Ingest Microsoft Defender for Cloud incidents with Microsoft Defender XDR integration](ingest-defender-for-cloud-incidents.md)
- Similarly, to avoid creating *duplicate incidents for the same alerts*, the **Microsoft incident creation rules** setting is turned off for Defender XDR-integrated products when connecting Defender XDR. This is because Defender XDR has its own incident creation rules. This change has the following potential impacts:
In Defender XDR, all alerts from one incident can be transferred to another, res
## Advanced hunting event collection
-The Defender XDR connector also lets you stream **advanced hunting** events - a type of raw event data - from Defender XDR and its component services into Microsoft Sentinel. Collect [advanced hunting](/microsoft-365/security/defender/advanced-hunting-overview) events from all Defender XDR components, and stream them straight into purpose-built tables in your Microsoft Sentinel workspace. These tables are built on the same schema that is used in the Defender portal. This gives you complete access to the full set of advanced hunting events, and allows you to do the following tasks:
+The Defender XDR connector also lets you stream **advanced hunting** events&mdash;a type of raw event data&mdash;from Defender XDR and its component services into Microsoft Sentinel. Collect [advanced hunting](/microsoft-365/security/defender/advanced-hunting-overview) events from all Defender XDR components, and stream them straight into purpose-built tables in your Microsoft Sentinel workspace. These tables are built on the same schema that is used in the Defender portal. This gives you complete access to the full set of advanced hunting events, and allows you to do the following tasks:
- Easily copy your existing Microsoft Defender for Endpoint/Office 365/Identity/Cloud Apps advanced hunting queries into Microsoft Sentinel.
sentinel Deploy Sap Btp Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/deploy-sap-btp-solution.md
To set up the BTP account and the solution:
Here are examples of these field values: - **url**: `https://auditlog-management.cfapps.us10.hana.ondemand.com`
- - **uaa.clientid**: `sb-ac79fee5-8ad0-4f88-be71-d3f9c566e73a!b136532|auditlog-management!b1237`
- - **uaa.clientsecret**: `682323d2-42a0-45db-a939-74639efde986$gR3x3ohHTB8iyYSKHW0SNIWG4G0tQkkMdBwO7lKhwcQ=`
- - **uaa.url**: `https://915a0312trial.authentication.us10.hana.ondemand.com`
+ - **uaa.clientid**: `00001111-aaaa-2222-bbbb-3333cccc4444|auditlog-management!b1237`
+ - **uaa.clientsecret**: `aaaaaaaa-0b0b-1c1c-2d2d-333333333333`
+ - **uaa.url**: `https://trial.authentication.us10.hana.ondemand.com`
1. Sign in to the [Azure portal](https://portal.azure.com). 1. Go to the Microsoft Sentinel service.
sentinel Sentinel Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sentinel-service-limits.md
The following limit applies to analytics rules in Microsoft Sentinel.
| [Entity mappings](map-data-fields-to-entities.md) | 10 mappings per rule | None | | [Entities](map-data-fields-to-entities.md) identified per alert<br>(Divided equally among the mapped entities) | 500 entities per alert | None | | [Entities](map-data-fields-to-entities.md) cumulative size limit | 64 KB | None |
-| [Custom details](surface-custom-details-in-alerts.md) | 20 details per rule | None |
-| [Custom details](surface-custom-details-in-alerts.md) and [alert details](customize-alert-details.md)<br>combined cumulative size limit | 64 KB | None |
+| [Custom details](surface-custom-details-in-alerts.md) | 20 details per rule<br>50 values per detail<br>2 KB cumulative size | None |
+| [Alert details](customize-alert-details.md) | 50 values per overridden field<br>5 KB per field for `Description` and collections<br>256 bytes per field for `AlertName` and non-collections | None |
| Alerts per rule<br>Applicable when *Event grouping* is set to *Trigger an alert for each event* | 150 alerts | None | | Alerts per rule for NRT rules | 30 alerts | None |
sentinel Soc Optimization Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/soc-optimization/soc-optimization-access.md
Title: Optimize security operations
-description: Use SOC optimization recommendations to optimize your security operations center (SOC) team activities.
-
-ms.pagetype: security
+description: Use Microsoft Sentinel SOC optimization recommendations to optimize your security operations center (SOC) team activities.
- - m365-security
- - tier1
- usx-security Last updated 06/09/2024
From here, either select the options menu or select **View full details** to tak
- **Provide further feedback** to the Microsoft team. When sharing your feedback, be careful not to share any confidential data. For more information, see [Microsoft Privacy Statement](https://privacy.microsoft.com/privacystatement).
-## Use optimizations via API
-
-The `Recommendations` operation group provides access to SOC optimizations via the Azure REST API. For example, use the API to get details about a specific recommendation, or all current recommendations across your workspaces, or to reevaluate a recommendation if you've made changes.
-
-SOC optimization API documentation is available only in the Swagger specification and not in the REST API reference. For more information, see [API versions of Microsoft Sentinel REST APIs](/rest/api/securityinsights/api-versions).
- ## SOC optimization usage flow This section provides a sample flow for using SOC optimizations, from either the Defender or Azure portal:
This section provides a sample flow for using SOC optimizations, from either the
## Related content - [SOC optimization reference of recommendations](soc-optimization-reference.md)
+- [Use SOC optimizations programmatically](soc-optimization-api.md)
+- [Blog: SOC optimization: unlock the power of precision-driven security management](https://aka.ms/SOC_Optimization)
sentinel Soc Optimization Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/soc-optimization/soc-optimization-api.md
+
+ Title: Use SOC optimizations programmatically
+description: Learn how to use Microsoft Sentinel SOC optimization recommendations programmatically.
+ms.pagetype: security
++++
+ - usx-security
+ Last updated : 06/09/2024
+appliesto:
+ - Microsoft Sentinel in the Microsoft Defender portal
+ - Microsoft Sentinel in the Azure portal
+#customerIntent: As a SOC engineer, I want to learn how to interact with SOC optimization recommendations programmatically via API.
++
+# Use SOC optimizations programmatically (Preview)
+
+Use the Microsoft Sentinel `recommendations` API to programmatically interact with SOC optimization recommendations, helping you to close coverage gaps against specific threats and tighten ingestion rates. You can get details about all current recommendations across your workspaces or a specific SOC optimization recommendation, or you can reevaluate a recommendation if you've made changes in your environment.
+
+For example, use the `recommendations` API to:
+
+- Build custom reports and dashboards. For example, see [Visualize custom SOC optimization data](#visualize-custom-soc-optimization-data).
+- Integrate with third-party tools, such as SOAR and ITSM services
+- Get automated, real-time access to SOC optimization data, triggering evaluations and responding promptly to the suggestions
+
+For customers or MSSPs managing multiple environments, the `recommendations` API provides a scalable way to handle recommendations across multiple workspaces. You can also export data from the API and store it externally for audit, archiving, or tracking trends.
+
+> [!IMPORTANT]
+> [!INCLUDE [unified-soc-preview-without-alert](../includes/unified-soc-preview-without-alert.md)]
+>
+> The `recommendations` API is in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+## Get, update, or reevaluate recommendations
+
+Use the following examples of the `recommendations` API to interact with SOC optimization recommendations programmatically:
+
+- **Get a list of all current SOC optimization recommendations in your workspace**:
+
+ ```rest
+ GET /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{workspaceName}/providers/Microsoft.SecurityInsights/recommendations
+ ```
+
+- **Get a specific recommendation by recommendation ID**:
+
+ ```rest
+ GET /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{workspaceName}/providers/Microsoft.SecurityInsights/recommendations/{recommendationId}
+ ```
+
+ Find a recommendation's ID value by first getting a list of all recommendations in your workspace.
+
+- **Update a recommendation's status to *Active*, *In Progress*, *Completed*, *Dismissed*, or *Reactivate***:
+
+ ```rest
+ PATCH /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{workspaceName}/providers/Microsoft.SecurityInsights/recommendations/{recommendationId}
+ ```
+
+- **Manually trigger an evaluation for a specific recommendation**:
+
+ ```rest
+ POST /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{workspaceName}/providers/Microsoft.SecurityInsights/recommendations/{recommendationId}/triggerEvaluation
+ ```
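These paths are relative to the Azure Resource Manager endpoint. A fully formed request also carries the host, a bearer token, and an `api-version` query parameter, left as a placeholder here because the supported versions are documented only in the Swagger specification:

```rest
GET https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{workspaceName}/providers/Microsoft.SecurityInsights/recommendations?api-version={api-version}
Authorization: Bearer {token}
```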
+
+## Visualize custom SOC optimization data
+
+The **Microsoft Sentinel Optimization Workbook** uses the `recommendations` API to visualize SOC optimization data. Install and customize the workbook in your workspace to create your own custom SOC optimization dashboard.
+
+In the **Microsoft Sentinel Optimization Workbook**, select the **SOC Optimization** tab and expand the items under **Details** to drill down into SOC optimization data. Edit the workbook to modify the data shown as needed for your organization.
+
+For example:
++
+For more information, see:
+
+- [Discover and manage Microsoft Sentinel out-of-the-box content](../sentinel-solutions-deploy.md)
+- [Visualize and monitor your data by using workbooks in Microsoft Sentinel](../monitor-your-data.md).
+
+## Related content
+
+For more information, see:
+
+- [Optimize your security operations](soc-optimization-access.md)
+- [SOC optimization reference of recommendations](soc-optimization-reference.md)
+- Blogs: [Introducing the SOC Optimization API](https://aka.ms/SocOptimizationAPI) | [Unlock the power of precision-driven security management](https://aka.ms/SOC_Optimization)
sentinel Soc Optimization Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/soc-optimization/soc-optimization-reference.md
Title: SOC optimization reference
-description: Learn about the SOC optimization recommendations available to help you optimize your security operations.
-
-ms.pagetype: security
+description: Learn about the Microsoft Sentinel SOC optimization recommendations available to help you optimize your security operations.
- - m365-security
- - tier1
- usx-security Last updated 06/09/2024
The following table lists the available threat-based SOC optimization recommenda
|There are no existing detections or data sources. | Connect detections and data sources or install a solution. |
+## Related content
+
+- [Use SOC optimizations programmatically (Preview)](soc-optimization-api.md)
+- [Blog: SOC optimization: unlock the power of precision-driven security management](https://aka.ms/SOC_Optimization)
+ ## Next step - [Access SOC optimization](soc-optimization-access.md)
sentinel Surface Custom Details In Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/surface-custom-details-in-alerts.md
The procedure detailed below is part of the analytics rule creation wizard. It's
> [!NOTE] > > **Service limits**
- > - You can define **up to 20 custom details** in a single analytics rule.
+ > - You can define **up to 20 custom details** in a single analytics rule. Each custom detail can contain **up to 50 values**.
>
- > - The combined size limit for all custom details and [alert details](customize-alert-details.md), collectively, is **64 KB**.
+ > - The combined size limit for all custom details and their values in a single alert is **2 KB**. Values in excess of this limit are dropped.
## Next steps
sentinel Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/whats-new.md
The listed features were released in the last three months. For information abou
## June 2024
+- [Codeless Connector Platform now generally available](#codeless-connector-platform-now-generally-available)
- [Advanced threat indicator search capability available](#advanced-threat-indicator-search-capability-available)
+### Codeless Connector Platform now generally available
+
+The Codeless Connector Platform (CCP) is now generally available (GA). Check out the [announcement blog post](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/what-s-new-create-your-own-codeless-data-connector/ba-p/4174439).
+
+For more information on the CCP enhancements and capabilities, see [Create a codeless connector for Microsoft Sentinel](create-codeless-connector.md).
+ ### Advanced threat indicator search capability available Threat intelligence search and filtering capabilities have been enhanced, and the experience now has parity across the Microsoft Sentinel and Microsoft Defender portals. Search supports a maximum of 10 conditions, each containing up to 3 subclauses.
service-bus-messaging Service Bus Authentication And Authorization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-authentication-and-authorization.md
This article gives you details on using these two types of security mechanisms.
<a name='azure-active-directory'></a> ## Microsoft Entra ID
-Microsoft Entra integration with Service Bus provides role-based access control (RBAC) to Service Bus resources. You can use Azure RBAC to grant permissions to a security principal, which can be a user, a group, an application service principal, or a managed identity. Microsoft Entra authenticates the security principal and returns an OAuth 2.0 token. This token can be used to authorize a request to access a Service Bus resource (queue, topic, and so on).
+Microsoft Entra integration with Service Bus provides role-based access control (RBAC) to Service Bus resources. You can use Azure RBAC to grant permissions to a security principal, which can be a user, a group, an application service principal, or a managed identity. Microsoft Entra authenticates the security principal and returns an OAuth 2.0 token. This token can be used to authorize a request to access a Service Bus resource (queue, topic, and subscription).
For more information about authenticating with Microsoft Entra ID, see the following articles:
For more information about authenticating with Microsoft Entra ID, see the follo
## Shared access signature [SAS authentication](service-bus-sas.md) enables you to grant a user access to Service Bus resources, with specific rights. SAS authentication in Service Bus involves the configuration of a cryptographic key with associated rights on a Service Bus resource. Clients can then gain access to that resource by presenting a SAS token, which consists of the resource URI being accessed and an expiry signed with the configured key.
-You can configure keys for SAS on a Service Bus namespace. The key applies to all messaging entities within that namespace. You can also configure keys on Service Bus queues and topics. To use SAS, you can configure a shared access authorization rule on a namespace, queue, or topic. This rule consists of the following elements:
+You can configure shared access policies on a Service Bus namespace. The key applies to all messaging entities within that namespace. You can also configure shared access policies on Service Bus queues and topics. To use SAS, you can configure a shared access authorization rule on a namespace, queue, or topic. This rule consists of the following elements:
* **KeyName**: identifies the rule. * **PrimaryKey**: a cryptographic key used to sign/validate SAS tokens.
Authorization rules configured at the namespace level can grant access to all en
To access an entity, the client requires a SAS token generated using a specific shared access authorization rule. The SAS token is generated by computing an HMAC-SHA256, keyed with the cryptographic key associated with the authorization rule, over a resource string that consists of the resource URI to which access is claimed and an expiry.
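To make this concrete, here's a minimal C# sketch of that token generation. It produces a token of the form `SharedAccessSignature sr={URI}&sig={signature}&se={expiry}&skn={keyName}`; treat it as an illustration rather than a drop-in implementation:

```csharp
using System;
using System.Globalization;
using System.Security.Cryptography;
using System.Text;
using System.Web; // HttpUtility; available in modern .NET via System.Web.HttpUtility

public static class SasTokenHelper
{
    // Builds: SharedAccessSignature sr=<uri>&sig=<signature>&se=<expiry>&skn=<keyName>
    public static string CreateSasToken(string resourceUri, string keyName, string key, TimeSpan ttl)
    {
        // Expiry is expressed as seconds since the Unix epoch.
        long expiry = DateTimeOffset.UtcNow.Add(ttl).ToUnixTimeSeconds();

        // The string to sign is the URL-encoded resource URI, a newline, and the expiry.
        string stringToSign = HttpUtility.UrlEncode(resourceUri) + "\n" + expiry;

        // HMAC-SHA256 over the string to sign, keyed with the authorization rule's key.
        using var hmac = new HMACSHA256(Encoding.UTF8.GetBytes(key));
        string signature = Convert.ToBase64String(
            hmac.ComputeHash(Encoding.UTF8.GetBytes(stringToSign)));

        return string.Format(CultureInfo.InvariantCulture,
            "SharedAccessSignature sr={0}&sig={1}&se={2}&skn={3}",
            HttpUtility.UrlEncode(resourceUri), HttpUtility.UrlEncode(signature), expiry, keyName);
    }
}
```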
-SAS authentication support for Service Bus is included in the Azure .NET SDK versions 2.0 and later. SAS includes support for a shared access authorization rule. All APIs that accept a connection string as a parameter include support for SAS connection strings.
+SAS authentication support for Service Bus is included in the Azure .NET SDK versions 2.0 and later. SAS includes support for a shared access authorization rule. All APIs that accept a connection string as a parameter include support for SAS connection strings.
+For detailed information on using SAS for authentication, see [Authentication with Shared Access Signatures](service-bus-sas.md).
-## Next steps
+
+## Related content
For more information about authenticating with Microsoft Entra ID, see the following articles: - [Authentication with managed identities](service-bus-managed-service-identity.md)
service-bus-messaging Service Bus Manage With Ps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-manage-with-ps.md
Microsoft Azure PowerShell is a scripting environment that you can use to contro
You can also manage Service Bus entities using Azure Resource Manager templates. For more information, see the article [Create Service Bus resources using Azure Resource Manager templates](service-bus-resource-manager-overview.md). ## Prerequisites
service-bus-messaging Service Bus Managed Service Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-managed-service-identity.md
Here are the high-level steps to use a managed identity to access a Service Bus
- [Configure managed identities for App Service and Azure Functions](../app-service/overview-managed-identity.md) - [Configure managed identities for Azure resources on a VM](../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md) 1. Assign Azure Service Bus Data Owner, Azure Service Bus Data Sender, or Azure Service Bus Data Receiver role to the managed identity at the appropriate scope (Azure subscription, resource group, Service Bus namespace, or Service Bus queue or topic). For instructions to assign a role to a managed identity, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.yml).
-1. In your application, use the managed identity and the endpoint to Service Bus namespace to connect to the namespace. For example, in .NET, you use the [ServiceBusClient](/dotnet/api/azure.messaging.servicebus.servicebusclient.-ctor#azure-messaging-servicebus-servicebusclient-ctor(system-string-azure-core-tokencredential)) constructor that takes `TokenCredential` and `fullyQualifiedNamespace` (a string, for example: `cotosons.servicebus.windows.net`) parameters to connect to Service Bus using the managed identity. You pass in [DefaultAzureCredential](/dotnet/api/azure.identity.defaultazurecredential), which derives from `TokenCredential` and uses the managed identity.
+1. In your application, use the managed identity and the endpoint of the Service Bus namespace to connect to the namespace.
+
+    For example, in .NET, you use the [ServiceBusClient](/dotnet/api/azure.messaging.servicebus.servicebusclient.-ctor#azure-messaging-servicebus-servicebusclient-ctor(system-string-azure-core-tokencredential)) constructor that takes `TokenCredential` and `fullyQualifiedNamespace` (a string, for example: `cotosons.servicebus.windows.net`) parameters to connect to Service Bus using the managed identity. You pass in [DefaultAzureCredential](/dotnet/api/azure.identity.defaultazurecredential), which derives from `TokenCredential` and uses the managed identity. In `DefaultAzureCredentialOptions`, set `ManagedIdentityClientId` to the client ID of the user-assigned managed identity.
+
+    ```csharp
+    // Fully qualified namespace and the client ID of the user-assigned managed identity.
+    string fullyQualifiedNamespace = "<your Namespace>.servicebus.windows.net";
+    string userAssignedClientId = "<your managed identity client ID>";
+
+    // DefaultAzureCredential uses the managed identity identified by ManagedIdentityClientId.
+    var credential = new DefaultAzureCredential(
+        new DefaultAzureCredentialOptions
+        {
+            ManagedIdentityClientId = userAssignedClientId
+        });
+
+    var sbusClient = new ServiceBusClient(fullyQualifiedNamespace, credential);
+    ```
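+
+    Once the client is created, you use it like any other `ServiceBusClient`. For example, a quick send to a queue (the queue name is a placeholder):
+
+    ```csharp
+    // Send a test message to a queue in the namespace.
+    ServiceBusSender sender = sbusClient.CreateSender("<your queue name>");
+    await sender.SendMessageAsync(new ServiceBusMessage("Hello, Service Bus!"));
+    ```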
> [!IMPORTANT] > You can disable local or SAS key authentication for a Service Bus namespace and allow only Microsoft Entra authentication. For step-by-step instructions, see [Disable local authentication](disable-local-authentication.md).
service-bus-messaging Service Bus Resource Manager Namespace Auth Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-resource-manager-namespace-auth-rule.md
For the complete template, see the [Service Bus authorization rule template][Ser
> > To check for the latest templates, visit the [Azure Quickstart Templates][Azure Quickstart Templates] gallery and search for **Service Bus**. ## What will you deploy?
service-bus-messaging Service Bus Resource Manager Namespace Queue Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-resource-manager-namespace-queue-bicep.md
This article shows how to use a Bicep file that creates a Service Bus namespace and a queue within that namespace. The article explains how to specify which resources are deployed and how to define parameters that are specified when the deployment is executed. You can use this Bicep file for your own deployments, or customize it to meet your requirements. ## Prerequisites
service-bus-messaging Service Bus Resource Manager Namespace Queue https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-resource-manager-namespace-queue.md
This article shows how to use an Azure Resource Manager template (ARM template) that creates a Service Bus namespace and a queue within that namespace. The article explains how to specify which resources are deployed and how to define parameters that are specified when the deployment is executed. You can use this template for your own deployments, or customize it to meet your requirements. If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
service-bus-messaging Service Bus Resource Manager Namespace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-resource-manager-namespace.md
The following templates are also available for creating Service Bus namespaces:
* [Create a Service Bus namespace with queue and authorization rule](./service-bus-resource-manager-namespace-auth-rule.md) * [Create a Service Bus namespace with topic, subscription, and rule](./service-bus-resource-manager-namespace-topic-with-rule.md) If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
service-bus-messaging Service Bus Resource Manager Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-resource-manager-overview.md
Azure Resource Manager templates help you define the resources to deploy for a s
> [!NOTE] > The examples in this article show how to use Azure Resource Manager to create a Service Bus namespace and messaging entity (queue). For other template examples, visit the [Azure Quickstart Templates gallery][Azure Quickstart Templates gallery] and search for **Service Bus**. ## Service Bus Resource Manager templates
service-connector Quickstart Cli Aks Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/quickstart-cli-aks-connection.md
This quickstart shows you how to connect Azure Kubernetes Service (AKS) to other Cloud resources using Azure CLI and Service Connector. Service Connector lets you quickly connect compute services to cloud services, while managing your connection's authentication and networking settings. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
service-connector Quickstart Cli App Service Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/quickstart-cli-app-service-connection.md
This quickstart describes the steps for creating a service connection in Azure App Service with the Azure CLI. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
service-connector Quickstart Cli Container Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/quickstart-cli-container-apps.md
This quickstart shows you how to connect Azure Container Apps to other Cloud res
## Prerequisites -- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
- At least one application deployed to Container Apps in a [region supported by Service Connector](./concept-region-support.md). If you don't have one, [create and deploy a container to Container Apps](../container-apps/quickstart-portal.md).
service-connector Quickstart Cli Functions Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/quickstart-cli-functions-connection.md
This quickstart shows you how to connect Azure Functions to other Cloud resources using Azure CLI and Service Connector. Service Connector lets you quickly connect compute services to cloud services, while managing your connection's authentication and networking settings. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
service-connector Quickstart Cli Spring Cloud Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/quickstart-cli-spring-cloud-connection.md
Service Connector lets you quickly connect compute services to cloud services, w
## Prerequisites -- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
- At least one application hosted by Azure Spring Apps in a [region supported by Service Connector](./concept-region-support.md). If you don't have one, [deploy your first application to Azure Spring Apps](../spring-apps/enterprise/quickstart.md).
service-connector Tutorial Java Jboss Connect Managed Identity Mysql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/tutorial-java-jboss-connect-managed-identity-mysql-database.md
> * Configure a Spring Boot web application to use Microsoft Entra authentication with MySQL Database. > * Connect to MySQL Database with Managed Identity using Service Connector. ## Prerequisites
service-fabric Create Load Balancer Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/create-load-balancer-rule.md
The load balancer deployed with your Azure Service Fabric cluster directs traffi
When you deployed your Service Fabric cluster to Azure, a load balancer was automatically created for you. If you do not have a load balancer, see [Configure an Internet-facing load balancer](../load-balancer/quickstart-load-balancer-standard-public-portal.md). ## Configure service fabric
service-fabric Quickstart Cluster Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/quickstart-cluster-bicep.md
Azure Service Fabric is a distributed systems platform that makes it easy to package, deploy, and manage scalable and reliable microservices and containers. A Service Fabric *cluster* is a network-connected set of virtual machines into which your microservices are deployed and managed. This article describes how to deploy a Service Fabric test cluster in Azure using Bicep. This five-node Windows cluster is secured with a self-signed certificate and thus only intended for instructional purposes (rather than production workloads). We'll use Azure PowerShell to deploy the Bicep file.
service-fabric Quickstart Cluster Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/quickstart-cluster-template.md
Last updated 07/11/2022
Azure Service Fabric is a distributed systems platform that makes it easy to package, deploy, and manage scalable and reliable microservices and containers. A Service Fabric *cluster* is a network-connected set of virtual machines into which your microservices are deployed and managed. This article describes how to deploy a Service Fabric test cluster in Azure using an Azure Resource Manager template (ARM template). This five-node Windows cluster is secured with a self-signed certificate and thus only intended for instructional purposes (rather than production workloads). We'll use Azure PowerShell to deploy the template. In addition to Azure PowerShell, you can also use the Azure portal, Azure CLI, and REST API. To learn other deployment methods, see [Deploy templates](../azure-resource-manager/templates/deploy-portal.md).
service-fabric Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/release-notes.md
We're excited to announce that the 10.0 release of the Service Fabric runtime ha
|||| | September 09, 2023 | Azure Service Fabric 10.0 Release | [Release notes](https://github.com/microsoft/service-fabric/blob/master/release_notes/Service_Fabric_ReleaseNotes_10.md) | | November 1, 2023 | Azure Service Fabric 10.0 First Refresh Release | [Release notes](https://github.com/microsoft/service-fabric/blob/master/release_notes/Service_Fabric_ReleaseNotes_100CU1.md) |
-| April 1, 2024 | Azure Service Fabric 10.1 Third Refresh Release | [Release notes](https://github.com/microsoft/service-fabric/blob/master/release_notes/Service_Fabric_ReleaseNotes_100CU3.md) |
+| April 1, 2024 | Azure Service Fabric 10.0 Third Refresh Release | [Release notes](https://github.com/microsoft/service-fabric/blob/master/release_notes/Service_Fabric_ReleaseNotes_100CU3.md) |
| June 15, 2024 | Azure Service Fabric 10.0 Fourth Refresh Release | [Release notes](https://github.com/microsoft/service-fabric/blob/master/release_notes/Service_Fabric_ReleaseNotes_100CU4.md) | ## Service Fabric 9.1
service-fabric Service Fabric Powershell Add Application Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/scripts/service-fabric-powershell-add-application-certificate.md
This sample script walks through how to create a certificate in Key Vault and then deploy it to one of the virtual machine scale sets your cluster runs on. This scenario does not use Service Fabric directly, but rather depends on Key Vault and on virtual machine scale sets. If needed, install the Azure PowerShell using the instructions found in the [Azure PowerShell guide](/powershell/azure/) and then run `Connect-AzAccount` to create a connection with Azure.
service-fabric Service Fabric Powershell Add Nsg Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/scripts/service-fabric-powershell-add-nsg-rule.md
This sample script creates a network security group rule to allow inbound traffic on port 8081. The script gets the network security group, creates a new network security configuration rule, and updates the network security group. Customize the parameters as needed. If needed, install the Azure PowerShell using the instructions found in the [Azure PowerShell guide](/powershell/azure/).
service-fabric Service Fabric Powershell Change Rdp Port Range https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/scripts/service-fabric-powershell-change-rdp-port-range.md
Last updated 03/19/2018
This sample script changes the RDP port range values on the cluster node VMs after the cluster has been deployed. Azure PowerShell is used so that the underlying VMs do not cycle. The script gets the `Microsoft.Network/loadBalancers` resource in the cluster's resource group and updates the `inboundNatPools.frontendPortRangeStart` and `inboundNatPools.frontendPortRangeEnd` values. Customize the parameters as needed. If needed, install the Azure PowerShell using the instructions found in the [Azure PowerShell guide](/powershell/azure/).
service-fabric Service Fabric Powershell Change Rdp User And Pw https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/scripts/service-fabric-powershell-change-rdp-user-and-pw.md
Each [node type](../service-fabric-cluster-nodetypes.md) in a Service Fabric cluster is a virtual machine scale set. This sample script updates the admin username and password for the cluster virtual machines in a specific node type. Add the VMAccessAgent extension to the scale set, because the admin password is not a modifiable scale set property. The username and password changes apply to all nodes in the scale set. Customize the parameters as needed. If needed, install the Azure PowerShell using the instructions found in the [Azure PowerShell guide](/powershell/azure/).
service-fabric Service Fabric Powershell Create Secure Cluster Cert https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/scripts/service-fabric-powershell-create-secure-cluster-cert.md
This sample script creates a five-node Service Fabric cluster secured with an X.509 certificate. The command creates a self-signed certificate and uploads it to a new key vault. The certificate is also copied to a local directory. Set the *-OS* parameter to choose the version of Windows or Linux that runs on the cluster nodes. Customize the parameters as needed. If needed, install the Azure PowerShell using the instructions found in the [Azure PowerShell guide](/powershell/azure/) and then run `Connect-AzAccount` to create a connection with Azure.
service-fabric Service Fabric Powershell Open Port In Load Balancer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/scripts/service-fabric-powershell-open-port-in-load-balancer.md
A Service Fabric application running in Azure sits behind the Azure load balancer. This sample script opens a port in an Azure load balancer so that a Service Fabric application can communicate with external clients. Customize the parameters as needed. If your cluster is in a network security group, also [add an inbound network security group rule](service-fabric-powershell-add-nsg-rule.md) to allow inbound traffic. If needed, install the Service Fabric PowerShell module with the [Service Fabric SDK](../service-fabric-get-started.md).
service-fabric Service Fabric Best Practices Infrastructure As Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-best-practices-infrastructure-as-code.md
In a production scenario, create Azure Service Fabric clusters using Resource Ma
Sample Resource Manager templates are available for Windows and Linux in the [Azure samples on GitHub](https://github.com/Azure-Samples/service-fabric-cluster-templates). These templates can be used as a starting point for your cluster template. Download `azuredeploy.json` and `azuredeploy.parameters.json` and edit them to meet your custom requirements. To deploy the `azuredeploy.json` and `azuredeploy.parameters.json` templates you downloaded above, use the following Azure CLI commands:
service-fabric Service Fabric Cluster Change Cert Thumbprint To Cn https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-change-cert-thumbprint-to-cn.md
The signature of a certificate (commonly known as a thumbprint) is unique. A clu
Converting an Azure Service Fabric cluster's certificate declarations from thumbprint-based to declarations based on the certificate's subject common name (CN) simplifies management considerably. In particular, rolling over a certificate no longer requires a cluster upgrade. This article describes how to convert an existing cluster to CN-based declarations without downtime. ## Move to certificate authority-signed certificates
service-fabric Service Fabric Cluster Config Upgrade Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-config-upgrade-azure.md
This article describes how to customize the various fabric settings for your Ser
> ## Customize cluster settings using Resource Manager templates Azure clusters can be configured through the JSON Resource Manager template. To learn more about the different settings, see [Configuration settings for clusters](service-fabric-cluster-fabric-settings.md). As an example, the steps below show how to add a new setting *MaxDiskQuotaInMB* to the *Diagnostics* section using Azure Resource Explorer.
service-fabric Service Fabric Cluster Creation Create Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-creation-create-template.md
Cluster security is configured when the cluster is first set up and cannot be ch
Before deploying a production cluster to run production workloads, be sure to first read the [Production readiness checklist](service-fabric-production-readiness-checklist.md). ## Create the Resource Manager template Sample Resource Manager templates are available in the [Azure samples on GitHub](https://github.com/Azure-Samples/service-fabric-cluster-templates). These templates can be used as a starting point for your cluster template.
service-fabric Service Fabric Cluster Creation Via Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-creation-via-arm.md
The type of security chosen to secure the cluster (i.e.: Windows identity, X509
If you are creating a production cluster to run production workloads, we recommend you first read through the [production readiness checklist](service-fabric-production-readiness-checklist.md). ## Prerequisites In this article, use the Service Fabric RM PowerShell or Azure CLI modules to deploy a cluster:
service-fabric Service Fabric Cluster Programmatic Scaling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-programmatic-scaling.md
Last updated 07/14/2022
Service Fabric clusters running in Azure are built on top of virtual machine scale sets. [Cluster scaling](./service-fabric-cluster-scale-in-out.md) describes how Service Fabric clusters can be scaled either manually or with auto-scale rules. This article describes how to manage credentials and scale a cluster in or out using the fluent Azure compute SDK, which is a more advanced scenario. For an overview, read [programmatic methods of coordinating Azure scaling operations](service-fabric-cluster-scaling.md#programmatic-scaling). ## Manage credentials One challenge of writing a service to handle scaling is that the service must be able to access virtual machine scale set resources without an interactive login. Accessing the Service Fabric cluster is easy if the scaling service is modifying its own Service Fabric application, but credentials are needed to access the scale set. To sign in, you can use a [service principal](/cli/azure/create-an-azure-service-principal-azure-cli) created with the [Azure CLI](https://github.com/azure/azure-cli).
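As a rough sketch of the core idea (the article itself uses the fluent Azure compute SDK), here's how adding one node to a node type's backing scale set might look with the newer Azure.ResourceManager libraries; the resource names and IDs are placeholders:

```csharp
using System.Threading.Tasks;
using Azure;
using Azure.Identity;
using Azure.ResourceManager;
using Azure.ResourceManager.Compute;
using Azure.ResourceManager.Compute.Models;

class ScaleOutSketch
{
    static async Task Main()
    {
        // Signs in with whatever credential is available (service principal, managed identity, CLI, ...).
        var armClient = new ArmClient(new DefaultAzureCredential());

        // Resource ID of the scale set that backs the Service Fabric node type (placeholders).
        var vmssId = VirtualMachineScaleSetResource.CreateResourceIdentifier(
            "<subscription-id>", "<resource-group>", "<scale-set-name>");
        Response<VirtualMachineScaleSetResource> response =
            await armClient.GetVirtualMachineScaleSetResource(vmssId).GetAsync();
        VirtualMachineScaleSetResource vmss = response.Value;

        // Scale out by one node by incrementing the scale set capacity.
        var patch = new VirtualMachineScaleSetPatch
        {
            Sku = new ComputeSku { Capacity = (vmss.Data.Sku.Capacity ?? 0) + 1 }
        };
        await vmss.UpdateAsync(WaitUntil.Completed, patch);
    }
}
```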
service-fabric Service Fabric Cluster Rollover Cert Cn https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-rollover-cert-cn.md
Last updated 07/14/2022
# Manually roll over a Service Fabric cluster certificate When a Service Fabric cluster certificate is close to expiring, you need to update the certificate. Certificate rollover is simple if the cluster was [set up to use certificates based on common name](service-fabric-cluster-change-cert-thumbprint-to-cn.md) (instead of thumbprint). Get a new certificate from a certificate authority with a new expiration date. Self-signed certificates are not supported for production Service Fabric clusters, including certificates generated during the Azure portal cluster creation workflow. The new certificate must have the same common name as the older certificate. When more than one valid certificate is installed on the host, the Service Fabric cluster automatically uses the declared certificate with the later expiration date. A best practice is to use a Resource Manager template to provision Azure resources. For non-production environments, the following script can be used to upload a new certificate to a key vault and then install the certificate on the virtual machine scale set:
service-fabric Service Fabric Cluster Scale In Out https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-scale-in-out.md
Last updated 07/14/2022
Scaling compute resources to support your application workload requires intentional planning, nearly always takes longer than an hour to complete in a production environment, and requires you to understand your workload and business context. If you have never done this activity before, it's recommended that you start by reading and understanding [Service Fabric cluster capacity planning considerations](service-fabric-cluster-capacity.md) before continuing with the remainder of this document. This recommendation helps you avoid unintended live-site issues; it's also recommended that you successfully test any operations you decide to perform against a non-production environment. At any time, you can [report production issues or request paid support for Azure](service-fabric-support.md#create-an-azure-support-request). For engineers who have the appropriate context and are allocated to perform these operations, this article describes scaling operations, but you must decide which operations are appropriate for your use case: what resources to scale (CPU, storage, memory), what direction to scale (vertically or horizontally), and what operations to perform (Resource Manager template deployment, portal, PowerShell/CLI). ## Scale a Service Fabric cluster in or out using auto-scale rules or manually Virtual machine scale sets are an Azure compute resource that you can use to deploy and manage a collection of virtual machines as a set. Every node type that is defined in a Service Fabric cluster is set up as a separate virtual machine scale set. Each node type can then be scaled in or out independently, have different sets of ports open, and can have different capacity metrics. Read more about it in the [Service Fabric node types](service-fabric-cluster-nodetypes.md) document. Since the Service Fabric node types in your cluster are made of virtual machine scale sets at the backend, you need to set up auto-scale rules for each node type/virtual machine scale set.
service-fabric Service Fabric Cluster Security Update Certs Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-security-update-certs-azure.md
Service fabric lets you specify two cluster certificates, a primary and a second
> ## Add a secondary cluster certificate using the portal A secondary cluster certificate can't be added through the Azure portal; use [Azure Resource Manager](#add-a-secondary-certificate-using-azure-resource-manager).
service-fabric Service Fabric Common Questions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-common-questions.md
Last updated 07/14/2022
There are many commonly asked questions about what Service Fabric can do and how it should be used. This document covers many of those common questions and their answers. ## Cluster setup and management
service-fabric Service Fabric Create Cluster Using Cert Cn https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-create-cluster-using-cert-cn.md
Last updated 07/14/2022
No two certificates can have the same thumbprint, which makes cluster certificate rollover or management difficult. Multiple certificates, however, can have the same common name or subject. A cluster using certificate common names makes certificate management much simpler. This article describes how to deploy a Service Fabric cluster to use the certificate common name instead of the certificate thumbprint. ## Get a certificate First, get a certificate from a [certificate authority (CA)](https://wikipedia.org/wiki/Certificate_authority). The common name of the certificate should be for the custom domain you own, bought from a domain registrar; for example, "azureservicefabricbestpractices.com". Users who are not Microsoft employees cannot provision certificates for Microsoft domains, so you cannot use the DNS names of your load balancer or Traffic Manager as common names for your certificate, and you need to provision an [Azure DNS zone](../dns/dns-delegate-domain-azure-dns.md) if you want your custom domain to be resolvable in Azure. You also want to declare the custom domain you own as your cluster's "managementEndpoint" if you want the portal to reflect the custom domain alias for your cluster.
service-fabric Service Fabric Diagnostics Common Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-diagnostics-common-scenarios.md
Last updated 07/14/2022
This article illustrates common scenarios users have encountered in the area of monitoring and diagnostics with Service Fabric. The scenarios presented cover all three layers of Service Fabric: application, cluster, and infrastructure. Each solution uses Application Insights and Azure Monitor logs, which are Azure monitoring tools, to complete each scenario. The steps in each solution give users an introduction on how to use Application Insights and Azure Monitor logs in the context of Service Fabric. ## Prerequisites and Recommendations
service-fabric Service Fabric Diagnostics Event Aggregation Wad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-diagnostics-event-aggregation-wad.md
When you're running an Azure Service Fabric cluster, it's a good idea to collect
One way to upload and collect logs is to use the Windows Azure Diagnostics (WAD) extension, which uploads logs to Azure Storage, and also has the option to send logs to Azure Application Insights or Event Hubs. You can also use an external process to read the events from storage and place them in an analysis platform product, such as [Azure Monitor logs](./service-fabric-diagnostics-oms-setup.md) or another log-parsing solution. ## Prerequisites The following tools are used in this article:
service-fabric Service Fabric Diagnostics Event Analysis Oms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-diagnostics-event-analysis-oms.md
Last updated 07/14/2022
* How do I know when a node goes down? * How do I know if my application's services have started or stopped? ## Overview of the Log Analytics workspace
service-fabric Service Fabric Diagnostics Oms Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-diagnostics-oms-agent.md
This article covers the steps to add the Log Analytics agent as a virtual machin
> [!NOTE] > This article assumes that you have an Azure Log Analytics workspace already set up. If you do not, head over to [Set up Azure Monitor logs](service-fabric-diagnostics-oms-setup.md). ## Add the agent extension via Azure CLI
service-fabric Service Fabric Diagnostics Oms Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-diagnostics-oms-containers.md
Last updated 07/14/2022
This article covers the steps required to set up the Azure Monitor logs container monitoring solution to view container events. To set up your cluster to collect container events, see this [step-by-step tutorial](service-fabric-tutorial-monitoring-wincontainers.md). ## Set up the container monitoring solution
service-fabric Service Fabric Diagnostics Oms Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-diagnostics-oms-setup.md
Azure Monitor logs is our recommendation to monitor cluster level events. You ca
> [!NOTE] > To set up Azure Monitor logs to monitor your cluster, you need to have diagnostics enabled to view cluster-level or platform-level events. Refer to [how to set up diagnostics in Windows clusters](service-fabric-diagnostics-event-aggregation-wad.md) and [how to set up diagnostics in Linux clusters](service-fabric-diagnostics-oms-syslog.md) for more ## Deploy a Log Analytics workspace by using Azure Marketplace
service-fabric Service Fabric Diagnostics Oms Syslog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-diagnostics-oms-syslog.md
Last updated 07/14/2022
Service Fabric exposes a set of platform events to inform you of important activity in your cluster. The full list of events that are exposed is available [here](service-fabric-diagnostics-event-generation-operational.md). There are a variety of ways through which these events can be consumed. In this article, we discuss how to configure Service Fabric to write these events to Syslog. ## Introduction
service-fabric Service Fabric Diagnostics Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-diagnostics-overview.md
Last updated 07/14/2022
This article provides an overview of monitoring and diagnostics for Azure Service Fabric. Monitoring and diagnostics are critical to developing, testing, and deploying workloads in any cloud environment. For example, you can track how your applications are used, the actions taken by the Service Fabric platform, your resource utilization with performance counters, and the overall health of your cluster. You can use this information to diagnose and correct issues, and prevent them from occurring in the future. The next few sections will briefly explain each area of Service Fabric monitoring to consider for production workloads. ## Application monitoring Application monitoring tracks how features and components of your application are being used. You want to monitor your applications to make sure issues that impact users are caught. The responsibility of application monitoring is on the users developing an application and its services since it is unique to the business logic of your application. Monitoring your applications can be useful in the following scenarios:
service-fabric Service Fabric Diagnostics Perf Wad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-diagnostics-perf-wad.md
This document covers steps required to set up collection of performance counters
> The WAD extension should be deployed on your cluster for these steps to work for you. If it is not set up, head over to [Event aggregation and collection using Windows Azure Diagnostics](service-fabric-diagnostics-event-aggregation-wad.md). ## Collect performance counters via the WadCfg
service-fabric Service Fabric Enable Azure Disk Encryption Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-enable-azure-disk-encryption-linux.md
The guide covers the following topics:
## Prerequisites
service-fabric Service Fabric Enable Azure Disk Encryption Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-enable-azure-disk-encryption-windows.md
The guide covers the following topics:
* Steps to be followed before enabling disk encryption on Service Fabric cluster nodes in Windows. * Steps to be followed to enable disk encryption on Service Fabric cluster nodes in Windows. ## Prerequisites
service-fabric Service Fabric Get Started Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-get-started-containers.md
Running an existing application in a Windows container on a Service Fabric clust
> This article applies to a Windows development environment. The Service Fabric cluster runtime and the Docker runtime must be running on the same OS. You cannot run Windows containers on a Linux cluster. ## Prerequisites
service-fabric Service Fabric Host App In A Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-host-app-in-a-container.md
In this tutorial, you learn how to:
> * Create an Azure container registry > * Deploy a Service Fabric application to Azure ## Prerequisites
service-fabric Service Fabric Patterns Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-patterns-networking.md
Service Fabric is unique from other networking features in one aspect. The [Azur
If port 19080 is not accessible from the Service Fabric resource provider, a message like *Nodes Not Found* appears in the portal, and your node and application list appears empty. If you want to see your cluster in the Azure portal, your load balancer must expose a public IP address, and your network security group must allow incoming port 19080 traffic. If your setup does not meet these requirements, the Azure portal does not display the status of your cluster. ## Templates
service-fabric Service Fabric Quickstart Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-quickstart-containers.md
In this quickstart you learn how to:
* Deploy the container application to Azure ## Prerequisites
service-fabric Service Fabric Tutorial Create Vnet And Windows Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-tutorial-create-vnet-and-windows-cluster.md
In this tutorial series you learn how to:
> * [Delete a cluster](service-fabric-tutorial-delete-cluster.md) ## Prerequisites
service-fabric Service Fabric Tutorial Delete Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-tutorial-delete-cluster.md
In this tutorial series you learn how to:
> * Delete a cluster ## Prerequisites
service-fabric Service Fabric Tutorial Deploy Api Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-tutorial-deploy-api-management.md
Deploying Azure API Management with Service Fabric is an advanced scenario. API
This article shows you how to set up [Azure API Management](../api-management/api-management-key-concepts.md) with Service Fabric to route traffic to a back-end service in Service Fabric. When you're finished, you'll have deployed API Management to a virtual network and configured an API operation to send traffic to back-end stateless services. To learn more about Azure API Management scenarios with Service Fabric, see the [overview](service-fabric-api-management-overview.md) article. ## Availability
service-fabric Service Fabric Tutorial Dotnet App Enable Https Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-tutorial-dotnet-app-enable-https-endpoint.md
The tutorial series shows you how to:
* [Configure CI/CD by using Azure Pipelines](service-fabric-tutorial-deploy-app-with-cicd-vsts.md) * [Set up monitoring and diagnostics for the application](service-fabric-tutorial-monitoring-aspnet.md) ## Prerequisites
service-fabric Service Fabric Tutorial Java Jenkins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-tutorial-java-jenkins.md
In this tutorial series you learn how to:
You can set up Jenkins either inside or outside a Service Fabric cluster. The following instructions show how to set it up outside a cluster using a provided Docker image. However, a preconfigured Jenkins build environment can also be used. The following container image comes installed with the Service Fabric plugin and is ready for use with Service Fabric immediately. 1. Pull the Service Fabric Jenkins container image: `docker pull rapatchi/jenkins:v10`. This image comes with the Service Fabric Jenkins plugin pre-installed.
service-fabric Service Fabric Tutorial Monitor Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-tutorial-monitor-cluster.md
In this tutorial series you learn how to:
> * [Delete a cluster](service-fabric-tutorial-delete-cluster.md) ## Prerequisites
service-fabric Service Fabric Tutorial Monitoring Wincontainers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-tutorial-monitoring-wincontainers.md
In this tutorial, you learn how to:
> * Use a Log Analytics workspace to view and query logs from your containers and nodes > * Configure the Log Analytics agent to pick up container and node metrics ## Prerequisites
service-fabric Service Fabric Tutorial Scale Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-tutorial-scale-cluster.md
In this tutorial series you learn how to:
> * [Delete a cluster](service-fabric-tutorial-delete-cluster.md) ## Prerequisites
service-fabric Service Fabric Tutorial Upgrade Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-tutorial-upgrade-cluster.md
In this tutorial series you learn how to:
> * [Delete a cluster](service-fabric-tutorial-delete-cluster.md) ## Prerequisites
service-health Alerts Activity Log Service Notifications Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-health/alerts-activity-log-service-notifications-arm.md
This article shows you how to set up activity log alerts for service health notifications by using an Azure Resource Manager template (ARM template). Service health notifications are stored in the [Azure activity log](../azure-monitor/essentials/platform-logs-overview.md). Given the possibly large volume of information stored in the activity log, there is a separate user interface to make it easier to view and set up alerts on service health notifications.
service-health Alerts Activity Log Service Notifications Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-health/alerts-activity-log-service-notifications-bicep.md
This article shows you how to set up activity log alerts for service health notifications by using a Bicep file. Service health notifications are stored in the [Azure activity log](../azure-monitor/essentials/platform-logs-overview.md). Given the possibly large volume of information stored in the activity log, there is a separate user interface to make it easier to view and set up alerts on service health notifications.
service-health Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-health/overview.md
Together, these experiences provide you with a comprehensive view into the healt
>[!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RE2OgX6]
service-health Resource Health Alert Arm Template Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-health/resource-health-alert-arm-template-guide.md
This article will show you how to create Resource Health Activity Log Alerts pro
Azure Resource Health keeps you informed about the current and historical health status of your Azure resources. Azure Resource Health alerts can notify you in near real-time when these resources have a change in their health status. Creating Resource Health alerts programmatically allows users to create and customize alerts in bulk. ## Prerequisites
site-recovery Azure To Azure Autoupdate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-autoupdate.md
Azure Site Recovery uses a monthly release cadence to fix any issues and enhance
As mentioned in [Azure-to-Azure disaster recovery architecture](azure-to-azure-architecture.md), the Mobility service is installed on all Azure virtual machines (VMs) that have replication enabled from one Azure region to another. When you use automatic updates, each new release updates the Mobility service extension. ## How automatic updates work
site-recovery Azure To Azure Exclude Disks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-exclude-disks.md
This article describes how to exclude disks when you replicate Azure VMs. You might exclude disks to optimize the consumed replication bandwidth or the target-side resources that those disks use. Currently, this capability is available only through Azure PowerShell. ## Prerequisites
site-recovery Azure To Azure Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-powershell.md
You learn how to:
> Not all scenario capabilities available through the portal may be available through Azure PowerShell. Some of the scenario capabilities not currently supported through Azure PowerShell are: > - The ability to specify that all disks in a virtual machine should be replicated without having to explicitly specify each disk of the virtual machine. ## Prerequisites
site-recovery Delete Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/delete-vault.md
This article describes how to delete a Recovery Services vault for Site Recovery. To delete a vault used in Azure Backup, see [Delete a Backup vault in Azure](../backup/backup-azure-delete-vault.md). ## Before you start Before you can delete a vault you must remove registered servers, and items in the vault. What you need to remove depends on the replication scenarios you've deployed.
+> [!NOTE]
+> Before you delete a Backup protection policy from a vault, you must ensure that:
+> - the policy doesn't have any associated Backup items.
+> - each item previously associated with the policy is associated with some other policy.
## Delete a vault-Azure VM to Azure
site-recovery Hyper V Azure Powershell Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/hyper-v-azure-powershell-resource-manager.md
ms.tool: azure-powershell
This article describes how to use Windows PowerShell, together with Azure Resource Manager, to replicate Hyper-V VMs to Azure. The example used in this article shows you how to replicate a single VM running on a Hyper-V host, to Azure. ## Azure PowerShell
site-recovery Physical Manage Configuration Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/physical-manage-configuration-server.md
You set up an on-premises configuration server when you use the [Azure Site Recovery](site-recovery-overview.md) service for disaster recovery of physical servers to Azure. The configuration server coordinates communications between on-premises machines and Azure, and manages data replication. This article summarizes common tasks for managing the configuration server after it's been deployed. ## Prerequisites
site-recovery Quickstart Create Vault Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/quickstart-create-vault-bicep.md
This quickstart describes how to set up a Recovery Services vault using Bicep. The [Azure Site Recovery](site-recovery-overview.md) service contributes to your business continuity and disaster recovery (BCDR) strategy so your business applications stay online during planned and unplanned outages. Site Recovery manages disaster recovery of on-premises machines and Azure virtual machines (VM), including replication, failover, and recovery. ## Prerequisites
site-recovery Quickstart Create Vault Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/quickstart-create-vault-template.md
continuity and disaster recovery (BCDR) strategy so your business applications s
planned and unplanned outages. Site Recovery manages disaster recovery of on-premises machines and Azure virtual machines (VM), including replication, failover, and recovery. To protect VMware or physical servers, see [Modernized architecture](./physical-server-azure-architecture-modernized.md).
site-recovery Vmware Azure Disaster Recovery Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-disaster-recovery-powershell.md
You learn how to:
> - Perform a failover. Configure failover settings and replication settings for virtual machines. ## Prerequisites
site-recovery Vmware Azure Manage Configuration Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-manage-configuration-server.md
Last updated 08/03/2022
You set up an on-premises configuration server when you use [Azure Site Recovery](site-recovery-overview.md) for disaster recovery of VMware VMs and physical servers to Azure. The configuration server coordinates communications between on-premises VMware and Azure and manages data replication. This article summarizes common tasks for managing the configuration server after it's deployed. ## Update Windows license
site-recovery Vmware Physical Manage Mobility Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-manage-mobility-service.md
You set up mobility agent on your server when you use Azure Site Recovery for di
>[!TIP] >To download the installer for a specific OS/Linux distro, refer to the guidance [here](vmware-physical-mobility-service-overview.md#locate-installer-files). To automatically update from the portal, you do not need to download the installer. [ASR automatically fetches the installer from the configuration server and updates the agent](#update-mobility-service-from-azure-portal). ## Update mobility service from Azure portal
spatial-anchors Get Started Android https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spatial-anchors/quickstarts/get-started-android.md
You'll learn how to:
> * Configure the Spatial Anchors account identifier and account key > * Deploy and run on an Android device ## Prerequisites
spatial-anchors Get Started Hololens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spatial-anchors/quickstarts/get-started-hololens.md
You'll learn how to:
> * Configure the Spatial Anchors account identifier and account key > * Deploy and run on a HoloLens device ## Prerequisites
spatial-anchors Get Started Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spatial-anchors/quickstarts/get-started-ios.md
You'll learn how to:
> * Configure the Spatial Anchors account identifier and account key > * Deploy and run on an iOS device ## Prerequisites
spatial-anchors Get Started Unity Android https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spatial-anchors/quickstarts/get-started-unity-android.md
You'll learn how to:
> * Export the Android Studio project > * Deploy and run on an Android device ## Prerequisites
spatial-anchors Get Started Unity Hololens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spatial-anchors/quickstarts/get-started-unity-hololens.md
You'll learn how to:
- Export the HoloLens Visual Studio project. - Deploy the app and run it on a HoloLens device. ## Prerequisites
spatial-anchors Get Started Unity Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spatial-anchors/quickstarts/get-started-unity-ios.md
You'll learn how to:
> * Export the Xcode project > * Deploy and run on an iOS device ## Prerequisites
spatial-anchors Get Started Xamarin Android https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spatial-anchors/quickstarts/get-started-xamarin-android.md
You'll learn how to:
> * Configure the Spatial Anchors account identifier and account key > * Deploy and run on an Android device ## Prerequisites
spatial-anchors Get Started Xamarin Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spatial-anchors/quickstarts/get-started-xamarin-ios.md
You'll learn how to:
> * Configure the Spatial Anchors account identifier and account key > * Deploy and run on an iOS device ## Prerequisites
spatial-anchors Tutorial Share Anchors Across Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spatial-anchors/tutorials/tutorial-share-anchors-across-devices.md
In this tutorial, you'll learn how to:
> * Configure the AzureSpatialAnchorsLocalSharedDemo scene within the Unity sample from our quickstarts to take advantage of the Sharing Anchors web app. > * Deploy and run the app on one or more devices. [!INCLUDE [Share Anchors Sample Prerequisites](../../../includes/spatial-anchors-share-sample-prereqs.md)]
Go to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>,
### Create a resource group Next to **Resource Group**, select **New**.
spring-apps Application Observability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/basic-standard/application-observability.md
Log Analytics and Application Insights are deeply integrated with Azure Spring A
## Prerequisites -- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
+- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
[!INCLUDE [application-observability-with-basic-standard-plan](includes/application-observability/application-observability-with-basic-standard-plan.md)]
spring-apps How To Dynatrace One Agent Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/basic-standard/how-to-dynatrace-one-agent-monitor.md
Previously updated : 06/07/2022 Last updated : 06/27/2024 ms.devlang: azurecli
spring-apps How To Elastic Apm Java Agent Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/basic-standard/how-to-elastic-apm-java-agent-monitor.md
Previously updated : 06/07/2022 Last updated : 06/27/2024
spring-apps Quickstart Deploy Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/basic-standard/quickstart-deploy-apps.md
Previously updated : 11/15/2021 Last updated : 06/27/2024 zone_pivot_groups: programming-languages-spring-apps
spring-apps Quickstart Integrate Azure Database Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/basic-standard/quickstart-integrate-azure-database-mysql.md
Previously updated : 08/28/2022 Last updated : 06/27/2024
spring-apps Quickstart Logs Metrics Tracing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/basic-standard/quickstart-logs-metrics-tracing.md
Previously updated : 10/12/2021 Last updated : 06/27/2024 zone_pivot_groups: programming-languages-spring-apps
spring-apps Quickstart Provision Service Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/basic-standard/quickstart-provision-service-instance.md
Previously updated : 7/28/2022 Last updated : 06/27/2024 zone_pivot_groups: programming-languages-spring-apps
spring-apps Quickstart Setup Config Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/basic-standard/quickstart-setup-config-server.md
Previously updated : 7/19/2022 Last updated : 06/27/2024 zone_pivot_groups: programming-languages-spring-apps
spring-apps Breaking Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/breaking-changes.md
Previously updated : 05/25/2022 Last updated : 06/27/2024
spring-apps Concept Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/concept-metrics.md
description: Learn how to review metrics in Azure Spring Apps
Previously updated : 09/08/2020 Last updated : 06/27/2024
spring-apps Concept Outbound Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/concept-outbound-type.md
Previously updated : 10/20/2022 Last updated : 06/27/2024
spring-apps Concept Security Controls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/concept-security-controls.md
Previously updated : 04/23/2020 Last updated : 06/27/2024
spring-apps Concept Understand App And Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/concept-understand-app-and-deployment.md
Previously updated : 07/23/2020 Last updated : 06/27/2024
spring-apps Concepts Blue Green Deployment Strategies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/concepts-blue-green-deployment-strategies.md
Previously updated : 11/12/2021 Last updated : 06/27/2024
spring-apps Concepts For Java Memory Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/concepts-for-java-memory-management.md
Previously updated : 07/15/2022 Last updated : 06/27/2024
spring-apps Connect Managed Identity To Azure Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/connect-managed-identity-to-azure-sql.md
Previously updated : 09/26/2022 Last updated : 06/27/2024
Configure your app deployed to Azure Spring Apps to connect to an Azure SQL Data
--app $APP_NAME \ --query '[0].name' \ --output tsv)
-
+ az spring connection list-configuration \ --resource-group $SPRING_APP_RESOURCE_GROUP \ --service $SPRING_APP_SERVICE_NAME \ --app $APP_NAME \
- --connection $CONNECTION_NAME
+ --connection $CONNECTION_NAME
```
spring-apps Diagnostic Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/diagnostic-services.md
description: Learn how to analyze diagnostics data in Azure Spring Apps
Previously updated : 01/06/2020 Last updated : 06/27/2024
To review log entries that are generated by a specific host, run the following q
```sql AppPlatformIngressLogs
-| where TimeGenerated > ago(1h) and Host == "ingress-asc.test.azuremicroservices.io"
+| where TimeGenerated > ago(1h) and Host == "ingress-asc.test.azuremicroservices.io"
| project TimeGenerated, RemoteIP, Host, Request, Status, BodyBytesSent, RequestTime, ReqId, RequestHeaders | sort by TimeGenerated ```
-Use this query to find response `Status`, `RequestTime`, and other properties of this specific host's ingress logs.
+Use this query to find response `Status`, `RequestTime`, and other properties of this specific host's ingress logs.
### Show ingress log entries for a specific requestId
To review log entries for a specific `requestId` value *\<request_ID>*, run the
```sql AppPlatformIngressLogs
-| where TimeGenerated > ago(1h) and ReqId == "<request_ID>"
+| where TimeGenerated > ago(1h) and ReqId == "<request_ID>"
| project TimeGenerated, RemoteIP, Host, Request, Status, BodyBytesSent, RequestTime, ReqId, RequestHeaders | sort by TimeGenerated ```
spring-apps Expose Apps Gateway End To End Tls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/expose-apps-gateway-end-to-end-tls.md
Previously updated : 02/28/2022 Last updated : 06/27/2024 ms.devlang: java # ms.devlang: java, azurecli
spring-apps Expose Apps Gateway Tls Termination https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/expose-apps-gateway-tls-termination.md
Previously updated : 11/09/2021 Last updated : 06/27/2024
spring-apps Github Actions Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/github-actions-key-vault.md
Previously updated : 09/08/2020 Last updated : 06/27/2024
spring-apps How To Access App From Internet Virtual Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-access-app-from-internet-virtual-network.md
Previously updated : 08/09/2022 Last updated : 06/27/2024 ms.devlang: azurecli
az spring app update \
## Use a public URL to access your application from both inside and outside the virtual network
-You can use a public URL to access your application both inside and outside the virtual network. Follow the steps in [Access your application in a private network](./access-app-virtual-network.md) to bind the domain `.private.azuremicroservices.io` to the service runtime Subnet private IP address in your private DNS zone while keeping the **Assign Endpoint** in a disable state. You can then access the app using the **public URL** from both inside and outside the virtual network.
+You can use a public URL to access your application both inside and outside the virtual network. Follow the steps in [Access your application in a private network](./access-app-virtual-network.md) to bind the domain `.private.azuremicroservices.io` to the service runtime Subnet private IP address in your private DNS zone while keeping the **Assign Endpoint** in a disable state. You can then access the app using the **public URL** from both inside and outside the virtual network.
## Secure traffic to the public endpoint
spring-apps How To Application Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-application-insights.md
Previously updated : 06/20/2022 Last updated : 06/27/2024 zone_pivot_groups: spring-apps-tier-selection
spring-apps How To Bind Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-bind-mysql.md
description: Learn how to connect an Azure Database for MySQL instance to your a
Previously updated : 11/09/2022 Last updated : 06/27/2024
spring-apps How To Bind Redis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-bind-redis.md
description: Learn how to connect Azure Cache for Redis to your application in A
Previously updated : 10/31/2019 Last updated : 06/27/2024
spring-apps How To Cicd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-cicd.md
description: Describes how to use the Azure Spring Apps task for Azure Pipelines
Previously updated : 09/13/2021 Last updated : 06/27/2024 zone_pivot_groups: programming-languages-spring-apps
zone_pivot_groups: programming-languages-spring-apps
This article shows you how to use the [Azure Spring Apps task for Azure Pipelines](/azure/devops/pipelines/tasks/deploy/azure-spring-cloud) to deploy applications.
-Continuous integration and continuous delivery tools let you quickly deploy updates to existing applications with minimal effort and risk. Azure DevOps helps you organize and control these key jobs.
+Continuous integration and continuous delivery tools let you quickly deploy updates to existing applications with minimal effort and risk. Azure DevOps helps you organize and control these key jobs.
The following video describes end-to-end automation using tools of your choice, including Azure Pipelines.
steps:
First, use the following steps to set up an existing Azure Spring Apps instance for use with Azure DevOps.
-1. Go to your Azure Spring Apps instance, then create a new app.
+1. Go to your Azure Spring Apps instance, then create a new app.
1. Go to the Azure DevOps portal, then create a new project under your chosen organization. If you don't have an Azure DevOps organization, you can create one for free. 1. Select **Repos**, then import the [Spring Boot demo code](https://github.com/spring-guides/gs-spring-boot) to the repository.
To deploy using a pipeline, follow these steps:
1. Disable **Use Staging Deployment**. 1. Set **Package or folder** to *complete/target/spring-boot-complete-0.0.1-SNAPSHOT.jar*. 1. Select **Add** to add this task to your pipeline.
-
+ Your pipeline settings should match the following image. :::image type="content" source="media/how-to-cicd/pipeline-task-setting.jpg" alt-text="Screenshot of Azure DevOps that shows the New pipeline settings." lightbox="media/how-to-cicd/pipeline-task-setting.jpg":::
spring-apps How To Configure Enterprise Spring Cloud Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-configure-enterprise-spring-cloud-gateway.md
The following list shows the supported add-on configurations for the add-on key
"PodOverrides": { "Containers": [ {
- {
- "Name": "gateway",
- "Lifecycle": {
- "PreStop": {
- "Exec": {
- "Command": [
- "/bin/sh",
- "-c",
- "sleep 20"
- ]
- }
+ "Name": "gateway",
+ "Lifecycle": {
+ "PreStop": {
+ "Exec": {
+ "Command": [
+ "/bin/sh",
+ "-c",
+ "sleep 20"
+ ]
} } }
spring-apps How To Configure Ingress https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-configure-ingress.md
Previously updated : 09/29/2022 Last updated : 06/27/2024
spring-apps How To Configure Palo Alto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-configure-palo-alto.md
Previously updated : 09/17/2021 Last updated : 06/27/2024
spring-apps How To Connect To App Instance For Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-connect-to-app-instance-for-troubleshooting.md
Previously updated : 12/06/2022 Last updated : 06/27/2024
If your app contains only one instance, use the following command to connect to
az spring app connect \ --service <your-service-instance> \ --resource-group <your-resource-group> \
- --name <app-name>
+ --name <app-name>
``` Otherwise, use the following command to specify the instance:
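A sketch of that command, assuming the `--instance` parameter of `az spring app connect` (all values are placeholders):

```azurecli
az spring app connect \
    --service <your-service-instance> \
    --resource-group <your-resource-group> \
    --name <app-name> \
    --instance <app-instance-name>
```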
spring-apps How To Create User Defined Route Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-create-user-defined-route-instance.md
Previously updated : 01/17/2023 Last updated : 06/27/2024
az network vnet subnet update \
--name $ASA_APP_SUBNET_NAME \ --route-table $APP_ROUTE_TABLE_NAME
-az network vnet subnet update
+az network vnet subnet update
--resource-group $RG \ --vnet-name $VNET_NAME \ --name $ASA_SERVICE_RUNTIME_SUBNET_NAME \
export APP_ROUTE_TABLE_RESOURCE_ID=$(az network route-table show \
--resource-group $RG \ --query "id" \ --output tsv)
-
+ az role assignment create \ --role "Owner" \ --scope ${APP_ROUTE_TABLE_RESOURCE_ID} \ --assignee e8de9221-a19c-4c81-b814-fd37c6caf9d2
-
+ export SERVICE_RUNTIME_ROUTE_TABLE_RESOURCE_ID=$(az network route-table show \ --name $SERVICE_RUNTIME_ROUTE_TABLE_NAME \ --resource-group $RG \ --query "id" \ --output tsv)
-
+ az role assignment create \ --role "Owner" \ --scope ${SERVICE_RUNTIME_ROUTE_TABLE_RESOURCE_ID} \
spring-apps How To Custom Persistent Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-custom-persistent-storage.md
description: Learn how to bring your own storage as persistent storages in Azure
Previously updated : 2/18/2022 Last updated : 06/27/2024
Use the following steps to enable your own storage with the Azure CLI.
"uid=0", "gid=0" ],
- "readOnly": false
+ "readOnly": false
} }, {
spring-apps How To Deploy In Azure Virtual Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-deploy-in-azure-virtual-network.md
Previously updated : 07/21/2020 Last updated : 06/27/2024
If you already have a virtual network to host an Azure Spring Apps instance, ski
--resource-group $RESOURCE_GROUP \ --vnet-name $VIRTUAL_NETWORK_NAME \ --address-prefixes 10.1.0.0/24 \
- --name service-runtime-subnet
+ --name service-runtime-subnet
az network vnet subnet create \ --resource-group $RESOURCE_GROUP \ --vnet-name $VIRTUAL_NETWORK_NAME \ --address-prefixes 10.1.1.0/24 \
- --name apps-subnet
+ --name apps-subnet
```
spring-apps How To Deploy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-deploy-powershell.md
The requirements for completing the steps in this article depend on your Azure s
* If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin. > [!IMPORTANT] > While the **Az.SpringCloud** PowerShell module is in preview, you must install it by using
spring-apps How To Deploy With Custom Container Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-deploy-with-custom-container-image.md
Previously updated : 4/28/2022 Last updated : 06/27/2024 # Deploy an application with a custom container image
spring-apps How To Dump Jvm Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-dump-jvm-options.md
Previously updated : 01/21/2022 Last updated : 06/27/2024
To ensure that you can access your files, be sure that the target path of your g
"uid=0", "gid=0" ],
- "readOnly": false
+ "readOnly": false
} }, {
spring-apps How To Enable Ingress To App Tls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-enable-ingress-to-app-tls.md
Previously updated : 04/12/2022 Last updated : 06/27/2024 # Enable ingress-to-app TLS for an application
spring-apps How To Enable System Assigned Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-enable-system-assigned-managed-identity.md
Previously updated : 04/15/2022 Last updated : 06/27/2024 zone_pivot_groups: spring-apps-tier-selection
spring-apps How To Enterprise Service Registry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-enterprise-service-registry.md
Previously updated : 06/17/2022 Last updated : 06/27/2024
You can also set up the application bindings from the Azure portal, as shown in
You can now choose to bind your application to the Service Registry directly when creating a new app by using the following commands: ```azurecli
-az spring app create \
- --resource-group <resource-group> \
- --service <service-name> \
- --name <app-name> \
+az spring app create \
+ --resource-group <resource-group> \
+ --service <service-name> \
+ --name <app-name> \
--bind-service-registry ```
spring-apps How To Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-github-actions.md
Previously updated : 09/08/2020 Last updated : 06/27/2024 zone_pivot_groups: programming-languages-spring-apps
To provision your Azure Spring Apps service instance, run the following commands
az extension add --name spring az group create \ --name <resource-group-name> \
- --location eastus
+ --location eastus
az spring create \ --resource-group <resource-group-name> \
- --name <service-instance-name>
+ --name <service-instance-name>
az spring config-server git set \ --name <service-instance-name> \ --uri https://github.com/Azure-Samples/azure-spring-apps-samples \
jobs:
steps: - name: Checkout GitHub Action uses: actions/checkout@v2
-
+ - name: Set up Java 11 uses: actions/setup-java@v3 with:
spring-apps How To Integrate Azure Load Balancers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-integrate-azure-load-balancers.md
Previously updated : 04/20/2020 Last updated : 06/27/2024
spring-apps How To Intellij Deploy Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-intellij-deploy-apps.md
Previously updated : 06/24/2022 Last updated : 06/27/2024
spring-apps How To Manage User Assigned Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-manage-user-assigned-managed-identities.md
Previously updated : 03/31/2022 Last updated : 06/27/2024 zone_pivot_groups: spring-apps-tier-selection
spring-apps How To Move Across Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-move-across-regions.md
Previously updated : 01/27/2022 Last updated : 06/27/2024
spring-apps How To Outbound Public Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-outbound-public-ip.md
Previously updated : 09/17/2020 Last updated : 06/27/2024
spring-apps How To Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-permissions.md
Previously updated : 09/04/2020 Last updated : 06/27/2024
This procedure defines a role that has permissions to deploy, test, and restart
* **Read : Read Microsoft Azure Spring Apps Build Services** * **Other : Get an Upload URL in Azure Spring Apps**
-
+ (For Enterprise plan only) Under **Microsoft.AppPlatform/Spring/buildServices/agentPools**, select: * **Read : Read Microsoft Azure Spring Apps Agent Pools**
spring-apps How To Scale Manual https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-scale-manual.md
Previously updated : 10/06/2019 Last updated : 06/27/2024
spring-apps How To Setup Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-setup-autoscale.md
Previously updated : 11/03/2021 Last updated : 06/27/2024
spring-apps How To Staging Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-staging-environment.md
description: Learn how to use blue-green deployment with Azure Spring Apps
Previously updated : 01/14/2021 Last updated : 06/27/2024
spring-apps How To Start Stop Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-start-stop-service.md
Previously updated : 11/04/2021 Last updated : 06/27/2024
spring-apps How To Use Application Live View https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-use-application-live-view.md
Previously updated : 12/01/2022 Last updated : 06/27/2024
spring-apps How To Use Dev Tool Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-use-dev-tool-portal.md
Previously updated : 11/28/2022 Last updated : 06/27/2024
spring-apps Monitor App Lifecycle Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/monitor-app-lifecycle-events.md
Previously updated : 08/19/2021 Last updated : 06/27/2024
Azure Spring Apps provides built-in tools to monitor the status and health of yo
## Prerequisites - An Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.-- A deployed Azure Spring Apps service instance and at least one application already created in your service instance. For more information, see [Quickstart: Deploy your first Spring Boot app in Azure Spring Apps](quickstart.md).
+- A deployed Azure Spring Apps service instance and at least one application already created in your service instance. For more information, see [Quickstart: Deploy your first Spring Boot app in Azure Spring Apps](quickstart.md).
## Monitor app lifecycle events triggered by users in Azure Activity logs
When platform maintenance happens, your Azure Spring Apps instance shows a statu
You can set up alerts for app lifecycle events. Service health notifications are also stored in the Azure activity log. The activity log stores a large volume of information, so there's a separate user interface to make it easier to view and set up alerts on service health notifications.
-The following list describes the key steps needed to set up an alert:
+The following list describes the key steps needed to set up an alert:
1. Set up an action group with the actions to take when an alert is triggered. Example action types include sending a voice call, SMS, or email, or triggering various types of automated actions. Different alerts can use the same action group or different action groups, depending on your requirements. 2. Set up alert rules. The alerts use action groups to notify users that an alert for a specific app lifecycle event has been triggered.
spring-apps Monitor Apps By Application Live View https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/monitor-apps-by-application-live-view.md
Previously updated : 12/01/2022 Last updated : 06/27/2024
spring-apps Quickstart Deploy Apps Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/quickstart-deploy-apps-enterprise.md
Previously updated : 05/31/2022 Last updated : 06/27/2024
spring-apps Quickstart Deploy Event Driven App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/quickstart-deploy-event-driven-app.md
This article provides the following options for deploying to Azure Spring Apps:
### [Azure portal](#tab/Azure-portal) -- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
+- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
### [Azure portal + Maven plugin](#tab/Azure-portal-maven-plugin) -- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
+- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
- [Git](https://git-scm.com/downloads). - [Java Development Kit (JDK)](/java/azure/jdk/), version 17. ### [Azure Developer CLI](#tab/Azure-Developer-CLI) -- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
+- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
- [Git](https://git-scm.com/downloads). - [Java Development Kit (JDK)](/java/azure/jdk/), version 17. - [Azure Developer CLI (AZD)](/azure/developer/azure-developer-cli/install-azd), version 1.2.0 or higher.
This article provides the following options for deploying to Azure Spring Apps:
### [Azure portal](#tab/Azure-portal-ent) -- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
+- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
- If you're deploying an Azure Spring Apps Enterprise plan instance for the first time in the target subscription, see the [Requirements](./how-to-enterprise-marketplace-offer.md#requirements) section of [View Azure Spring Apps Enterprise tier offering in Azure Marketplace](./how-to-enterprise-marketplace-offer.md). ### [Azure portal + Maven plugin](#tab/Azure-portal-maven-plugin-ent) -- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
+- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
- If you're deploying an Azure Spring Apps Enterprise plan instance for the first time in the target subscription, see the [Requirements](./how-to-enterprise-marketplace-offer.md#requirements) section of [View Azure Spring Apps Enterprise tier offering in Azure Marketplace](./how-to-enterprise-marketplace-offer.md). - [Git](https://git-scm.com/downloads). - [Java Development Kit (JDK)](/java/azure/jdk/), version 17. ### [Azure CLI](#tab/Azure-CLI) -- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
+- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
- If you're deploying an Azure Spring Apps Enterprise plan instance for the first time in the target subscription, see the [Requirements](./how-to-enterprise-marketplace-offer.md#requirements) section of [View Azure Spring Apps Enterprise tier offering in Azure Marketplace](./how-to-enterprise-marketplace-offer.md). - [Git](https://git-scm.com/downloads). - [Java Development Kit (JDK)](/java/azure/jdk/), version 17.
spring-apps Quickstart Deploy Infrastructure Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/quickstart-deploy-infrastructure-vnet.md
The Enterprise deployment plan includes the following Tanzu components:
* Application Accelerator * Application Live View ## Prerequisites
spring-apps Quickstart Deploy Microservice Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/quickstart-deploy-microservice-apps.md
This article provides the following options for deploying to Azure Spring Apps:
### [Azure portal + Maven plugin](#tab/Azure-portal-maven-plugin) -- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
+- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
- [Git](https://git-scm.com/downloads). - [Java Development Kit (JDK)](/java/azure/jdk/), version 17.
spring-apps Quickstart Fitness Store Azure Openai https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/quickstart-fitness-store-azure-openai.md
Last updated 11/02/2023 + # Quickstart: Integrate Azure OpenAI
spring-apps Quickstart Integrate Azure Database And Redis Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/quickstart-integrate-azure-database-and-redis-enterprise.md
The following steps describe how to provision an Azure Cache for Redis instance
The following instructions describe how to provision an Azure Cache for Redis and an Azure Database for PostgreSQL Flexible Server by using an Azure Resource Manager template (ARM template). You can find the template used in this quickstart in the [fitness store sample GitHub repository](https://github.com/Azure-Samples/acme-fitness-store/blob/HEAD/azure-spring-apps-enterprise/resources/json/deploy/azuredeploy.json).
spring-apps Quickstart Key Vault Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/quickstart-key-vault-enterprise.md
Previously updated : 05/31/2022 Last updated : 06/27/2024
spring-apps Quickstart Sample App Acme Fitness Store Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/quickstart-sample-app-acme-fitness-store-introduction.md
Previously updated : 05/31/2022 Last updated : 06/27/2024
spring-apps Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/quickstart.md
The application code used in this tutorial is a simple app. When you complete th
### [Azure portal](#tab/Azure-portal) -- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
+- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
### [Azure portal + Maven plugin](#tab/Azure-portal-maven-plugin) -- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
+- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
- [Git](https://git-scm.com/downloads). - [Java Development Kit (JDK)](/java/azure/jdk/), version 17. ### [Azure Developer CLI](#tab/Azure-Developer-CLI) -- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
+- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
- [Git](https://git-scm.com/downloads). - [Java Development Kit (JDK)](/java/azure/jdk/), version 17. - [Azure Developer CLI (AZD)](/azure/developer/azure-developer-cli/install-azd), version 1.2.0 or higher.
The application code used in this tutorial is a simple app. When you complete th
### [Azure portal](#tab/Azure-portal-ent) -- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
+- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
- If you're deploying an Azure Spring Apps Enterprise plan instance for the first time in the target subscription, see the [Requirements](./how-to-enterprise-marketplace-offer.md#requirements) section of [Enterprise plan in Azure Marketplace](./how-to-enterprise-marketplace-offer.md). ### [Azure portal + Maven plugin](#tab/Azure-portal-maven-plugin-ent) -- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
+- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
- If you're deploying an Azure Spring Apps Enterprise plan instance for the first time in the target subscription, see the [Requirements](./how-to-enterprise-marketplace-offer.md#requirements) section of [Enterprise plan in Azure Marketplace](./how-to-enterprise-marketplace-offer.md). - [Git](https://git-scm.com/downloads). - [Java Development Kit (JDK)](/java/azure/jdk/), version 17. ### [Azure CLI](#tab/Azure-CLI) -- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
+- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
- If you're deploying an Azure Spring Apps Enterprise plan instance for the first time in the target subscription, see the [Requirements](./how-to-enterprise-marketplace-offer.md#requirements) section of [Enterprise plan in Azure Marketplace](./how-to-enterprise-marketplace-offer.md). - [Git](https://git-scm.com/downloads). - [Java Development Kit (JDK)](/java/azure/jdk/), version 17.
The application code used in this tutorial is a simple app. When you complete th
### [IntelliJ](#tab/IntelliJ) -- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
+- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
- If you're deploying an Azure Spring Apps Enterprise plan instance for the first time in the target subscription, see the [Requirements](./how-to-enterprise-marketplace-offer.md#requirements) section of [Enterprise plan in Azure Marketplace](./how-to-enterprise-marketplace-offer.md). - [Java Development Kit (JDK)](/java/azure/jdk/), version 17. - [IntelliJ IDEA](https://www.jetbrains.com/idea/).
The application code used in this tutorial is a simple app. When you complete th
### [Visual Studio Code](#tab/visual-studio-code) -- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
+- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
- If you're deploying an Azure Spring Apps Enterprise plan instance for the first time in the target subscription, see the [Requirements](./how-to-enterprise-marketplace-offer.md#requirements) section of [Enterprise plan in Azure Marketplace](./how-to-enterprise-marketplace-offer.md). - [Java Development Kit (JDK)](/java/azure/jdk/), version 17. - [Visual Studio Code](https://code.visualstudio.com/).
spring-apps Secure Communications End To End https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/secure-communications-end-to-end.md
description: Describes how to secure communications end-to-end or terminate tran
Previously updated : 08/15/2022 Last updated : 06/27/2024
You need the following three configuration steps to secure communications using
azure: keyvault: uri: ${KEY_VAULT_URI}
-
+ server: ssl: key-alias: ${SERVER_SSL_CERTIFICATE_NAME}
spring-apps Tools To Troubleshoot Memory Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/tools-to-troubleshoot-memory-issues.md
Previously updated : 07/15/2022 Last updated : 06/27/2024
spring-apps Troubleshooting Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/troubleshooting-vnet.md
description: Troubleshooting guide for Azure Spring Apps virtual network.
Previously updated : 09/19/2020 Last updated : 06/27/2024
For more information, see [Access your application in a private network](./acces
## I can't access my application's public endpoint from public network
-Azure Spring Apps supports exposing applications to the internet by using public endpoints. For more information, see [Expose applications on Azure Spring Apps to the internet from a public network](how-to-access-app-from-internet-virtual-network.md).
+Azure Spring Apps supports exposing applications to the internet by using public endpoints. For more information, see [Expose applications on Azure Spring Apps to the internet from a public network](how-to-access-app-from-internet-virtual-network.md).
If you're using a user defined route feature, some features aren't supported because of asymmetric routing. For unsupported features, see the following list:
spring-apps Tutorial Circuit Breaker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/tutorial-circuit-breaker.md
Previously updated : 04/06/2020 Last updated : 06/27/2024
static-web-apps Preview Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/preview-environments.md
Previously updated : 03/29/2022 Last updated : 06/26/2024
Beyond PR-driven temporary environments, you can enable preview environments tha
<DEFAULT_HOST_NAME>-<BRANCH_OR_ENVIRONMENT_NAME>.<LOCATION>.azurestaticapps.net ```
+Custom domains do not work with preview environments.
+ ## Deployment types The following deployment types are available in Azure Static Web Apps.
static-web-apps Publish Gatsby https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/publish-gatsby.md
In this tutorial, you learn how to:
> - Set up an Azure Static Web Apps site > - Deploy the Gatsby app to Azure ## Prerequisites
static-web-apps Publish Hugo https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/publish-hugo.md
In this tutorial, you learn how to:
> - Set up an Azure Static Web Apps site > - Deploy the Hugo app to Azure ## Prerequisites
static-web-apps Publish Jekyll https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/publish-jekyll.md
In this tutorial, you learn how to:
> - Set up an Azure Static Web Apps resource > - Deploy the Jekyll app to Azure ## Prerequisites
storage-mover Storage Mover Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage-mover/storage-mover-create.md
az storage-mover create --Name $storageMoverName \
### Prepare your Azure PowerShell environment The `New-AzStorageMover` cmdlet is used to create a new storage mover resource in a resource group. If you haven't yet installed the `Az.StorageMover` module:
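You can install it from the PowerShell Gallery, for example:

```azurepowershell
Install-Module -Name Az.StorageMover -Scope CurrentUser -Repository PSGallery
```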
storage Object Replication Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/object-replication-overview.md
Object replication requires that blob versioning is enabled on both the source a
If your storage account has object replication policies in effect, you cannot disable blob versioning for that account. You must delete any object replication policies on the account before disabling blob versioning.
+> [!NOTE]
+> Only blobs are copied to the destination. A blob's version ID is not copied. The blob that is placed at the destination location is assigned a new version ID.
+ ### Deleting a blob in the source account When a blob in the source account is deleted, the current version of the blob becomes a previous version, and there's no longer a current version. All existing previous versions of the blob are preserved. This state is replicated to the destination account. For more information about how delete operations affect blob versions, see [Versioning on delete operations](versioning-overview.md#versioning-on-delete-operations).
storage Storage Blob Event Quickstart Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-event-quickstart-powershell.md
When you're finished, you see that the event data has been sent to the web app.
## Setup This article requires that you're running the latest version of Azure PowerShell. If you need to install or upgrade, see [Install and configure Azure PowerShell](/powershell/azure/install-azure-powershell).
storage Storage Blob Event Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-event-quickstart.md
When you complete the steps described in this article, you see that the event da
![Screenshot of the Azure Event Grid Viewer that shows event data that has been sent to the web app.](./media/storage-blob-event-quickstart/view-results.png) [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
storage Storage Blob Go Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-go-get-started.md
Previously updated : 05/22/2024 Last updated : 06/26/2024 ms.devlang: golang
The following guides show you how to work with data resources and perform specif
| [Download blobs](storage-blob-download-go.md) | Download blobs by using strings, streams, and file paths. | | [List blobs](storage-blobs-list-go.md) | List blobs in different ways. | | [Delete and restore blobs](storage-blob-delete-go.md) | Delete blobs, and if soft-delete is enabled, restore deleted blobs. |
+| [Find blobs using tags](storage-blob-tags-go.md) | Set and retrieve tags, and use tags to find blobs. |
| [Manage properties and metadata (blobs)](storage-blob-properties-metadata-go.md) | Manage container properties and metadata. | [!INCLUDE [storage-dev-guide-code-samples-note-go](../../../includes/storage-dev-guides/storage-dev-guide-code-samples-note-go.md)]
storage Storage Blob Scalable App Create Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-scalable-app-create-vm.md
In part one of the series, you learn how to:
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. If you choose to install and use PowerShell locally, this tutorial requires the Azure PowerShell module Az version 0.7 or later. Run `Get-Module -ListAvailable Az` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
storage Storage Blob Tags Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-tags-go.md
+
+ Title: Use blob index tags to manage and find data with Go
+
+description: Learn how to categorize, manage, and query for blob objects by using the Go client module.
++++ Last updated : 06/26/2024++
+ms.devlang: golang
+++
+# Use blob index tags to manage and find data with Go
++
+This article shows how to use blob index tags to manage and find data using the [Azure Storage client module for Go](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob#section-readme).
++
+## Set up your environment
++
+#### Authorization
+
+The authorization mechanism must have the necessary permissions to work with blob index tags. For authorization with Microsoft Entra ID (recommended), you need the Azure RBAC built-in role **Storage Blob Data Owner** or higher. To learn more, see the authorization guidance for [Get Blob Tags](/rest/api/storageservices/get-blob-tags#authorization), [Set Blob Tags](/rest/api/storageservices/set-blob-tags#authorization), or [Find Blobs by Tags](/rest/api/storageservices/find-blobs-by-tags#authorization).
++
+## Set tags
++
+You can set tags by using the following method:
+
+- [SetTags](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/blob#Client.SetTags)
+
+The tags specified in this method replace any existing tags. If existing values must be preserved, they must be downloaded and included in the call to this method. The following example shows how to set tags:
++
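+As a minimal sketch (assuming a `blob.Client` named `blobClient` and a `handleError` helper, with `context` imported):
+
+```go
+// Set tags on the blob; this call replaces any existing tags
+tags := map[string]string{
+    "Sealed":  "false",
+    "Content": "image",
+    "Date":    "2024-06-26",
+}
+_, err := blobClient.SetTags(context.TODO(), tags, nil)
+handleError(err)
+```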
+You can remove all tags by calling `SetTags` with no tags, as shown in the following example:
++
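+A sketch under the same assumptions:
+
+```go
+// Remove all tags by passing an empty map
+_, err := blobClient.SetTags(context.TODO(), map[string]string{}, nil)
+handleError(err)
+```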
+## Get tags
++
+You can get tags by using the following method:
+
+- [GetTags](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/blob#Client.GetTags)
+
+The following example shows how to retrieve and iterate over the blob's tags:
++
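+A sketch under the same assumptions (`fmt` also imported):
+
+```go
+// Get the blob's tags and iterate over the key/value pairs
+resp, err := blobClient.GetTags(context.TODO(), nil)
+handleError(err)
+for _, tag := range resp.BlobTagSet {
+    fmt.Printf("Key: %v, Value: %v\n", *tag.Key, *tag.Value)
+}
+```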
+## Filter and find data with blob index tags
++
+> [!NOTE]
+> You can't use index tags to retrieve previous versions. Tags for previous versions aren't passed to the blob index engine. For more information, see [Conditions and known issues](storage-manage-find-blobs.md#conditions-and-known-issues).
+
+You can filter blob data based on index tags by using the following method:
+
+- [FilterBlobs](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob@v1.3.2/container#Client.FilterBlobs)
+
+The following example finds and lists all blobs tagged as an image:
+++
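+As a minimal sketch (assuming a `container.Client` named `containerClient` for the container being queried):
+
+```go
+// Find all blobs in the container tagged with Content = image
+where := "\"Content\"='image'"
+resp, err := containerClient.FilterBlobs(context.TODO(), where, nil)
+handleError(err)
+for _, blobItem := range resp.Blobs {
+    fmt.Printf("Blob name: %v\n", *blobItem.Name)
+}
+```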
+## Resources
+
+To learn more about how to use index tags to manage and find data using the Azure Blob Storage client library for Go, see the following resources.
+
+### Code samples
+
+- View [code samples](https://github.com/Azure-Samples/blob-storage-devguide-go/blob/main/cmd/blob-index-tags/blob_index_tags.go) from this article (GitHub)
+
+### REST API operations
+
+The Azure SDK for Go contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar Go paradigms. The client library methods for managing and using blob index tags use the following REST API operations:
+
+- [Get Blob Tags](/rest/api/storageservices/get-blob-tags) (REST API)
+- [Set Blob Tags](/rest/api/storageservices/set-blob-tags) (REST API)
+- [Find Blobs by Tags](/rest/api/storageservices/find-blobs-by-tags) (REST API)
++
+### See also
+
+- [Manage and find Azure Blob data with blob index tags](storage-manage-find-blobs.md)
+- [Use blob index tags to manage and find data on Azure Blob Storage](storage-blob-index-how-to.md)
storage Storage Custom Domain Name https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-custom-domain-name.md
After the custom domain has been removed successfully, you will see a portal not
#### [PowerShell](#tab/azure-powershell) To remove a custom domain registration, use the [Set-AzStorageAccount](/powershell/module/az.storage/set-azstorageaccount) PowerShell cmdlet, and then specify an empty string (`""`) for the `-CustomDomainName` argument value.
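For example (placeholder names):

```azurepowershell
# Clear the custom domain by setting it to an empty string
Set-AzStorageAccount -ResourceGroupName <resource-group> `
    -Name <storage-account> `
    -CustomDomainName ""
```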
storage Storage Quickstart Blobs Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-powershell.md
To access Azure Storage, you'll need an Azure subscription. If you don't already
You will also need the Storage Blob Data Contributor role to read, write, and delete Azure Storage containers and blobs. This quickstart requires the Azure PowerShell module Az version 0.7 or later. Run `Get-InstalledModule -Name Az -AllVersions | select Name,Version` to find the version. If you need to install or upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell).
storage Storage Retry Policy Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-retry-policy-go.md
+
+ Title: Implement a retry policy using the Azure Storage client module for Go
+
+description: Learn about retry policies and how to implement them for Blob Storage. This article helps you set up a retry policy for Blob Storage requests using the Azure Storage client module for Go.
++++ Last updated : 06/26/2024+++
+# Implement a retry policy with Go
+
+Any application that runs in the cloud or communicates with remote services and resources must be able to handle transient faults. It's common for these applications to experience faults due to a momentary loss of network connectivity, a request timeout when a service or resource is busy, or other factors. Developers should build applications to handle transient faults transparently to improve stability and resiliency.
+
+In this article, you learn how to use the Azure Storage client module for Go to configure a retry policy for an application that connects to Azure Blob Storage. Retry policies define how the application handles failed requests, and should always be tuned to match the business requirements of the application and the nature of the failure.
+
+## Configure retry options
+
+Retry policies for Blob Storage are configured programmatically, offering control over how retry options are applied to various service requests and scenarios. For example, a web app issuing requests based on user interaction might implement a policy with fewer retries and shorter delays to increase responsiveness and notify the user when an error occurs. Alternatively, an app or component running batch requests in the background might increase the number of retries and use an exponential backoff strategy to allow the request time to complete successfully.
+
+The following table lists the fields available to configure in a [RetryOptions](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azcore/policy#RetryOptions) instance, along with the type, a brief description, and the default value if you make no changes. You should be proactive in tuning the values of these properties to meet the needs of your app.
+
+| Property | Type | Description | Default value |
+| | | | |
+| `MaxRetries` | `int32` | Optional. Specifies the maximum number of attempts a failed operation is retried before producing an error. A value less than zero means one try and no retries. | 3 |
+| `TryTimeout` | `time.Duration` | Optional. Indicates the maximum time allowed for any single try of an HTTP request. Specify a value greater than zero to enable. Note: Setting this field to a small value might cause premature HTTP request timeouts. | Disabled by default. |
+| `RetryDelay` | `time.Duration` | Optional. Specifies the initial amount of delay to use before retrying an operation. The value is used only if the HTTP response doesn't contain a `Retry-After` header. The delay increases exponentially with each retry up to the maximum specified by `MaxRetryDelay`. A value less than zero means no delay between retries. | 4 seconds |
+| `MaxRetryDelay` | `time.Duration` | Optional. Specifies the maximum delay allowed before retrying an operation. Typically, the value is greater than or equal to the value specified in `RetryDelay`. A value less than zero means there's no maximum. | 60 seconds |
+| `StatusCodes` | `[]int` | Optional. Specifies the HTTP status codes that indicate the operation should be retried. Specifying values replaces the default values. Specifying an empty slice disables retries for HTTP status codes. | 408 - http.StatusRequestTimeout</br>429 - http.StatusTooManyRequests</br>500 - http.StatusInternalServerError</br>502 - http.StatusBadGateway</br>503 - http.StatusServiceUnavailable</br>504 - http.StatusGatewayTimeout |
+| `ShouldRetry` | `func(*http.Response, error) bool` | Optional. Evaluates whether the retry policy should retry the request. When specified, the function overrides comparison against the list of HTTP status codes and error checking within the retry policy. `Context` and `NonRetriable` errors remain evaluated before calling `ShouldRetry`. The `*http.Response` and `error` parameters are mutually exclusive, that is, if one is `nil`, the other isn't `nil`. A return value of `true` means the retry policy should retry. | |
+
+To work with the code example in this article, add the following `import` paths to your code:
+
+```go
+import (
+ "context"
+ "time"
+
+ "github.com/Azure/azure-sdk-for-go/sdk/azcore"
+ "github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
+ "github.com/Azure/azure-sdk-for-go/sdk/azidentity"
+ "github.com/Azure/azure-sdk-for-go/sdk/storage/azblob"
+)
+```
+
+In the following code example, we configure the retry options in an instance of [RetryOptions](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azcore/policy#RetryOptions), include it in a [ClientOptions](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azcore/policy#ClientOptions) instance, and create a new client object:
+
+```go
+options := azblob.ClientOptions{
+ ClientOptions: azcore.ClientOptions{
+ Retry: policy.RetryOptions{
+ MaxRetries: 10,
+ TryTimeout: time.Minute * 15,
+ RetryDelay: time.Second * 1,
+ MaxRetryDelay: time.Second * 60,
+ StatusCodes: []int{408, 429, 500, 502, 503, 504},
+ },
+ },
+}
+
+credential, err := azidentity.NewDefaultAzureCredential(nil)
+handleError(err)
+
+client, err := azblob.NewClient(accountURL, credential, &options)
+handleError(err)
+```
+
+In this example, each service request issued from `client` uses the retry options defined in the `RetryOptions` struct. The policy applies to all requests issued from this client. You can configure different retry strategies for individual service clients based on the needs of your app.
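+For finer-grained control, you can supply a `ShouldRetry` function instead of relying on the status-code list. The following fragment is a minimal sketch, not part of the sample above; it assumes `net/http` is also imported:
+
+```go
+Retry: policy.RetryOptions{
+    MaxRetries: 5,
+    ShouldRetry: func(resp *http.Response, err error) bool {
+        // The parameters are mutually exclusive: a non-nil error means a
+        // transport-level failure, so retry; otherwise inspect the response.
+        if err != nil {
+            return true
+        }
+        return resp.StatusCode == http.StatusTooManyRequests ||
+            resp.StatusCode == http.StatusServiceUnavailable
+    },
+},
+```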
+
+## Related content
+
+- For architectural guidance and general best practices for retry policies, see [Transient fault handling](/azure/architecture/best-practices/transient-faults).
+- For guidance on implementing a retry pattern for transient failures, see [Retry pattern](/azure/architecture/patterns/retry).
storage Customer Managed Keys Configure Cross Tenant Existing Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/customer-managed-keys-configure-cross-tenant-existing-account.md
To learn how to configure customer-managed keys for a new storage account, see [
> [!NOTE] > Azure Key Vault and Azure Key Vault Managed HSM support the same APIs and management interfaces for configuration of customer-managed keys. Any action that is supported for Azure Key Vault is also supported for Azure Key Vault Managed HSM. ## Configure customer-managed keys for an existing account
storage Customer Managed Keys Configure Cross Tenant New Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/customer-managed-keys-configure-cross-tenant-new-account.md
To learn how to configure customer-managed keys for an existing storage account,
> [!NOTE] > Azure Key Vault and Azure Key Vault Managed HSM support the same APIs and management interfaces for configuration of customer-managed keys. Any action that is supported for Azure Key Vault is also supported for Azure Key Vault Managed HSM. ## Create a new storage account encrypted with a key from a different tenant
storage Storage Account Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-account-upgrade.md
To upgrade a general-purpose v1 or Blob storage account to a general-purpose v2
# [PowerShell](#tab/azure-powershell) To upgrade a general-purpose v1 account to a general-purpose v2 account using PowerShell, first update PowerShell to use the latest version of the **Az.Storage** module. See [Install and configure Azure PowerShell](/powershell/azure/install-azure-powershell) for information about installing PowerShell.
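For example, a sketch of the upgrade call using the `-UpgradeToStorageV2` switch of `Set-AzStorageAccount` (placeholder names; the access tier setting is optional):

```azurepowershell
# Upgrade a general-purpose v1 account to general-purpose v2
Set-AzStorageAccount -ResourceGroupName <resource-group> `
    -Name <storage-account> `
    -UpgradeToStorageV2 `
    -AccessTier Hot
```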
storage Storage Initiate Account Failover https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-initiate-account-failover.md
This article shows how to initiate an account failover for your storage account
> [!WARNING] > An account failover typically results in some data loss. To understand the implications of an account failover and to prepare for data loss, review [Data loss and inconsistencies](storage-disaster-recovery-guidance.md#anticipate-data-loss-and-inconsistencies). ## Prerequisites
Before you can perform an account failover on your storage account, make sure th
You can initiate an account failover from the Azure portal, PowerShell, or the Azure CLI. ## [Portal](#tab/azure-portal)
storage Storage Network Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-network-security.md
Turning on firewall rules for your storage account blocks incoming requests for
You can grant access to Azure services that operate from within a virtual network by allowing traffic from the subnet that hosts the service instance. You can also enable a limited number of scenarios through the exceptions mechanism that this article describes. To access data from the storage account through the Azure portal, you need to be on a machine within the trusted boundary (either IP or virtual network) that you set up. ## Scenarios
By default, storage accounts accept connections from clients on any network. You
You must set the default rule to **deny**, or network rules have no effect. However, changing this setting can affect your application's ability to connect to Azure Storage. Be sure to grant access to any allowed networks or set up access through a private endpoint before you change this setting. ### [Portal](#tab/azure-portal)
storage Storage Powershell Independent Clouds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-powershell-independent-clouds.md
Most people use Azure Public Cloud for their global Azure deployment. There are
- [Azure Government Cloud](https://azure.microsoft.com/features/gov/) - [Azure German Cloud](../../germany/germany-welcome.md) ## Using an independent cloud
storage Storage Require Secure Transfer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-require-secure-transfer.md
To require secure transfer programmatically, set the *enableHttpsTrafficOnly* pr
## Require secure transfer with PowerShell This sample requires the Azure PowerShell module Az version 0.7 or later. Run `Get-Module -ListAvailable Az` to find the version. If you need to install or upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell).
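As a minimal sketch with placeholder names, `Set-AzStorageAccount` flips the *enableHttpsTrafficOnly* property and echoes the updated account properties, which is where the `EnableHttpsTrafficOnly : True` output shown next comes from:

```powershell
# Hypothetical names; require secure transfer (HTTPS) on the account
Set-AzStorageAccount -ResourceGroupName "myResourceGroup" `
    -Name "mystorageaccount" `
    -EnableHttpsTrafficOnly $true
```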
EnableHttpsTrafficOnly : True
[!INCLUDE [sample-cli-install](../../../includes/sample-cli-install.md)] Use the following command to check the setting:
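The check itself can be a one-line query, sketched here with placeholder names:

```azurecli
az storage account show \
    --name mystorageaccount \
    --resource-group myResourceGroup \
    --query enableHttpsTrafficOnly
```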
storage Storage Use Emulator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-use-emulator.md
For more information on connection strings, see [Configure Azure Storage connect
### Authorize with a shared access signature Some Azure storage client libraries, such as the Xamarin library, only support authentication with a shared access signature (SAS) token. You can create the SAS token using [Storage Explorer](https://storageexplorer.com/) or another application that supports Shared Key authentication.
storage Use Container Storage With Local Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/container-storage/use-container-storage-with-local-disk.md
description: Configure Azure Container Storage for use with Ephemeral Disk using
Previously updated : 06/20/2024 Last updated : 06/27/2024
Follow these steps to create a storage pool using local NVMe.
apiVersion: containerstorage.azure.com/v1 kind: StoragePool metadata:
- name: ephemeraldisk
+ name: ephemeraldisk-nvme
namespace: acstor spec: poolType:
Follow these steps to create a storage pool using local NVMe.
When storage pool creation is complete, you'll see a message like: ```output
- storagepool.containerstorage.azure.com/ephemeraldisk created
+ storagepool.containerstorage.azure.com/ephemeraldisk-nvme created
```
- You can also run this command to check the status of the storage pool. Replace `<storage-pool-name>` with your storage pool **name** value. For this example, the value would be **ephemeraldisk**.
+ You can also run this command to check the status of the storage pool. Replace `<storage-pool-name>` with your storage pool **name** value. For this example, the value would be **ephemeraldisk-nvme**.
```azurecli-interactive kubectl describe sp <storage-pool-name> -n acstor
Run `kubectl get sc` to display the available storage classes. You should see a
```output $ kubectl get sc | grep "^acstor-" acstor-azuredisk-internal disk.csi.azure.com Retain WaitForFirstConsumer true 65m
-acstor-ephemeraldisk containerstorage.csi.azure.com Delete WaitForFirstConsumer true 2m27s
+acstor-ephemeraldisk-nvme containerstorage.csi.azure.com Delete WaitForFirstConsumer true 2m27s
``` > [!IMPORTANT]
Create a pod using [Fio](https://github.com/axboe/fio) (Flexible I/O Tester) for
type: my-ephemeral-volume spec: accessModes: [ "ReadWriteOnce" ]
- storageClassName: "acstor-ephemeraldisk-nvme" # replace with the name of your storage class if different
+ storageClassName: acstor-ephemeraldisk-nvme # replace with the name of your storage class if different
resources: requests: storage: 1Gi ```
-When you change the storage size of your volumes, make sure the size is less than the available capacity of a single node's ephemeral disk. See [Check node ephemeral disk capacity](#check-node-ephemeral-disk-capacity).
+ When you change the storage size of your volumes, make sure the size is less than the available capacity of a single node's ephemeral disk. See [Check node ephemeral disk capacity](#check-node-ephemeral-disk-capacity).
1. Apply the YAML manifest file to deploy the pod.
When you change the storage size of your volumes, make sure the size is less tha
You've now deployed a pod that's using local NVMe as its storage, and you can use it for your Kubernetes workloads.
-## Manage storage pools
+## Manage volumes and storage pools
-Now that you've created your storage pool, you can expand or delete it as needed.
+In this section, you'll learn how to check the available capacity of ephemeral disk for a single node, and how to expand or delete a storage pool.
### Check node ephemeral disk capacity
Run the following command to check the available capacity of ephemeral disk for
```output $ kubectl get diskpool -n acstor NAME CAPACITY AVAILABLE USED RESERVED READY AGE
-ephemeraldisk-temp-diskpool-jaxwb 75660001280 75031990272 628011008 560902144 True 21h
-ephemeraldisk-temp-diskpool-wzixx 75660001280 75031990272 628011008 560902144 True 21h
-ephemeraldisk-temp-diskpool-xbtlj 75660001280 75031990272 628011008 560902144 True 21h
+ephemeraldisk-nvme-diskpool-jaxwb 75660001280 75031990272 628011008 560902144 True 21h
+ephemeraldisk-nvme-diskpool-wzixx 75660001280 75031990272 628011008 560902144 True 21h
+ephemeraldisk-nvme-diskpool-xbtlj 75660001280 75031990272 628011008 560902144 True 21h
``` In this example, the available capacity of ephemeral disk for a single node is `75031990272` bytes or 69 GiB.
storage Use Container Storage With Local Nvme Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/container-storage/use-container-storage-with-local-nvme-replication.md
Title: Use Azure Container Storage Preview with local NVMe replication
-description: Configure Azure Container Storage for use with Ephemeral Disk using local NVMe on the Azure Kubernetes Service (AKS) cluster nodes. Create a storage pool with volume replication, create a persistent volume claim, and attach the persistent volume to a pod.
+description: Configure Azure Container Storage for use with Ephemeral Disk using local NVMe on the Azure Kubernetes Service (AKS) cluster nodes. Create a storage pool with volume replication, create a volume, and deploy a pod.
Previously updated : 06/20/2024 Last updated : 06/27/2024
Follow these steps to create a storage pool using local NVMe with replication. A
apiVersion: containerstorage.azure.com/v1 kind: StoragePool metadata:
- name: nvme
+ name: ephemeraldisk-nvme
namespace: acstor spec: poolType:
Follow these steps to create a storage pool using local NVMe with replication. A
When storage pool creation is complete, you'll see a message like: ```output
- storagepool.containerstorage.azure.com/nvme created
+ storagepool.containerstorage.azure.com/ephemeraldisk-nvme created
```
- You can also run this command to check the status of the storage pool. Replace `<storage-pool-name>` with your storage pool **name** value. For this example, the value would be **nvme**.
+ You can also run this command to check the status of the storage pool. Replace `<storage-pool-name>` with your storage pool **name** value. For this example, the value would be **ephemeraldisk-nvme**.
```azurecli-interactive kubectl describe sp <storage-pool-name> -n acstor
Run `kubectl get sc` to display the available storage classes. You should see a
```output $ kubectl get sc | grep "^acstor-" acstor-azuredisk-internal disk.csi.azure.com Retain WaitForFirstConsumer true 65m
-acstor-ephemeraldisk containerstorage.csi.azure.com Delete WaitForFirstConsumer true 2m27s
+acstor-ephemeraldisk-nvme containerstorage.csi.azure.com Delete WaitForFirstConsumer true 2m27s
``` > [!IMPORTANT]
A persistent volume claim (PVC) is used to automatically provision storage based
requests: storage: 100Gi ```
+
+ When you change the storage size of your volumes, make sure the size is less than the available capacity of a single node's ephemeral disk. See [Check node ephemeral disk capacity](#check-node-ephemeral-disk-capacity).
1. Apply the YAML manifest file to create the PVC.
Create a pod using [Fio](https://github.com/axboe/fio) (Flexible I/O Tester) for
name: ephemeralpv ```
-When you change the storage size of your volumes, make sure the size is less than the available capacity of a single node's ephemeral disk. See [Check node ephemeral disk capacity](#check-node-ephemeral-disk-capacity).
- 1. Apply the YAML manifest file to deploy the pod. ```azurecli-interactive
When you change the storage size of your volumes, make sure the size is less tha
You've now deployed a pod that's using local NVMe with volume replication, and you can use it for your Kubernetes workloads.
-## Manage persistent volumes and storage pools
+## Manage volumes and storage pools
-Now that you've created a persistent volume, you can detach and reattach it as needed. You can also expand or delete a storage pool.
+In this section, you'll learn how to check the available capacity of ephemeral disk for a single node, how to detach and reattach a persistent volume, and how to expand or delete a storage pool.
### Check node ephemeral disk capacity
Run the following command to check the available capacity of ephemeral disk for
```output $ kubectl get diskpool -n acstor NAME CAPACITY AVAILABLE USED RESERVED READY AGE
-ephemeraldisk-temp-diskpool-jaxwb 75660001280 75031990272 628011008 560902144 True 21h
-ephemeraldisk-temp-diskpool-wzixx 75660001280 75031990272 628011008 560902144 True 21h
-ephemeraldisk-temp-diskpool-xbtlj 75660001280 75031990272 628011008 560902144 True 21h
+ephemeraldisk-nvme-diskpool-jaxwb 75660001280 75031990272 628011008 560902144 True 21h
+ephemeraldisk-nvme-diskpool-wzixx 75660001280 75031990272 628011008 560902144 True 21h
+ephemeraldisk-nvme-diskpool-xbtlj 75660001280 75031990272 628011008 560902144 True 21h
``` In this example, the available capacity of ephemeral disk for a single node is `75031990272` bytes or 69 GiB.
storage Use Container Storage With Temp Ssd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/container-storage/use-container-storage-with-temp-ssd.md
Title: Use Azure Container Storage Preview with temp SSD
-description: Configure Azure Container Storage for use with Ephemeral Disk using temp SSD on the Azure Kubernetes Service (AKS) cluster nodes. Create a storage pool, select a storage class, and deploy a pod with a generic ephemeral volume.
+description: Configure Azure Container Storage for use with Ephemeral Disk using temp SSD on the Azure Kubernetes Service (AKS) cluster nodes. Create a storage pool, select a storage class, and deploy a pod.
Previously updated : 06/20/2024 Last updated : 06/27/2024
Follow these steps to create a storage pool using temp SSD.
apiVersion: containerstorage.azure.com/v1 kind: StoragePool metadata:
- name: ephemeraldisk
+ name: ephemeraldisk-temp
namespace: acstor spec: poolType:
Follow these steps to create a storage pool using temp SSD.
When storage pool creation is complete, you'll see a message like: ```output
- storagepool.containerstorage.azure.com/ephemeraldisk created
+ storagepool.containerstorage.azure.com/ephemeraldisk-temp created
```
- You can also run this command to check the status of the storage pool. Replace `<storage-pool-name>` with your storage pool **name** value. For this example, the value would be **ephemeraldisk**.
+ You can also run this command to check the status of the storage pool. Replace `<storage-pool-name>` with your storage pool **name** value. For this example, the value would be **ephemeraldisk-temp**.
```azurecli-interactive kubectl describe sp <storage-pool-name> -n acstor
Run `kubectl get sc` to display the available storage classes. You should see a
```output $ kubectl get sc | grep "^acstor-" acstor-azuredisk-internal disk.csi.azure.com Retain WaitForFirstConsumer true 65m
-acstor-ephemeraldisk containerstorage.csi.azure.com Delete WaitForFirstConsumer true 2m27s
+acstor-ephemeraldisk-temp containerstorage.csi.azure.com Delete WaitForFirstConsumer true 2m27s
``` > [!IMPORTANT]
Create a pod using [Fio](https://github.com/axboe/fio) (Flexible I/O Tester) for
type: my-ephemeral-volume spec: accessModes: [ "ReadWriteOnce" ]
- storageClassName: "acstor-ephemeraldisk-nvme" # replace with the name of your storage class if different
+ storageClassName: acstor-ephemeraldisk-temp # replace with the name of your storage class if different
resources: requests: storage: 1Gi ```
-When you change the storage size of your volumes, make sure the size is less than the available capacity of a single node's temp disk. See [Check node temp disk capacity](#check-node-temp-disk-capacity).
+ When you change the storage size of your volumes, make sure the size is less than the available capacity of a single node's temp disk. See [Check node temp disk capacity](#check-node-temp-disk-capacity).
1. Apply the YAML manifest file to deploy the pod.
When you change the storage size of your volumes, make sure the size is less tha
You've now deployed a pod that's using temp SSD as its storage, and you can use it for your Kubernetes workloads.
-## Manage storage pools
+## Manage volumes and storage pools
-Now that you've created your storage pool, you can expand or delete it as needed.
+In this section, you'll learn how to check the available capacity of ephemeral disk for a single node, and how to expand or delete a storage pool.
### Check node temp disk capacity
storage File Sync Replace Drive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-replace-drive.md
Title: Replace a drive on an Azure File Sync server
-description: How to replace a drive on an Azure File Sync server due to hardware decommissioning, optimization or end of support.
+description: Learn how to replace a drive on an Azure File Sync server because of hardware decommissioning, optimization, or end of support.
# Replace a drive on an Azure File Sync server
-This article explains how to replace an existing drive that hosts one or more Azure File Sync server endpoints, either on an on-premises Windows Server or on a virtual machine (VM) in the cloud. This could be because the drive is failing, or because you want to optimize and balance resources by using a different size or type of drive. Some of the steps will differ slightly depending on whether your Azure File Sync registered server is located on-premises or in Azure.
-
-> [!Important]
-> Replacing a drive always involves some amount of downtime for users. We recommend following the steps in this article. If you simply recreate the drive and restart the storage sync service without first deleting the server endpoints, then the server will automatically throw away the sync database.
+This article explains how to replace an existing drive that hosts one or more Azure File Sync server endpoints, either on an on-premises Windows Server installation or on a virtual machine (VM) in the cloud. This replacement could be because the drive is failing, or because you want to optimize and balance resources by using a different size or type of drive. Some of the steps will differ slightly, depending on whether your Azure File Sync registered server is located on-premises or in Azure.
+> [!IMPORTANT]
+> Replacing a drive always involves some amount of downtime for users. We recommend following the steps in this article. If you simply re-create the drive and restart the storage sync service without first deleting the server endpoints, the server will automatically throw away the sync database.
## Step 1: Create a temporary VM with new server endpoints
-Create a temporary VM (Server B) that's as close as possible to your registered server (Server A). If your registered server is on-premises, create a VM on premises. If your registered server is in the cloud, create a VM in the cloud, preferably in the same region as your registered server.
-
-Then, [create the server endpoints](file-sync-server-endpoint-create.md) on Server B. Enable cloud tiering. Temporarily set the volume free space policy to 99% in order to tier as many files as possible to the cloud.
+Create a temporary VM (Server B) that's as close as possible to your registered server (Server A). If your registered server is on-premises, create a VM on-premises. If your registered server is in the cloud, create a VM in the cloud, preferably in the same region as your registered server.
+Then, [create the server endpoints](file-sync-server-endpoint-create.md) on Server B. Enable cloud tiering. Temporarily set the volume free space policy to 99% so that you can tier as many files as possible to the cloud.
## Step 2: Copy data to the temporary VM
-Use Robocopy, an SMB copy utility that's built into Windows, to copy the data from Server A to Server B. Run the following command from the Windows command line on Server A.
+Use Robocopy, a Server Message Block (SMB) copy tool that's built into Windows, to copy the data from Server A to Server B. Run the following command from the Windows command line on Server A:
```console robocopy <Server A SourcePath> <Server B Dest.Path> /MT:16 /R:2 /W:1 /COPYALL /MIR /DCOPY:DAT /XA:O /B /IT /UNILOG:RobocopyLog.txt
robocopy <Server A SourcePath> <Server B Dest.Path> /MT:16 /R:2 /W:1 /COPYALL /M
## Step 3: Transition users to the temporary VM
-Removing user access to your server endpoints will cause downtime. To minimize downtime, perform these steps as quickly as possible.
-1. Remove SMB access to the server endpoints on Server A. Don't delete the server endpoints yet.
-2. On Server A, change the startup type of the Storage Sync Agent Service from Automatic to Disabled, and then put it in the Stopped state.
-3. Run Robocopy again to copy any changes that happened since the last run. From Server A, run:
-
+Removing user access to your server endpoints causes downtime. To minimize downtime, perform these steps as quickly as possible:
+
+1. Remove SMB access to the server endpoints on Server A. Don't delete the server endpoints yet.
+2. On Server A, change the startup type of the Storage Sync Agent Service from **Automatic** to **Disabled**, and then put it in the **Stopped** state.
+3. Run Robocopy again to copy any changes that happened since the last run. From Server A, run:
+
```console
- robocopy <SourcePath> <Dest.Path> /MT:16 /R:2 /W:1 /COPYALL /MIR /DCOPY:DAT /XA:O /B /IT /UNILOG:RobocopyLog.txt
+ robocopy <SourcePath> <Dest.Path> /MT:16 /R:2 /W:1 /COPYALL /MIR /DCOPY:DAT /XA:O /B /IT /UNILOG:RobocopyLog.txt
```
-5. Enable SMB access to the server endpoints on Server B.
+
+4. Enable SMB access to the server endpoints on Server B.
Users should now be able to access the file share from the temporary VM (Server B).
-## Step 4: Delete old server endpoints and replace drive
+## Step 4: Delete old server endpoints and replace the drive
When you're sure that user access is restored, [delete the server endpoints](file-sync-server-endpoint-delete.md) and replace the drive on Server A. Make sure the drive letter of the replaced drive is the same as it was before the replacement. ## Step 5: Create new server endpoints and copy data to the new drive
-Re-create the server endpoints on Server A. Enable cloud tiering. Temporarily set the volume free space policy to 99% in order to tier as many files as possible to the cloud.
+Re-create the server endpoints on Server A. Enable cloud tiering. Temporarily set the volume free space policy to 99% so that you can tier as many files as possible to the cloud.
-Use Robocopy to copy the data to the new drive on Server A. Run the following command from the Windows command line on Server B.
+Use Robocopy to copy the data to the new drive on Server A. Run the following command from the Windows command line on Server B:
```console robocopy <Server B SourcePath> <Server A Dest.Path> /MT:16 /R:2 /W:1 /COPYALL /MIR /DCOPY:DAT /XA:O /B /IT /UNILOG:RobocopyLog.txt ```
-## Step 6: Restore user access to registered server
+## Step 6: Restore user access to the registered server
-Removing user access to your server endpoints on the temporary VM will cause downtime. To minimize downtime, perform these steps as quickly as possible.
+Removing user access to your server endpoints on the temporary VM causes downtime. To minimize downtime, perform these steps as quickly as possible:
+
+1. Remove SMB access to the server endpoints on Server B. Don't delete the server endpoints yet.
+2. Run Robocopy again to copy any changes that happened since the last run. From Server B, run:
-1. Remove SMB access to the server endpoints on Server B. Don't delete the server endpoints yet.
-2. Run Robocopy again to copy any changes that happened since the last run. From Server B, run:
-
```console
- robocopy <SourcePath> <Dest.Path> /MT:16 /R:2 /W:1 /COPYALL /MIR /DCOPY:DAT /XA:O /B /IT /UNILOG:RobocopyLog.txt
+ robocopy <SourcePath> <Dest.Path> /MT:16 /R:2 /W:1 /COPYALL /MIR /DCOPY:DAT /XA:O /B /IT /UNILOG:RobocopyLog.txt
```
-3. On Server A, change the startup type of the Storage Sync Agent Service from Disabled to Automatic, and then put it in the Started state.
-4. Enable SMB access to the server endpoints on Server A.
-5. Sign into the Azure portal. Navigate to the sync group and verify that the cloud endpoint is syncing to the server endpoint(s) on Server A.
-Users should now be able to access the file share from your registered server. Remember to change your volume free space policy to a reasonable level such as 10-20%.
+
+3. On Server A, change the startup type of the Storage Sync Agent Service from **Disabled** to **Automatic**, and then put it in the **Started** state.
+4. Enable SMB access to the server endpoints on Server A.
+5. Sign in to the Azure portal. Go to the sync group and verify that the cloud endpoint is syncing to the server endpoints on Server A. Users should now be able to access the file share from your registered server.
+
+ Remember to change your volume free space policy to a reasonable level, such as 10-20%.
storage Storage How To Use Files Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-how-to-use-files-portal.md
If you don't have an Azure subscription, create a [free account](https://azure.m
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. If you'd like to install and use PowerShell locally, you'll need the Azure PowerShell module Az version 7.0.0 or later. We recommend installing the latest available version. To find out which version of the Azure PowerShell module you're running, execute `Get-InstalledModule Az`. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Login-AzAccount` to log in to your Azure account. To use multifactor authentication, you'll need to supply your Azure tenant ID, such as `Login-AzAccount -TenantId <TenantId>`.
storage Storage Java How To Use File Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-java-how-to-use-file-storage.md
Learn the basics of developing Java applications that use Azure Files to store data
- Enumerate files and directories in an Azure file share - Upload, download, and delete a file ## Applies to | File share type | SMB | NFS |
storage Authorize Data Operations Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/authorize-data-operations-powershell.md
For details about the permissions required for each Azure Storage operation on a
## Call PowerShell commands using Microsoft Entra credentials To use Azure PowerShell to sign in and run subsequent operations against Azure Storage using Microsoft Entra credentials, create a storage context to reference the storage account, and include the `-UseConnectedAccount` parameter.
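A minimal sketch, assuming you've already signed in and substituting a placeholder account name; `-UseConnectedAccount` makes the context authorize with your Microsoft Entra credentials rather than the account key:

```powershell
Connect-AzAccount
# Hypothetical account name; the context carries Microsoft Entra credentials
$ctx = New-AzStorageContext -StorageAccountName "mystorageaccount" -UseConnectedAccount
```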
storage Storage Powershell How To Use Queues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/storage-powershell-how-to-use-queues.md
This how-to guide requires the Azure PowerShell (`Az`) module v12.0.0. Run `Get-
There are no PowerShell cmdlets for the queue data plane. To perform data plane operations such as adding, reading, and deleting messages, you have to use the .NET storage client library as it is exposed in PowerShell. You create a message object and then use commands such as `AddMessage` to perform operations on that message. This article shows you how to do that. ## Sign in to Azure
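A rough sketch of that pattern follows; the account and queue names are placeholders, and the `CloudQueue` surface shown here belongs to the older .NET client that the `AddMessage` command comes from (newer Az.Storage versions expose a `QueueClient` property instead):

```powershell
$ctx = New-AzStorageContext -StorageAccountName "mystorageaccount" -UseConnectedAccount
$queue = Get-AzStorageQueue -Name "myqueue" -Context $ctx
# Create a .NET message object and enqueue it with AddMessage
$message = [Microsoft.Azure.Storage.Queue.CloudQueueMessage]::new("Hello, Azure Queues!")
$queue.CloudQueue.AddMessage($message)
```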
storage Storage Blobs Container Calculate Billing Size Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/scripts/storage-blobs-container-calculate-billing-size-powershell.md
This script calculates the size of a container in Azure Blob storage for the pur
[!INCLUDE [sample-powershell-install](../../../includes/sample-powershell-install-no-ssh-az.md)] > [!NOTE] > This PowerShell script calculates the size of a container for billing purposes. If you are calculating container size for other purposes, see [Calculate the total size of a Blob storage container](../scripts/storage-blobs-container-calculate-size-powershell.md) for a simpler script that provides an estimate.
storage Storage Blobs Container Calculate Size Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/scripts/storage-blobs-container-calculate-size-cli.md
This script calculates the size of a container in Azure Blob storage by totaling the size of the blobs in the container. > [!IMPORTANT] > This CLI script provides an estimated size for the container and should not be used for billing calculations.
This script calculates the size of a container in Azure Blob storage by totaling
## Sample script ### Run the script
This script calculates the size of a container in Azure Blob storage by totaling
## Clean up resources ```azurecli az group delete --name $resourceGroup
storage Storage Blobs Container Calculate Size Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/scripts/storage-blobs-container-calculate-size-powershell.md
This script calculates the size of all Azure Blob Storage containers in a storag
[!INCLUDE [sample-powershell-install](../../../includes/sample-powershell-install-no-ssh-az.md)] > [!IMPORTANT] > This PowerShell script provides an estimated size for the containers in an account and should not be used for billing calculations. For a script that calculates container size for billing purposes, see [Calculate the size of a Blob storage container for billing purposes](../scripts/storage-blobs-container-calculate-billing-size-powershell.md).
storage Storage Blobs Container Delete By Prefix Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/scripts/storage-blobs-container-delete-by-prefix-cli.md
This script first creates a few sample containers in Azure Blob storage, then deletes some of the containers based on a prefix in the container name. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)] ## Sample script ### Run the script
This script first creates a few sample containers in Azure Blob storage, then de
## Clean up resources ```azurecli az group delete --name $resourceGroup
storage Storage Blobs Container Delete By Prefix Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/scripts/storage-blobs-container-delete-by-prefix-powershell.md
This script deletes containers in Azure Blob storage based on a prefix in the co
[!INCLUDE [sample-powershell-install](../../../includes/sample-powershell-install-no-ssh-az.md)] ## Sample script
storage Storage Common Rotate Account Keys Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/scripts/storage-common-rotate-account-keys-cli.md
This script creates an Azure Storage account, displays the new storage account's access keys, then renews (rotates) the keys. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)] ## Sample script ### Run the script
This script creates an Azure Storage account, displays the new storage account's
## Clean up resources ```azurecli az group delete --name $resourceGroup
storage Storage Common Rotate Account Keys Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/scripts/storage-common-rotate-account-keys-powershell.md
This script creates an Azure Storage account, displays the new storage account's
[!INCLUDE [sample-powershell-install](../../../includes/sample-powershell-install-no-ssh-az.md)] ## Sample script
storage Table Storage Quickstart Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/tables/table-storage-quickstart-portal.md
This quickstart shows how to create tables and entities in the web-based Azure portal. This quickstart also shows you how to create an Azure storage account. ## Prerequisites
stream-analytics Quick Create Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/quick-create-azure-cli.md
In this quickstart, you will use Azure CLI to define a Stream Analytics job that
## Before you begin [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
stream-analytics Quick Create Azure Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/quick-create-azure-resource-manager.md
Last updated 08/07/2023
In this quickstart, you use an Azure Resource Manager template (ARM template) to create an Azure Stream Analytics job. Once the job is created, you validate the deployment. If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template opens in the Azure portal.
stream-analytics Quick Create Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/quick-create-bicep.md
Last updated 05/17/2022
In this quickstart, you use Bicep to create an Azure Stream Analytics job. Once the job is created, you validate the deployment. ## Prerequisites
stream-analytics Stream Analytics Clean Up Your Job https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-clean-up-your-job.md
When you stop a job, the resources are deprovisioned and it stops processing eve
## Stop or delete a job using PowerShell To stop a job using PowerShell, use the [Stop-AzStreamAnalyticsJob](/powershell/module/az.streamanalytics/stop-azstreamanalyticsjob) cmdlet. To delete a job using PowerShell, use the [Remove-AzStreamAnalyticsJob](/powershell/module/az.streamanalytics/Remove-azStreamAnalyticsJob) cmdlet.
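Sketched with placeholder names, the two operations look like this:

```powershell
# Hypothetical names; stop the job, then remove it entirely
Stop-AzStreamAnalyticsJob -ResourceGroupName "myResourceGroup" -Name "myStreamingJob"
Remove-AzStreamAnalyticsJob -ResourceGroupName "myResourceGroup" -Name "myStreamingJob"
```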
stream-analytics Stream Analytics Job Diagnostic Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-job-diagnostic-logs.md
Stream Analytics offers two types of logs:
> [!NOTE] > You can use services like Azure Storage, Azure Event Hubs, and Azure Monitor logs to analyze nonconforming data. You are charged based on the pricing model for those services. ## Debugging using activity logs
stream-analytics Stream Analytics Monitor And Manage Jobs Use Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-monitor-and-manage-jobs-use-powershell.md
# Monitor and manage Stream Analytics jobs with Azure PowerShell cmdlets Learn how to monitor and manage Stream Analytics resources with Azure PowerShell cmdlets and PowerShell scripting that executes basic Stream Analytics tasks. ## Prerequisites for running Azure PowerShell cmdlets for Stream Analytics * Create an Azure Resource Group in your subscription. The following is a sample Azure PowerShell script. For Azure PowerShell information, see [Install and configure Azure PowerShell](/powershell/azure/).
synapse-analytics Restore Sql Pool From Deleted Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/backuprestore/restore-sql-pool-from-deleted-workspace.md
In this article, you learn how to restore a dedicated SQL pool in Azure Synapse
## Before you begin ## Restore the SQL pool from the dropped workspace
synapse-analytics Quickstart Create Workspace Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/quickstart-create-workspace-cli.md
The Azure CLI is Azure's command-line experience for managing Azure resources. Y
In this quickstart, you learn to create a Synapse workspace by using the Azure CLI. ## Prerequisites
synapse-analytics Quickstart Deployment Template Workspaces https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/quickstart-deployment-template-workspaces.md
Last updated 02/04/2022
This Azure Resource Manager (ARM) template will create an Azure Synapse workspace with underlying Data Lake Storage. The Azure Synapse workspace is a securable collaboration boundary for analytics processes in Azure Synapse Analytics. If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
synapse-analytics Apache Spark 32 Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-32-runtime.md
Last updated 11/28/2022
-# Azure Synapse Runtime for Apache Spark 3.2 (End of Support announced)
+# Azure Synapse Runtime for Apache Spark 3.2 (deprecated)
-Azure Synapse Analytics supports multiple runtimes for Apache Spark. This document will cover the runtime components and versions for the Azure Synapse Runtime for Apache Spark 3.2.
+Azure Synapse Analytics supports multiple runtimes for Apache Spark. This document covers the runtime components and versions for the Azure Synapse Runtime for Apache Spark 3.2.
-> [!IMPORTANT]
-> * End of Support announced for Azure Synapse Runtime for Apache Spark 3.2 has been announced July 8, 2023.
-> * End of Support announced runtime will not have bug and feature fixes. Security fixes will be backported based on risk assessment.
+> [!WARNING]
+> Deprecation and disablement notification for Azure Synapse Runtime for Apache Spark 3.2
+> * End of Support for Azure Synapse Runtime for Apache Spark 3.2 was announced on July 8, 2023.
+> * Effective July 8, 2024, Azure Synapse will discontinue official support for Spark 3.2 Runtimes.
> * In accordance with the Synapse runtime for Apache Spark lifecycle policy, Azure Synapse runtime for Apache Spark 3.2 will be retired and disabled as of July 8, 2024. After the End of Support date, the retired runtimes are unavailable for new Spark pools and existing workflows can't execute. Metadata will temporarily remain in the Synapse workspace. > * **We strongly recommend that you upgrade your Apache Spark 3.2 workloads to [Azure Synapse Runtime for Apache Spark 3.4 (GA)](./apache-spark-34-runtime.md) before July 8, 2024.**
widgetsnbextension==3.5.2
## Migration between Apache Spark versions - support
-For guidance on migrating from older runtime versions to Azure Synapse Runtime for Apache Spark 3.3 or 3.4 please refer to [Runtime for Apache Spark Overview](./apache-spark-version-support.md).
+For guidance on migrating from older runtime versions to Azure Synapse Runtime for Apache Spark 3.3 or 3.4, refer to [Runtime for Apache Spark Overview](./apache-spark-version-support.md).
synapse-analytics Apache Spark Version Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-version-support.md
The following table lists the runtime name, Apache Spark version, and release da
| Runtime name | Release date | Release stage | End of Support announcement date | End of Support effective date | | | || | |
-| [Azure Synapse Runtime for Apache Spark 3.4](./apache-spark-34-runtime.md) | Nov 21, 2023 | GA (as of Apr 8, 2024) | | |
+| [Azure Synapse Runtime for Apache Spark 3.4](./apache-spark-34-runtime.md) | Nov 21, 2023 | GA (as of Apr 8, 2024) | Q2 2025| Q1 2026|
| [Azure Synapse Runtime for Apache Spark 3.3](./apache-spark-33-runtime.md) | Nov 17, 2022 | GA (as of Feb 23, 2023) | Q2/Q3 2024 | Q1 2025 |
-| [Azure Synapse Runtime for Apache Spark 3.2](./apache-spark-32-runtime.md) | July 8, 2022 | __End of Support Announced__ | July 8, 2023 | July 8, 2024 |
+| [Azure Synapse Runtime for Apache Spark 3.2](./apache-spark-32-runtime.md) | July 8, 2022 | __deprecated__ | July 8, 2023 | July 8, 2024 |
| [Azure Synapse Runtime for Apache Spark 3.1](./apache-spark-3-runtime.md) | May 26, 2021 | __deprecated__ | January 26, 2023 | January 26, 2024 | | [Azure Synapse Runtime for Apache Spark 2.4](./apache-spark-24-runtime.md) | December 15, 2020 | __deprecated__ | __July 29, 2022__ | __September 29, 2023__ |
synapse-analytics Create Data Warehouse Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/create-data-warehouse-powershell.md
If you don't have an Azure subscription, create a [free Azure account](https://a
> [!IMPORTANT] > Creating a dedicated SQL pool (formerly SQL DW) may result in a new billable service. For more information, see [Azure Synapse Analytics pricing](https://azure.microsoft.com/pricing/details/sql-data-warehouse/). ## Sign in to Azure
synapse-analytics Pause And Resume Compute Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/pause-and-resume-compute-powershell.md
If you don't have an Azure subscription, create a [free Azure account](https://a
## Before you begin This quickstart assumes you already have a dedicated SQL pool (formerly SQL DW) that you can pause and resume. If you need to create one, you can use [Create and Connect - portal](create-data-warehouse-portal.md) to create a dedicated SQL pool (formerly SQL DW) called `mySampleDataWarehouse`.
synapse-analytics Pause And Resume Compute Workspace Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/pause-and-resume-compute-workspace-powershell.md
If you don't have an Azure subscription, create a [free Azure account](https://a
## Before you begin This quickstart assumes you already have a dedicated SQL pool that was created in a Synapse workspace that you can pause and resume. If you need, [Create an Azure Synapse workspace](../quickstart-create-workspace.md) and then [create a dedicated SQL pool using Synapse Studio](../quickstart-create-sql-pool-studio.md).
synapse-analytics Quickstart Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/quickstart-arm-template.md
This Azure Resource Manager template (ARM template) will create a dedicated SQL pool (formerly SQL DW) with Transparent Data Encryption enabled. Dedicated SQL pool (formerly SQL DW) refers to the enterprise data warehousing features that are generally available in Azure Synapse. If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
synapse-analytics Quickstart Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/quickstart-bicep.md
This Bicep file will create a dedicated SQL pool (formerly SQL DW) with Transparent Data Encryption enabled. Dedicated SQL pool (formerly SQL DW) refers to the enterprise data warehousing features that are generally available in Azure Synapse. ## Prerequisites
synapse-analytics Quickstart Scale Compute Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/quickstart-scale-compute-powershell.md
If you don't have an Azure subscription, create a [free Azure account](https://a
## Before you begin This quickstart assumes you already have a dedicated SQL pool (formerly SQL DW). If you need to create one, use [Create and Connect - portal](create-data-warehouse-portal.md) to create a dedicated SQL pool (formerly SQL DW) called `mySampleDataWarehouse`.
synapse-analytics Quickstart Scale Compute Workspace Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/quickstart-scale-compute-workspace-powershell.md
If you don't have an Azure subscription, create a [free Azure account](https://a
## Before you begin This quickstart assumes you already have a dedicated SQL pool that was created in a Synapse workspace. If you need, [Create an Azure Synapse workspace](../quickstart-create-workspace.md) and then [create a dedicated SQL pool using Synapse Studio](../quickstart-create-sql-pool-studio.md).
synapse-analytics Sql Data Warehouse Reference Powershell Cmdlets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-reference-powershell-cmdlets.md
Many dedicated SQL pool administrative tasks can be managed using either Azure PowerShell cmdlets or REST APIs. Below are some examples of how to use PowerShell commands to automate common tasks in your dedicated SQL pool (formerly SQL DW). For some good REST examples, see the article [Manage scalability with REST](sql-data-warehouse-manage-compute-rest-api.md). > [!NOTE] > This article applies to standalone dedicated SQL pools (formerly SQL DW) and is not applicable to a dedicated SQL pool created in an Azure Synapse Analytics workspace. There are different PowerShell cmdlets for each; for example, use [Suspend-AzSqlDatabase](/powershell/module/az.sql/suspend-azsqldatabase) for a dedicated SQL pool (formerly SQL DW), but [Suspend-AzSynapseSqlPool](/powershell/module/az.synapse/suspend-azsynapsesqlpool) for a dedicated SQL pool in an Azure Synapse Workspace, as sketched below. For instructions to pause and resume a dedicated SQL pool created in an Azure Synapse Analytics workspace, see [Quickstart: Pause and resume compute in dedicated SQL pool in a Synapse Workspace with Azure PowerShell](pause-and-resume-compute-workspace-powershell.md). For more on the differences between dedicated SQL pool (formerly SQL DW) and dedicated SQL pools in Azure Synapse Workspaces, read [What's the difference between Azure Synapse (formerly SQL DW) and Azure Synapse Analytics Workspace](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/what-s-the-difference-between-azure-synapse-formerly-sql-dw-and/ba-p/3597772).
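To make the distinction concrete, here's a hedged sketch with placeholder names showing the matching pause operation for each flavor:

```powershell
# Pause a standalone dedicated SQL pool (formerly SQL DW); hypothetical names
Suspend-AzSqlDatabase -ResourceGroupName "myResourceGroup" `
    -ServerName "myserver" -DatabaseName "mySampleDataWarehouse"

# Pause a dedicated SQL pool in an Azure Synapse workspace; hypothetical names
Suspend-AzSynapseSqlPool -ResourceGroupName "myResourceGroup" `
    -WorkspaceName "myworkspace" -Name "mySqlPool"
```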
synapse-analytics Sql Data Warehouse Restore Active Paused Dw https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-restore-active-paused-dw.md
In this article, you learn how to restore an existing dedicated SQL pool (former
1. Make sure to [install Azure PowerShell](/powershell/azure/?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json).
- [!INCLUDE [updated-for-az](../../../includes/updated-for-az.md)]
+ [!INCLUDE [updated-for-az](~/reusable-content/ce-skilling/azure/includes/updated-for-az.md)]
1. Have an existing restore point that you want to restore from. If you want to create a new restore, see [the tutorial to create a new user-defined restore point](sql-data-warehouse-restore-points.md).
synapse-analytics Sql Data Warehouse Restore Deleted Dw https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-restore-deleted-dw.md
In this article, you learn to restore a dedicated SQL pool (formerly SQL DW) usi
## Before you begin **Verify your DTU capacity.** Each dedicated SQL pool (formerly SQL DW) is hosted by a [logical SQL server](/azure/azure-sql/database/logical-servers) (for example, myserver.database.windows.net) which has a default DTU quota. Verify that the server has enough remaining DTU quota for the database being restored. To learn how to calculate DTU needed or to request more DTU, see [Request a DTU quota change](sql-data-warehouse-get-started-create-support-ticket.md).
synapse-analytics Sql Data Warehouse Restore From Deleted Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-restore-from-deleted-server.md
In this article, you learn how to restore a dedicated SQL pool (formerly SQL DW)
## Before you begin ## Restore the SQL pool from the deleted server
synapse-analytics Sql Data Warehouse Restore From Geo Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-restore-from-geo-backup.md
In this article, you learn to restore your dedicated SQL pool (formerly SQL DW)
## Before you begin **Verify your DTU capacity.** Each dedicated SQL pool (formerly SQL DW) is hosted by a [logical SQL server](/azure/azure-sql/database/logical-servers) (for example, myserver.database.windows.net) which has a default DTU quota. Verify that the SQL server has enough remaining DTU quota for the database being restored. To learn how to calculate DTU needed or to request more DTU, see [Request a DTU quota change](sql-data-warehouse-get-started-create-support-ticket.md).
synapse-analytics Upgrade To Latest Generation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/upgrade-to-latest-generation.md
You can now seamlessly upgrade to the dedicated SQL pool (formerly SQL DW) Compu
### Before you begin - Sign in to the [Azure portal](https://portal.azure.com/). - Ensure that the dedicated SQL pool (formerly SQL DW) is running; it must be running to migrate to Gen2.
WHERE idx.type_desc = 'CLUSTERED COLUMNSTORE';
## Restore from an Azure geographical region using PowerShell To recover a database, use the [Restore-AzSqlDatabase](/powershell/module/az.sql/restore-azsqldatabase?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json) cmdlet.
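A sketch of a geo-restore, with placeholder names: the recoverable database comes from `Get-AzSqlDatabaseGeoBackup`, and its resource ID feeds the restore.

```powershell
# Hypothetical names; fetch the geo-redundant backup of the source database
$geoBackup = Get-AzSqlDatabaseGeoBackup -ResourceGroupName "sourceRG" `
    -ServerName "sourceserver" -DatabaseName "mySampleDataWarehouse"

# Restore it to a target server, optionally picking a service objective
Restore-AzSqlDatabase -FromGeoBackup -ResourceGroupName "targetRG" `
    -ServerName "targetserver" -TargetDatabaseName "myRestoredDW" `
    -ResourceId $geoBackup.ResourceID -ServiceObjectiveName "DW1000c"
```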
synapse-analytics What Is A Data Warehouse Unit Dwu Cdwu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/what-is-a-data-warehouse-unit-dwu-cdwu.md
To change DWUs:
#### PowerShell To change the DWUs, use the [Set-AzSqlDatabase](/powershell/module/az.sql/set-azsqldatabase) PowerShell cmdlet. The following example sets the service level objective to DW1000 for the database MySQLDW that is hosted on server MyServer.
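Sketched with the names from that sentence (the resource group is a placeholder), the call looks like:

```powershell
# Set the service level objective to DW1000 for MySQLDW on MyServer
Set-AzSqlDatabase -ResourceGroupName "myResourceGroup" `
    -ServerName "MyServer" -DatabaseName "MySQLDW" `
    -RequestedServiceObjectiveName "DW1000"
```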
synapse-analytics Resource Consumption Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/resource-consumption-models.md
To change DWUs:
#### PowerShell To change the DWUs, use the [Set-AzSqlDatabase](/powershell/module/az.sql/set-azsqldatabase) PowerShell cmdlet. The following example sets the service level objective to DW1000 for the database MySQLDW that is hosted on server MyServer.
synapse-analytics Synapse Service Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-service-identity.md
This article helps you understand managed identity (formerly known as Managed Service Identity/MSI) and how it works in Azure Synapse. ## Overview
time-series-insights How To Create Environment Using Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/how-to-create-environment-using-cli.md
This document will guide you through creating a new Time Series Insights Gen2 Environment. ## Prerequisites
time-series-insights Time Series Insights Customer Data Requests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/time-series-insights-customer-data-requests.md
Last updated 10/02/2020
Azure Time Series Insights is a managed cloud service with storage, analytics, and visualization components that make it easy to ingest, store, explore, and analyze billions of events simultaneously. To view, export, and delete personal data that may be subject to a data subject request, an Azure Time Series Insights tenant administrator can use either the Azure portal or the REST APIs. Using the Azure portal to service data subject requests provides a less complex way to perform these operations, which most users prefer.
Azure Time Series Insights considers personal data to be data associated with ad
A tenant administrator can delete customer data using the Azure portal. However, before you delete customer data through the portal, you should remove the user's access policies from the Time Series Insights environment within the Azure portal. For more information, read [Grant data access to a Time Series Insights environment using Azure portal](./concepts-access-policies.md).
Time Series Insights is integrated with the Policy blade in the Azure portal. Bo
As with deleting data, a tenant administrator can view and export data stored in Time Series Insights from the Policy blade in the Azure portal. If you're a tenant administrator, you can view data access policies within the Time Series Insights environment in the Azure portal. For more information, read [Grant data access to a Time Series Insights environment using Azure portal](./concepts-access-policies.md).
time-series-insights Time Series Insights Manage Resources Using Azure Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/time-series-insights-manage-resources-using-azure-resource-manager-template.md
A Resource Manager template is a JSON file that defines the infrastructure and c
The [timeseriesinsights-environment-with-eventhub](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.timeseriesinsights/timeseriesinsights-environment-with-eventhub) quickstart template is published on GitHub. This template creates an Azure Time Series Insights environment, a child event source configured to consume events from an Event Hub, and access policies that grant access to the environment's data. If an existing Event Hub isn't specified, one will be created with the deployment. ## Specify deployment template and parameters
traffic-manager Configure Multivalue Routing Method Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/configure-multivalue-routing-method-template.md
This article describes how to use an Azure Resource Manager template (ARM Template) to create a nested, Multivalue profile with the min-child feature. If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
traffic-manager How To Add Endpoint Existing Profile Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/how-to-add-endpoint-existing-profile-template.md
This article describes how to use an Azure Resource Manager template (ARM Template) to add an external endpoint to an existing Traffic Manager profile. If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
traffic-manager Quickstart Create Traffic Manager Profile Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/quickstart-create-traffic-manager-profile-bicep.md
This quickstart describes how to use Bicep to create a Traffic Manager profile with external endpoints using the performance routing method. ## Prerequisites
traffic-manager Quickstart Create Traffic Manager Profile Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/quickstart-create-traffic-manager-profile-cli.md
In this quickstart, you'll create two instances of a web application. Each of th
:::image type="content" source="./media/quickstart-create-traffic-manager-profile/environment-diagram.png" alt-text="Diagram of Traffic Manager deployment environment." lightbox="./media/quickstart-create-traffic-manager-profile/environment-diagram.png"::: [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
traffic-manager Quickstart Create Traffic Manager Profile Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/quickstart-create-traffic-manager-profile-powershell.md
In this quickstart, you'll create two instances of a web application. Each of th
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) now. If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
traffic-manager Quickstart Create Traffic Manager Profile Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/quickstart-create-traffic-manager-profile-template.md
This quickstart describes how to use an Azure Resource Manager template (ARM Template) to create a Traffic Manager profile with external endpoints using the performance routing method. If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
traffic-manager Traffic Manager Powershell Websites High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/scripts/traffic-manager-powershell-websites-high-availability.md
This script creates a resource group, two app service plans, two web apps, a tra
If needed, install Azure PowerShell using the instructions in the [Azure PowerShell guide](/powershell/azure), and then run `Connect-AzAccount` to create a connection with Azure. ## Sample script [!code-powershell[main](../../../powershell_scripts/traffic-manager/direct-traffic-for-increased-application-availability/direct-traffic-for-increased-application-availability.ps1 "Route traffic for high availability")]
traffic-manager Traffic Manager Diagnostic Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/traffic-manager-diagnostic-logs.md
Azure Traffic Manager resource logs can provide insight into the behavior of the
* This guide requires an Azure Storage account. To learn more, see [Create a storage account](../storage/common/storage-account-create.md). If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure. ## Enable resource logging
traffic-manager Traffic Manager Endpoint Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/traffic-manager-endpoint-types.md
Previously updated : 04/27/2023 Last updated : 06/27/2024
There are three types of endpoint supported by Traffic
* [**External endpoints**](#external-endpoints) are used for IPv4/IPv6 addresses, FQDNs, or for services hosted outside Azure. These services can either be on-premises or with a different hosting provider. * [**Nested endpoints**](#nested-endpoints) are used to combine Traffic Manager profiles to create more flexible traffic-routing schemes to support the needs of larger, more complex deployments.
-There's no restriction on how endpoints of different types are combined in a single Traffic Manager profile. Each profile can contain any mix of endpoint types.
+There are some restrictions on how endpoints of different types can be combined in a single Traffic Manager profile or nested profile hierarchy. You can't mix external endpoints whose targets are of different types (domain name versus IP address), and you can't mix external endpoints that have IP addresses as targets with Azure endpoints.
The following sections describe each endpoint type in greater depth.
traffic-manager Traffic Manager Powershell Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/traffic-manager-powershell-arm.md
Azure Resource Manager is the preferred management interface for services in Azure. Azure Traffic Manager profiles can be managed using Azure Resource Manager-based APIs and tools. ## Resource model
Each Traffic Manager profile is represented by a resource of type 'TrafficManage
## Setting up Azure PowerShell These instructions use Microsoft Azure PowerShell. The following article explains how to install and configure Azure PowerShell.
traffic-manager Traffic Manager Subnet Override Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/traffic-manager-subnet-override-powershell.md
There are two types of routing profiles that support subnet overrides:
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - For this guide you need an App Service and a Traffic Manager profile. To learn more, see [Create a Traffic Manager profile](./quickstart-create-traffic-manager-profile.md). If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
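As a rough sketch of what the subnet override steps in this article build toward, the following retrieves an endpoint, adds an IP address range, and commits the change. The cmdlet and parameter names follow the Az.TrafficManager module's conventions and should be verified, and the profile, endpoint, and address range values are placeholder assumptions:

```azurepowershell
# Get the Traffic Manager endpoint to apply the subnet override to
$endpoint = Get-AzTrafficManagerEndpoint -Name "myEndpoint" `
    -ProfileName "myTrafficManagerProfile" `
    -ResourceGroupName "myResourceGroup" `
    -Type AzureEndpoints

# Route queries originating from this address range to the endpoint
Add-AzTrafficManagerIpAddressRange -TrafficManagerEndpoint $endpoint -First "1.2.3.0" -Last "1.2.3.250"

# Commit the updated endpoint
Set-AzTrafficManagerEndpoint -TrafficManagerEndpoint $endpoint
```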
trusted-signing How To Sign Ci Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/trusted-signing/how-to-sign-ci-policy.md
To complete the steps in this article, you need:
## Sign a CI policy
-1. Unzip the Az.CodeSigning module to a folder.
1. Open [PowerShell 7](https://github.com/PowerShell/PowerShell/releases/latest).
-1. In the *Az.CodeSigning* folder, run this command:
-
- ```powershell
- Import-Module .\Az.CodeSigning.psd1
- ```
1. Optionally, you can create a *metadata.json* file that looks like this example. (The `"Endpoint"` URI value must be a URI that aligns with the region where you created your Trusted Signing account and certificate profile when you set up these resources.)
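For illustration, here's a sketch of generating such a *metadata.json* file from PowerShell; the account name, certificate profile name, and regional endpoint shown are placeholder assumptions:

```powershell
# Write a metadata.json file for CI policy signing; replace each value with
# your own Trusted Signing account, certificate profile, and regional endpoint
@"
{
  "Endpoint": "https://eus.codesigning.azure.net/",
  "CodeSigningAccountName": "<TrustedSigningAccountName>",
  "CertificateProfileName": "<CertificateProfileName>"
}
"@ | Set-Content -Path .\metadata.json
```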
update-manager Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/support-matrix.md
Following is the list of supported images and no other marketplace images releas
| | sql2019-ws2019 | standard | | | sql2019-ws2019 | standard-gen2| |microsoftazuresiterecovery | process-server | windows-2012-r2-datacenter |
+|microsoft-dsvm | dsvm-windows </br> dsvm-win-2019 </br> dsvm-win-2022 | * </br> * </br> * |
#### Supported Linux OS versions
Following is the list of supported images and no other marketplace images releas
|microsoftcblmariner | cbl-mariner | cbl-mariner-1 </br> 1-gen2 </br> cbl-mariner-2 </br> cbl-mariner-2-gen2 | | |microsoft-aks | aks |aks-engine-ubuntu-1804-202112 | | |microsoft-dsvm |aml-workstation | ubuntu-20, ubuntu-20-gen2 | |
+|microsoft-dsvm | aml-workstation | ubuntu |
+|| ubuntu-hpc | 1804, 2004-preview-ndv5, 2004, 2204, 2204-preview-ndv5 |
+|| ubuntu-2004 | 2004, 2004-gen2 |
|redhat | rhel| 7*,8*,9* | 74-gen2 | |redhat | rhel-ha | 8* | 8.1, 81_gen2 | |redhat | rhel-raw | 7*,8*,9* | |
Following is the list of supported images and no other marketplace images releas
|redhat | rhel-sap-ha| 7*, 8* | 7.5| |redhat | rhel-sap-apps | 90sapapps-gen2 | |redhat | rhel-sap-ha | 90sapha-gen2 |
+|redhat | rhel-byos | rhel-lvm88, rhel-lvm88-gen2, rhel-lvm92, rhel-lvm92-gen2 |
+|| rhel-ha | 9_2, 9_2-gen2 |
+|| rhel-sap-apps | 9_0, 90sapapps-gen2, 9_2, 92sapapps-gen2 |
+|| rhel-sap-ha | 9_2, 92sapha-gen2 |
|suse | opensuse-leap-15-* | gen* | |suse | sles-12-sp5-* | gen* | |suse | sles-sap-12-sp5* |gen* |
Following is the list of supported images and no other marketplace images releas
| | sles-sap-byos | 12-sp4, 12-sp4-gen2, gen2-12-sp4 | | | sles-sapcal | 12-sp3 | | | sles-standard | 12-sp4-gen2 |
+|| sle-hpc-15-sp4-byos | gen1, gen2 |
+|| sle-hpc-15-sp5-byos | gen1, gen2 |
+|| sle-hpc-15-sp5 | gen1, gen2 |
+|| sles-15-sp4-byos | gen1, gen2 |
+|| sles-15-sp4-chost-byos | gen1, gen2 |
+|| sles-15-sp4-hardened-byos | gen1, gen2 |
+|| sles-15-sp5-basic | gen1, gen2 |
+|| sles-15-sp5-byos | gen1, gen2|
+|| sles-15-sp5-chost-byos | gen1, gen2 |
+|| sles-15-sp5-hardened-byos | gen1, gen2 |
+|| sles-15-sp5-sapcal | gen1, gen2 |
+|| sles-15-sp5 | gen1, gen2 |
+|| sles-sap-15-sp4-byos | gen1, gen2 |
+|| sles-sap-15-sp4-hardened-byos | gen1, gen2 |
+|| sles-sap-15-sp5-byos | gen1, gen2 |
+|| sles-sap-15-sp5-hardened-byos| gen1, gen2 |
|oracle | oracle-linux | 7*, ol7*, ol8*, ol9*, ol9-lvm*, 8, 8-ci, 81, 81-ci, 81-gen2 | | | oracle-database | oracle_db_21 | | | oracle-database-19-3 | oracle-database-19-0904 |
Following is the list of supported images and no other marketplace images releas
| |centos-lvm | 7-lvm, 8-lvm | | |centos-ci | 7-ci | | |centos-lvm | 7-lvm-gen2 |
+|almalinux | almalinux | 8-gen1, 8-gen2, 9-gen1, 9-gen2|
+||almalinux-x86_64 | 8-gen1, 8-gen2, 9-gen1, 9-gen2 |
+||almalinux-hpc | 8_6-hpc, 8_6-hpc-gen2 |
+| aviatrix-systems | aviatrix-bundle-payg | aviatrix-enterprise-bundle-byol|
+|| aviatrix-copilot |avx-cplt-byol-01, avx-cplt-byol-02 |
+|| aviatrix-companion-gateway-v9 | aviatrix-companion-gateway-v9|
+|| aviatrix-companion-gateway-v10 | aviatrix-companion-gateway-v10,</br> aviatrix-companion-gateway-v10u|
+|| aviatrix-companion-gateway-v12 | aviatrix-companion-gateway-v12|
+|| aviatrix-companion-gateway-v13 | aviatrix-companion-gateway-v13,</br> aviatrix-companion-gateway-v13u|
+|| aviatrix-companion-gateway-v14 | aviatrix-companion-gateway-v14,</br> aviatrix-companion-gateway-v14u |
+|| aviatrix-companion-gateway-v16 | aviatrix-companion-gateway-v16|
+| cncf-upstream | capi | ubuntu-1804-gen1, ubuntu-2004-gen1, ubuntu-2204-gen1 |
+| credativ | debian | 9, 9-backports |
+| debian | debian-10 | 10, 10-gen2,</br> 10-backports, </br> 10-backports-gen2 |
+|| debian-10-daily | 10, 10-gen2,</br> 10-backports,</br> 10-backports-gen2|
+|| debian-11 | 11, 11-gen2,</br> 11-backports, </br> 11-backports-gen2 |
+|| debian-11-daily | 11, 11-gen2,</br> 11-backports, </br> 11-backports-gen2 |
+ ### Custom images
We support VMs created from customized images (including images uploaded to [Azu
|**Linux operating system**| ||
- |CentOS 7, 8|
+ |CentOS 7 |
|Oracle Linux 7.x, 8x| |Red Hat Enterprise 7, 8, 9| |SUSE Linux Enterprise Server 12.x, 15.0-15.4|
virtual-desktop App Attach Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/app-attach-setup.md
Here's how to add an MSIX or Appx image as an app attach package using the [Az.D
$app = Import-AzWvdAppAttachPackageInfo @parameters ```
-4. Check you only have one object in the application properties by running the following command:
+4. Check you only have one object in the application properties by running the following commands:
```azurepowershell $app | FL * ```
- *Optional*: if you have more than one object in the output, for example an x64 and an x86 version of the same application, you can use the parameter `PackageFullName` to specify which application you want to add by running the following command:
+ *Optional*: if you have more than one object in the output, for example an x64 and an x86 version of the same application, you can use the parameter `PackageFullName` to specify which application you want to add by running the following commands:
```azurepowershell # Specify the package full name
Here's how to add an MSIX or Appx image as an app attach package using the [Az.D
There's no output when the package is added successfully.
-6. You can verify the package is added by running the following command:
+6. You can verify the package is added by running the following commands:
```azurepowershell $parameters = @{
Here's how to assign an application package to host pools as well as groups and
> [!IMPORTANT] > The host pool IDs you specify each time will overwrite any existing assignments. If you want to add or remove a host pool to or from an existing list of host pools, you need to specify all the host pools you want to assign the application to.
-1. In the same PowerShell session, get the resource IDs of the host pool(s) you want to assign the application to and add them to an array to by running the following command:
+1. In the same PowerShell session, get the resource IDs of the host pool(s) you want to assign the application to and add them to an array by running the following commands:
```azurepowershell # Add a comma-separated list of host pools names
Here's how to assign an application package to host pools as well as groups and
} ```
-1. Once you have the resource IDs of the host pool(s), you can assign the application package to them by running the following command:
+1. Once you have the resource IDs of the host pool(s), you can assign the application package to them by running the following commands:
```azurepowershell $parameters = @{
Here's how to assign an application package to host pools as well as groups and
Update-AzWvdAppAttachPackage @parameters ```
-1. To unassign the application package from all host pools, you can pass an empty array of host pools by running the following command:
+1. To unassign the application package from all host pools, you can pass an empty array of host pools by running the following commands:
```azurepowershell $parameters = @{
Here's how to assign an application to groups and users using the [Az.DesktopVir
1. Get the object ID of the groups or users you want to add to or remove from the application and add them to an array by using one of the following examples. We recommend you assign applications to groups.
- 1. Get the object ID of the group or groups and add them to an array to by running the following command. This example uses the group display name:
+ 1. Get the object ID of the group or groups and add them to an array by running the following commands. This example uses the group display name:
```azurepowershell # Add a comma-separated list of group names
Here's how to assign an application to groups and users using the [Az.DesktopVir
} ```
- 1. Get the object ID of the user(s) and add them to an array to by running the following command. This example uses the user principal name (UPN):
+ 1. Get the object ID of the user(s) and add them to an array by running the following commands. This example uses the user principal name (UPN):
```azurepowershell # Add a comma-separated list of user principal names
Here's how to assign an application to groups and users using the [Az.DesktopVir
Connect-MgGraph -Scopes 'User.Read.All' # Create an array and add the ID for each user
- $Ids = @()
+ $userIds = @()
foreach ($user in $users) {
- $Ids += (Get-MgUser | ? UserPrincipalName -eq $user).Id
+ $userIds += (Get-MgUser | ? UserPrincipalName -eq $user).Id
} ```
-1. Once you have the object IDs of the users or groups, you can add them to or remove them from the application by using one of the following examples, which assigns the [Desktop Virtualization User](rbac.md#desktop-virtualization-user) RBAC role. You can also assign the Desktop Virtualization User RBAC role to your groups or users using the [New-AzRoleAssignment](../role-based-access-control/role-assignments-powershell.md) cmdlet.
+1. Once you have the object IDs of the users or groups, you can add them to or remove them from the application by using one of the following examples, which assigns the [Desktop Virtualization User](rbac.md#desktop-virtualization-user) RBAC role.
- 1. To add the groups or users to the application, run the following command:
+ 1. To add the groups or users to the application, run the following commands:
```azurepowershell $parameters = @{
- Name = '<AppName>'
- ResourceGroupName = '<ResourceGroupName>'
- Location = '<AzureRegion>'
- PermissionsToAdd = $Ids
+ Name = '<AppName>'
+ ResourceGroupName = '<ResourceGroupName>'
+ }
+
+ $appAttachPackage = Get-AzWvdAppAttachPackage @parameters
+
+ foreach ($userId in $userIds) {
+ New-AzRoleAssignment -ObjectId $userId -RoleDefinitionName "Desktop Virtualization User" -Scope $appAttachPackage.Id
}
-
- Update-AzWvdAppAttachPackage @parameters
```
- 1. To remove the groups or users to the application, run the following command:
+ 1. To remove the groups or users to the application, run the following commands:
```azurepowershell $parameters = @{
- Name = '<AppName>'
- ResourceGroupName = '<ResourceGroupName>'
- Location = '<AzureRegion>'
- PermissionsToRemove = $objectIds
+ Name = '<AppName>'
+ ResourceGroupName = '<ResourceGroupName>'
+ }
+
+ $appAttachPackage = Get-AzWvdAppAttachPackage @parameters
+
+ foreach ($userId in $userIds) {
+ Remove-AzRoleAssignment -ObjectId $userId -RoleDefinitionName "Desktop Virtualization User" -Scope $appAttachPackage.Id
}
-
- Update-AzWvdAppAttachPackage @parameters
```
Here's how to change a package's registration type and state using the [Az.Deskt
```azurepowershell $parameters = @{
- FullName = '<FullName>'
- HostPoolName = '<HostPoolName>'
+ Name = '<Name>'
ResourceGroupName = '<ResourceGroupName>' Location = '<AzureRegion>' IsRegularRegistration = $true
Here's how to change a package's registration type and state using the [Az.Deskt
```azurepowershell $parameters = @{ Name = '<Name>'
- HostPoolName = '<HostPoolName>'
ResourceGroupName = '<ResourceGroupName>' Location = '<AzureRegion>' IsActive = $true
Here's how to add an application from the package you added in this article to a
Here's how to add an application from the package you added in this article to a RemoteApp application group using the [Az.DesktopVirtualization](/powershell/module/az.desktopvirtualization/) PowerShell module.
-1. In the same PowerShell session, if there are multiple applications in the package, you need to get the application ID of the application you want to add from the package by running the following command:
+1. In the same PowerShell session, if there are multiple applications in the package, you need to get the application ID of the application you want to add from the package by running the following commands:
```azurepowershell Write-Host "These are the application IDs available in the package. Many packages only contain one application." -ForegroundColor Yellow
Here's how to add an application from the package you added in this article to a
New-AzWvdApplication @parameters ```
-1. Verify the list of applications in the application group by running the following command:
+1. Verify the list of applications in the application group by running the following commands:
```azurepowershell $parameters = @{
Here's how to update an existing package using the Azure portal:
Here's how to update an existing package using the [Az.DesktopVirtualization](/powershell/module/az.desktopvirtualization/) PowerShell module.
-1. In the same PowerShell session, get the properties of the updated application and store them in a variable by running the following command:
+1. In the same PowerShell session, get the properties of the updated application and store them in a variable by running the following commands:
```azurepowershell # Get the properties of the application
Here's how to update an existing package using the [Az.DesktopVirtualization](/p
$app = Import-AzWvdAppAttachPackageInfo @parameters ```
-1. Check you only have one object in the application properties by running the following command:
+1. Check you only have one object in the application properties by running the following commands:
```azurepowershell $app | FL * ```
- If you have more than one object in the output, for example an x64 and an x86 version of the same application, you can use the parameter `PackageFullName` to specify which one you want to add by running the following command:
+ If you have more than one object in the output, for example an x64 and an x86 version of the same application, you can use the parameter `PackageFullName` to specify which one you want to add by running the following commands:
```azurepowershell # Specify the package full name
Here's how to add an MSIX package using the [Az.DesktopVirtualization](/powershe
[!INCLUDE [include-cloud-shell-local-powershell](includes/include-cloud-shell-local-powershell.md)]
-2. Get the properties of the application in the MSIX image you want to add and store them in a variable by running the following command:
+2. Get the properties of the application in the MSIX image you want to add and store them in a variable by running the following commands:
```azurepowershell # Get the properties of the MSIX image
Here's how to add an MSIX package using the [Az.DesktopVirtualization](/powershe
hp01/expandmsiximage ```
-3. Check you only have one object in the application properties by running the following command:
+3. Check you only have one object in the application properties by running the following commands:
```azurepowershell $app | FL * ```
- If you have more than one object in the output, for example an x64 and an x86 version of the same application, you can use the parameter `PackageFullName` to specify which one you want to add by running the following command:
+ If you have more than one object in the output, for example an x64 and an x86 version of the same application, you can use the parameter `PackageFullName` to specify which one you want to add by running the following commands:
```azurepowershell # Specify the package full name
Here's how to add an MSIX package using the [Az.DesktopVirtualization](/powershe
There's no output when the MSIX package is added successfully.
-5. You can verify the MSIX package is added by running the following command:
+5. You can verify the MSIX package is added by running the following commands:
```azurepowershell $parameters = @{
Here's how to change a package's registration type and state using the Azure por
Here's how to change a package's registration type and state using the [Az.DesktopVirtualization](/powershell/module/az.desktopvirtualization/) PowerShell module.
-1. In the same PowerShell session, get a list of MSIX packages on a host pool and their current registration type and state by running the following command:
+1. In the same PowerShell session, get a list of MSIX packages on a host pool and their current registration type and state by running the following commands:
```azurepowershell $parameters = @{
Here's how to add MSIX applications to an application group using the [Az.Deskto
New-AzWvdApplication @parameters ```
- 1. To add an MSIX application to a RemoteApp application group, if there are multiple applications in the package, you need to get the application ID of the application you want to add from the package by running the following command:
+ 1. To add an MSIX application to a RemoteApp application group, if there are multiple applications in the package, you need to get the application ID of the application you want to add from the package by running the following commands:
```azurepowershell Write-Host "These are the application IDs available in the package. Many packages only contain one application." -ForegroundColor Yellow
Here's how to add MSIX applications to an application group using the [Az.Deskto
New-AzWvdApplication @parameters ```
- Verify the list of applications in the application group by running the following command:
+ Verify the list of applications in the application group by running the following commands:
```azurepowershell $parameters = @{
Here's how to remove an MSIX package from your host pool using the Azure portal:
Here's how to remove applications using the [Az.DesktopVirtualization](/powershell/module/az.desktopvirtualization/) PowerShell module.
-1. In the same PowerShell session, get a list of MSIX packages on a host pool by running the following command:
+1. In the same PowerShell session, get a list of MSIX packages on a host pool by running the following commands:
```azurepowershell $parameters = @{
Here's how to remove applications using the [Az.DesktopVirtualization](/powershe
My App \\fileshare\Apps\MyApp\MyApp.cim hp01/MyApp_1.0.0.0_neutral__abcdef123ghij 1.0.0.0 ```
-1. Find the package you want to remove and use the value for the `Name` parameter, but remove the **host pool name** and `/` from the start. For example, `hp01/MyApp_1.0.0.0_neutral__abcdef123ghij` becomes `MyApp_1.0.0.0_neutral__abcdef123ghij`. Then remove the package by running the following command:
+1. Find the package you want to remove and use the value for the `Name` parameter, but remove the **host pool name** and `/` from the start. For example, `hp01/MyApp_1.0.0.0_neutral__abcdef123ghij` becomes `MyApp_1.0.0.0_neutral__abcdef123ghij`. Then remove the package by running the following commands:
```azurepowershell $parameters = @{
virtual-desktop Azure Stack Hci Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/azure-stack-hci-overview.md
Title: Azure Virtual Desktop with Azure Stack HCI
+ Title: Azure Virtual Desktop for Azure Stack HCI
description: Learn about using Azure Virtual Desktop with Azure Stack HCI, enabling you to deploy session hosts where you need them.
Last updated 04/11/2024
-# Azure Virtual Desktop with Azure Stack HCI
+# Azure Virtual Desktop for Azure Stack HCI
> [!IMPORTANT]
-> Azure Virtual Desktop with Azure Stack HCI is currently in preview for Azure Government and Azure China. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> Azure Virtual Desktop for Azure Stack HCI is currently in preview for Azure Government and Azure China. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-Using Azure Virtual Desktop with Azure Stack HCI, you can deploy session hosts for Azure Virtual Desktop where you need them. If you already have an existing on-premises virtual desktop infrastructure (VDI) deployment, Azure Virtual Desktop with Azure Stack HCI can improve your experience. If you're already using Azure Virtual Desktop with your session hosts in Azure, you can extend your deployment to your on-premises infrastructure to better meet your performance or data locality needs.
+Using Azure Virtual Desktop for Azure Stack HCI, you can deploy session hosts for Azure Virtual Desktop where you need them. If you already have an existing on-premises virtual desktop infrastructure (VDI) deployment, Azure Virtual Desktop with Azure Stack HCI can improve your experience. If you're already using Azure Virtual Desktop with your session hosts in Azure, you can extend your deployment to your on-premises infrastructure to better meet your performance or data locality needs.
Azure Virtual Desktop service components, such as host pools, workspaces, and application groups, are all deployed in Azure, but you can choose to deploy session hosts on Azure Stack HCI. As Azure Virtual Desktop with Azure Stack HCI isn't an Azure Arc-enabled service, it's not supported as a standalone service outside of Azure, in a multicloud environment, or on other Azure Arc-enabled servers. ## Benefits
-Using Azure Virtual Desktop with Azure Stack HCI, you can:
+Using Azure Virtual Desktop for Azure Stack HCI, you can:
- Improve performance for Azure Virtual Desktop users in areas with poor connectivity to the Azure public cloud by giving them session hosts closer to their location.
Finally, users can connect using the same [Remote Desktop clients](users/remote-
## Licensing and pricing
-To run Azure Virtual Desktop with Azure Stack HCI, you need to make sure you're licensed correctly and be aware of the pricing model. There are three components that affect how much it costs to run Azure Virtual Desktop with Azure Stack HCI:
+To run Azure Virtual Desktop for Azure Stack HCI, you need to make sure you're licensed correctly and be aware of the pricing model. There are three components that affect how much it costs to run Azure Virtual Desktop with Azure Stack HCI:
-- **User access rights.** The same licenses that grant access to Azure Virtual Desktop on Azure also apply to Azure Virtual Desktop with Azure Stack HCI. Learn more at [Azure Virtual Desktop pricing](https://azure.microsoft.com/pricing/details/virtual-desktop/).
+- **User access rights.** The same licenses that grant access to Azure Virtual Desktop on Azure also apply to Azure Virtual Desktop for Azure Stack HCI. Learn more at [Azure Virtual Desktop pricing](https://azure.microsoft.com/pricing/details/virtual-desktop/).
- **Azure Stack HCI service fee.** Learn more at [Azure Stack HCI pricing](https://azure.microsoft.com/pricing/details/azure-stack/hci/). -- **Azure Virtual Desktop on Azure Stack HCI service fee.** This fee requires you to pay for each active virtual CPU (vCPU) for your Azure Virtual Desktop session hosts running on Azure Stack HCI. Learn more at [Azure Virtual Desktop pricing](https://azure.microsoft.com/pricing/details/virtual-desktop/).
+- **Azure Virtual Desktop for Azure Stack HCI service fee.** This fee requires you to pay for each active virtual CPU (vCPU) for your Azure Virtual Desktop session hosts running on Azure Stack HCI. Learn more at [Azure Virtual Desktop pricing](https://azure.microsoft.com/pricing/details/virtual-desktop/).
## Data storage
There are different classifications of data for Azure Virtual Desktop, such as c
Azure Virtual Desktop with Azure Stack HCI has the following limitations: -- You can't use some Azure Virtual Desktop features when session hosts running on Azure Stack HCI, such as:
-
- - [Azure Virtual Desktop Insights](insights.md)
- - [Session host scaling with Azure Automation](set-up-scaling-script.md)
- - [Per-user access pricing](licensing.md)
- - Each host pool must only contain session hosts on Azure or on Azure Stack HCI. You can't mix session hosts on Azure and on Azure Stack HCI in the same host pool. - Azure Stack HCI supports many types of hardware and on-premises networking capabilities, so performance and user density might vary compared to session hosts running on Azure. Azure Virtual Desktop's [virtual machine sizing guidelines](/windows-server/remote/remote-desktop-services/virtual-machine-recs) are broad, so you should use them for initial performance estimates and monitor after deployment.
virtual-desktop Clipboard Transfer Direction Data Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/clipboard-transfer-direction-data-types.md
To configure the clipboard using Intune, follow these steps. This process [deplo
| Value | Description | |--|--|
- | `<![CDATA[<enabled/><data id="TS_CS_CLIPBOARD_RESTRICTION" value="0"/>]]>` | Disable clipboard transfers from session host to client. |
- | `<![CDATA[<enabled/><data id="TS_CS_CLIPBOARD_RESTRICTION" value="1"/>]]>` | Allow plain text. |
- | `<![CDATA[<enabled/><data id="TS_CS_CLIPBOARD_RESTRICTION" value="2"/>]]>` | Allow plain text and images. |
- | `<![CDATA[<enabled/><data id="TS_CS_CLIPBOARD_RESTRICTION" value="3"/>]]>` | Allow plain text, images, and Rich Text Format. |
- | `<![CDATA[<enabled/><data id="TS_CS_CLIPBOARD_RESTRICTION" value="4"/>]]>` | Allow plain text, images, Rich Text Format, and HTML. |
+ | `<![CDATA[<enabled/><data id="TS_CS_CLIPBOARD_RESTRICTION_Text" value="0"/>]]>` | Disable clipboard transfers from session host to client. |
+ | `<![CDATA[<enabled/><data id="TS_CS_CLIPBOARD_RESTRICTION_Text" value="1"/>]]>` | Allow plain text. |
+ | `<![CDATA[<enabled/><data id="TS_CS_CLIPBOARD_RESTRICTION_Text" value="2"/>]]>` | Allow plain text and images. |
+ | `<![CDATA[<enabled/><data id="TS_CS_CLIPBOARD_RESTRICTION_Text" value="3"/>]]>` | Allow plain text, images, and Rich Text Format. |
+ | `<![CDATA[<enabled/><data id="TS_CS_CLIPBOARD_RESTRICTION_Text" value="4"/>]]>` | Allow plain text, images, Rich Text Format, and HTML. |
1. Select **Save** to add the row. Repeat the previous two steps to configure the clipboard in the other direction, if necessary, then once you configure the settings you want, select **Next**.
virtual-desktop Required Fqdn Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/required-fqdn-endpoint.md
description: A list of FQDNs and endpoints you must allow, ensuring your Azure V
Previously updated : 05/24/2024 Last updated : 06/26/2024 # Required FQDNs and endpoints for Azure Virtual Desktop
The following table is the list of FQDNs and endpoints your session host VMs nee
| `*.prod.warm.ingest.monitor.core.windows.net` | TCP | 443 | Agent traffic<br />[Diagnostic output](diagnostics-log-analytics.md) | AzureMonitor | | `catalogartifact.azureedge.net` | TCP | 443 | Azure Marketplace | AzureFrontDoor.Frontend | | `gcs.prod.monitoring.core.windows.net` | TCP | 443 | Agent traffic | AzureCloud |
-| `kms.core.windows.net` | TCP | 1688 | Windows activation | Internet |
| `azkms.core.windows.net` | TCP | 1688 | Windows activation | Internet | | `mrsglobalsteus2prod.blob.core.windows.net` | TCP | 443 | Agent and side-by-side (SXS) stack updates | AzureCloud | | `wvdportalstorageblob.blob.core.windows.net` | TCP | 443 | Azure portal support | AzureCloud |
The following table lists optional FQDNs and endpoints that your session host vi
| `*.wvd.azure.us` | TCP | 443 | Service traffic | WindowsVirtualDesktop | | `*.prod.warm.ingest.monitor.core.usgovcloudapi.net` | TCP | 443 | Agent traffic<br />[Diagnostic output](diagnostics-log-analytics.md) | AzureMonitor | | `gcs.monitoring.core.usgovcloudapi.net` | TCP | 443 | Agent traffic | AzureCloud |
-| `kms.core.usgovcloudapi.net` | TCP | 1688 | Windows activation | Internet |
+| `azkms.core.usgovcloudapi.net` | TCP | 1688 | Windows activation | Internet |
| `mrsglobalstugviffx.blob.core.usgovcloudapi.net` | TCP | 443 | Agent and side-by-side (SXS) stack updates | AzureCloud | | `wvdportalstorageblob.blob.core.usgovcloudapi.net` | TCP | 443 | Azure portal support | AzureCloud | | `169.254.169.254` | TCP | 80 | [Azure Instance Metadata service endpoint](../virtual-machines/windows/instance-metadata-service.md) | N/A |
virtual-desktop Set Up Mfa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/set-up-mfa.md
Here's what you need to get started:
Here's how to create a Conditional Access policy that requires multifactor authentication when connecting to Azure Virtual Desktop:
-1. Sign in to the [Azure portal](https://portal.azure.com) as a global administrator, security administrator, or Conditional Access administrator.
-1. In the search bar, type *Microsoft Entra Conditional Access* and select the matching service entry.
-1. From the overview, select **Create new policy**.
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](/entra/identity/role-based-access-control/permissions-reference#conditional-access-administrator).
+1. Browse to **Protection** > **Conditional Access** > **Policies**.
+1. Select **New policy**.
1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies. 1. Under **Assignments** > **Users**, select **0 users and groups selected**. 1. Under the **Include** tab, select **Select users and groups** and check **Users and groups**, then under **Select**, select **0 users and groups selected**.
virtual-desktop Whats New Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-agent.md
Title: What's new in the Azure Virtual Desktop Agent? - Azure
description: New features and product updates for the Azure Virtual Desktop Agent. Previously updated : 05/21/2024 Last updated : 06/26/2024
A rollout may take several weeks before the agent is available in all environmen
| Release | Latest version | |--|--|
-| Production | 1.0.8431.2300 |
-| Validation | 1.0.9103.1000 |
+| Production | 1.0.9103.3700 |
+| Validation | 1.0.9103.3800 |
> [!TIP] > The Azure Virtual Desktop Agent is automatically installed when adding session hosts in most scenarios. If you need to install the agent manually, you can download it at [Register session hosts to a host pool](add-session-hosts-host-pool.md#register-session-hosts-to-a-host-pool), together with the steps to install it.
-## Version 1.0.9103.1000 (validation)
+## Version 1.0.9103.3800 (validation)
+
+*Published: June 2024*
+
+In this update, we've made the following changes:
+
+- General improvements and bug fixes.
+
+## Version 1.0.9103.3700
+
+*Published: June 2024*
+
+In this update, we've made the following changes:
+
+- General improvements and bug fixes.
+
+## Version 1.0.9103.2300
+
+*Published: June 2024*
+
+In this update, we've made the following changes:
+
+- General improvements and bug fixes.
+
+## Version 1.0.9103.1000
*Published: May 2024*
virtual-machine-scale-sets Disk Encryption Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/disk-encryption-powershell.md
The Azure PowerShell module is used to create and manage Azure resources from the PowerShell command line or in scripts. This article shows you how to use Azure PowerShell to create and encrypt a Virtual Machine Scale Set. For more information on applying Azure Disk Encryption to a Virtual Machine Scale Set, see [Azure Disk Encryption for Virtual Machine Scale Sets](disk-encryption-overview.md). ## Create an Azure Key Vault enabled for disk encryption
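As a minimal sketch of this step, the following creates a Key Vault with the disk encryption access policy enabled; the vault name, resource group, and region are placeholder assumptions:

```azurepowershell
# Create a Key Vault that Azure Disk Encryption can read secrets from
New-AzKeyVault -Name "myKeyVault$(Get-Random)" `
    -ResourceGroupName "myResourceGroup" `
    -Location "EastUS" `
    -EnabledForDiskEncryption
```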
virtual-machine-scale-sets Flexible Virtual Machine Scale Sets Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/flexible-virtual-machine-scale-sets-rest-api.md
This article steps through using an ARM template to create a Virtual Machine Scale Set. If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
If you don't have an Azure subscription, create a [free account](https://azure.m
## ARM template ARM templates let you deploy groups of related resources. In a single template, you can create the Virtual Machine Scale Set, install applications, and configure autoscale rules. With the use of variables and parameters, this template can be reused to update existing, or create additional, scale sets. You can deploy templates through the Azure portal, Azure CLI, or Azure PowerShell, or from continuous integration / continuous delivery (CI/CD) pipelines.
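For example, a minimal sketch of deploying a template with Azure PowerShell; the resource group name and template URI are placeholder assumptions:

```azurepowershell
# Deploy an ARM template to an existing resource group
New-AzResourceGroupDeployment -ResourceGroupName "myResourceGroup" `
    -TemplateUri "https://example.com/azuredeploy.json"
```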
virtual-machine-scale-sets Quick Create Bicep Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/quick-create-bicep-windows.md
A Virtual Machine Scale Set allows you to deploy and manage a set of auto-scaling virtual machines. You can scale the number of VMs in the Virtual Machine Scale Set manually, or define rules to autoscale based on resource usage like CPU, memory demand, or network traffic. An Azure load balancer then distributes traffic to the VM instances in the Virtual Machine Scale Set. In this quickstart, you create a Virtual Machine Scale Set and deploy a sample application with Bicep. ## Prerequisites
virtual-machine-scale-sets Quick Create Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/quick-create-cli.md
A Virtual Machine Scale Set allows you to deploy and manage a set of auto-scaling virtual machines. You can scale the number of VMs in the scale set manually, or define rules to autoscale based on resource usage like CPU, memory demand, or network traffic. An Azure load balancer then distributes traffic to the VM instances in the scale set. In this quickstart, you create a Virtual Machine Scale Set and deploy a sample application with the Azure CLI. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
virtual-machine-scale-sets Quick Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/quick-create-powershell.md
A Virtual Machine Scale Set allows you to deploy and manage a set of autoscaling
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. ## Create a scale set
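As a condensed sketch of the scale set creation this quickstart walks through, the simplified `New-AzVmss` parameter set creates the supporting network resources for you; every name below is a placeholder assumption:

```azurepowershell
# Create a scale set; virtual network, public IP, and load balancer
# resources are created automatically from the names provided
New-AzVmss -ResourceGroupName "myResourceGroup" `
    -Location "EastUS" `
    -VMScaleSetName "myScaleSet" `
    -VirtualNetworkName "myVnet" `
    -SubnetName "mySubnet" `
    -PublicIpAddressName "myPublicIPAddress" `
    -LoadBalancerName "myLoadBalancer" `
    -UpgradePolicyMode "Automatic" `
    -Credential (Get-Credential)
```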
virtual-machine-scale-sets Quick Create Template Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/quick-create-template-linux.md
A Virtual Machine Scale Set allows you to deploy and manage a set of auto-scaling virtual machines. You can scale the number of VMs in the scale set manually, or define rules to autoscale based on resource usage like CPU, memory demand, or network traffic. An Azure load balancer then distributes traffic to the VM instances in the scale set. In this quickstart, you create a Virtual Machine Scale Set and deploy a sample application with an Azure Resource Manager template (ARM template). ARM templates let you deploy groups of related resources. In a single template, you can create the Virtual Machine Scale Set, install applications, and configure autoscale rules. With the use of variables and parameters, this template can be reused to update existing, or create additional, scale sets. You can deploy templates through the Azure portal, Azure CLI, or Azure PowerShell, or from continuous integration / continuous delivery (CI/CD) pipelines.
virtual-machine-scale-sets Quick Create Template Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/quick-create-template-windows.md
A Virtual Machine Scale Set allows you to deploy and manage a set of auto-scaling virtual machines. You can scale the number of VMs in the scale set manually, or define rules to autoscale based on resource usage like CPU, memory demand, or network traffic. An Azure load balancer then distributes traffic to the VM instances in the scale set. In this quickstart, you create a Virtual Machine Scale Set and deploy a sample application with an Azure Resource Manager template (ARM template). ARM templates let you deploy groups of related resources. In a single template, you can create the Virtual Machine Scale Set, install applications, and configure autoscale rules. With the use of variables and parameters, this template can be reused to update existing, or create additional, scale sets. You can deploy templates through the Azure portal, Azure CLI, Azure PowerShell, or from continuous integration / continuous delivery (CI/CD) pipelines.
virtual-machine-scale-sets Tutorial Autoscale Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/tutorial-autoscale-cli.md
When you create a scale set, you define the number of VM instances that you wish
> * Stress-test VM instances and trigger autoscale rules > * Autoscale back in as demand is reduced [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
virtual-machine-scale-sets Tutorial Connect To Instances Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/tutorial-connect-to-instances-cli.md
A Virtual Machine Scale Set allows you to deploy and manage a set of virtual mac
> * List connection information > * Connect to individual instances using SSH [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
virtual-machine-scale-sets Tutorial Connect To Instances Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/tutorial-connect-to-instances-powershell.md
A Virtual Machine Scale Set allows you to deploy and manage a set of virtual mac
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. ## List instances in a scale set If you don't have a scale set already created, see [Tutorial: Create and manage a Virtual Machine Scale Set with Azure PowerShell](tutorial-create-and-manage-powershell.md).
virtual-machine-scale-sets Tutorial Create And Manage Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/tutorial-create-and-manage-cli.md
A Virtual Machine Scale Set allows you to deploy and manage a set of virtual mac
> * Scale out and in > * Stop, Start and restart VM instances [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
virtual-machine-scale-sets Tutorial Create And Manage Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/tutorial-create-and-manage-powershell.md
A Virtual Machine Scale Set allows you to deploy and manage a set of virtual mac
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. ## Create a resource group An Azure resource group is a logical container into which Azure resources are deployed and managed. A resource group must be created before a Virtual Machine Scale Set. Create a resource group with the [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup) command. In this example, a resource group named *myResourceGroup* is created in the *EastUS* region.
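A minimal sketch of that command, using the names from the article:

```azurepowershell
# Create a resource group to hold the scale set resources
New-AzResourceGroup -Name "myResourceGroup" -Location "EastUS"
```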
virtual-machine-scale-sets Tutorial Install Apps Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/tutorial-install-apps-cli.md
To run applications on virtual machine (VM) instances in a scale set, you first
> * Use the Azure Custom Script Extension > * Update a running application on a scale set [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
virtual-machine-scale-sets Tutorial Install Apps Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/tutorial-install-apps-powershell.md
To run applications on virtual machine (VM) instances in a scale set, you first
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. ## What is the Azure Custom Script Extension? The Custom Script Extension downloads and executes scripts on Azure VMs. This extension is useful for post deployment configuration, software installation, or any other configuration / management task. Scripts can be downloaded from Azure storage or GitHub, or provided to the Azure portal at extension run-time.
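As a rough sketch of how the extension is attached to a scale set with Azure PowerShell, the following adds the Custom Script Extension to an existing scale set model and applies it; the script URL, command, and resource names are placeholder assumptions:

```azurepowershell
# Extension settings: download a script and run it on each VM instance
$customConfig = @{
    fileUris         = @("https://example.com/install-app.ps1")
    commandToExecute = "powershell -ExecutionPolicy Unrestricted -File install-app.ps1"
}

# Add the Custom Script Extension to the scale set model, then apply the model
$vmss = Get-AzVmss -ResourceGroupName "myResourceGroup" -VMScaleSetName "myScaleSet"
$vmss = Add-AzVmssExtension -VirtualMachineScaleSet $vmss `
    -Name "CustomScript" `
    -Publisher "Microsoft.Compute" `
    -Type "CustomScriptExtension" `
    -TypeHandlerVersion "1.10" `
    -Setting $customConfig
Update-AzVmss -ResourceGroupName "myResourceGroup" -VMScaleSetName "myScaleSet" -VirtualMachineScaleSet $vmss
```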
virtual-machine-scale-sets Tutorial Use Custom Image Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/tutorial-use-custom-image-cli.md
When you create a scale set, you specify an image to be used when the VM instanc
> * Create a scale set from a specialized image > * Share an image gallery [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
virtual-machine-scale-sets Tutorial Use Disks Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/tutorial-use-disks-powershell.md
Virtual Machine Scale Sets use disks to store the VM instance's operating system
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. ## Default Azure disks
virtual-machine-scale-sets Virtual Machine Scale Sets Manage Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-manage-powershell.md
Throughout the lifecycle of a Virtual Machine Scale Set, you may need to run one
If you need to create a Virtual Machine Scale Set, you can [create a scale set with Azure PowerShell](quick-create-powershell.md). ## View information about a scale set To view the overall information about a scale set, use [Get-AzVmss](/powershell/module/az.compute/get-azvmss). The following example gets information about the scale set named *myScaleSet* in the *myResourceGroup* resource group. Enter your own names as follows:
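A minimal sketch of that call, using the names above:

```azurepowershell
# Display the model and configuration of the scale set
Get-AzVmss -ResourceGroupName "myResourceGroup" -VMScaleSetName "myScaleSet"
```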
virtual-machines Capacity Reservation Associate Virtual Machine Scale Set Flex https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/capacity-reservation-associate-virtual-machine-scale-set-flex.md
To learn more about these modes, go to [Virtual Machine Scale Sets Orchestration
This content applies to the flexible orchestration mode. For uniform orchestration mode, go to [Associate a virtual machine scale set with uniform orchestration to a Capacity Reservation group](capacity-reservation-associate-virtual-machine-scale-set.md) -
-> [!IMPORTANT]
-> Capacity Reservations with virtual machine set using flexible orchestration is currently in general availability for Fault Domain equal to 1.
-
-> [!IMPORTANT]
-> Capacity Reservations with virtual machine set using flexible orchestration is currently in Public Preview for Fault Domain greater than 1. This preview version is provided without a service-level agreement, and we don't recommend it for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-> During the preview, always attach reserved capacity during creation of new scale sets using flexible orchestration mode. There are known issues attaching capacity reservations to existing scale sets using flexible orchestration. Microsoft will update this page as more options become enabled during preview.
- ## Associate a new virtual machine scale set to a Capacity Reservation group **Option 1: Add to Virtual Machine profile** - If the Scale Set with flexible orchestration includes a VM profile, add the Capacity Reservation group property to the profile during Scale Set creation. Follow the same process used for a Scale Set using uniform orchestration. For sample code, see [Associate a virtual machine scale set with uniform orchestration to a Capacity Reservation group](capacity-reservation-associate-virtual-machine-scale-set.md).
virtual-machines Disks Cross Tenant Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-cross-tenant-customer-managed-keys.md
If you have questions about cross-tenant customer-managed keys with managed disk
- This feature doesn't support Ultra Disks or Azure Premium SSD v2 managed disks. - This feature isn't available in Microsoft Azure operated by 21Vianet or Government clouds. ## Create a disk encryption set
To use the Azure portal, sign in to the portal and follow these steps.
To use Azure PowerShell, install the latest Az module or the Az.Storage module. For more information about installing PowerShell, see [Install Azure PowerShell on Windows with PowerShellGet](/powershell/azure/install-azure-powershell). In the script below, `-FederatedClientId` should be the application ID (client ID) of the multi-tenant application. You'll also need to provide the subscription ID, resource group name, and identity name.
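A heavily abridged sketch of the kind of script involved follows. The parameter set, including `-FederatedClientId` on `New-AzDiskEncryptionSetConfig`, reflects one reading of the Az.Compute module and should be treated as an assumption, as are all names, IDs, and URLs:

```azurepowershell
# Build a disk encryption set configuration that references a key in the
# remote tenant's key vault through the multi-tenant application (assumed parameters)
$config = New-AzDiskEncryptionSetConfig -Location "westcentralus" `
    -KeyUrl "https://<KeyVaultName>.vault.azure.net/keys/<KeyName>/<KeyVersion>" `
    -IdentityType "UserAssigned" `
    -UserAssignedIdentity @{ "<UserAssignedIdentityResourceId>" = @{} } `
    -FederatedClientId "<MultiTenantAppId>" `
    -RotationToLatestKeyVersionEnabled $true

# Create the disk encryption set from the configuration
$config | New-AzDiskEncryptionSet -ResourceGroupName "<ResourceGroupName>" -Name "<DiskEncryptionSetName>"
```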
virtual-machines Dsc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/dsc-overview.md
ms.devlang: azurecli
# Introduction to the Azure Desired State Configuration extension handler
+> [!NOTE]
+> Before you enable the DSC extension, be aware that a newer version of DSC is now generally available, managed by a feature of Azure Policy named [Azure Machine Configuration](../../governance/machine-configuration/overview.md). The Azure Machine Configuration service combines features of the DSC extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Azure Machine Configuration also includes hybrid machine support through [Arc-enabled servers](../../azure-arc/servers/overview.md).
+ The Azure VM Extension for Azure virtual machines (VM) and the associated extensions are part of Microsoft Azure infrastructure services. Azure VM extensions are software components that extend VM functionality and simplify various VM management operations. The primary use for the Azure Desired State Configuration (DSC) extension for Windows PowerShell is to bootstrap a VM to the
virtual-machines Azure Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/azure-dns.md
# DNS Name Resolution options for Linux virtual machines in Azure
-> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
- **Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets Azure provides DNS name resolution by default for all virtual machines that are in a single virtual network. You can implement your own DNS name resolution solution by configuring your own DNS services on your virtual machines that Azure hosts. The following scenarios should help you choose the one that works for your situation.
The following table illustrates scenarios and corresponding name resolution solu
## Name resolution that Azure provides
-Along with resolution of public DNS names, Azure provides internal name resolution for virtual machines and role instances that are in the same virtual network. In virtual networks that are based on Azure Resource Manager, the DNS suffix is consistent across the virtual network; the FQDN is not needed. DNS names can be assigned to both network interface cards (NICs) and virtual machines. Although the name resolution that Azure provides does not require any configuration, it is not the appropriate choice for all deployment scenarios, as seen on the preceding table.
+Along with resolution of public DNS names, Azure provides internal name resolution for virtual machines and role instances that are in the same virtual network. In virtual networks that are based on Azure Resource Manager, the DNS suffix is consistent across the virtual network; the FQDN isn't needed. DNS names can be assigned to both network interface cards (NICs) and virtual machines. Although the name resolution that Azure provides does not require any configuration, it isn't the appropriate choice for all deployment scenarios, as seen in the preceding table.
### Features and considerations
Along with resolution of public DNS names, Azure provides internal name resoluti
**Considerations:**
-* The DNS suffix that Azure creates cannot be modified.
-* You cannot manually register your own records.
-* WINS and NetBIOS are not supported.
+* The DNS suffix that Azure creates can't be modified.
+* You can't manually register your own records.
+* WINS and NetBIOS aren't supported.
* Hostnames must be DNS-compatible.
- Names must use only 0-9, a-z, and '-', and they cannot start or end with a '-'. See RFC 3696 Section 2.
+ Names must use only 0-9, a-z, and '-', and they can't start or end with a '-'. See RFC 3696 Section 2.
* DNS query traffic is throttled for each virtual machine. Throttling shouldn't impact most applications. If request throttling is observed, ensure that client-side caching is enabled. For more information, see [Getting the most from name resolution that Azure provides](#getting-the-most-from-name-resolution-that-azure-provides). ### Getting the most from name resolution that Azure provides **Client-side caching:**
-Some DNS queries are not sent across the network. Client-side caching helps reduce latency and improve resilience to network inconsistencies by resolving recurring DNS queries from a local cache. DNS records contain a Time-To-Live (TTL), which enables the cache to store the record for as long as possible without impacting record freshness. As a result, client-side caching is suitable for most situations.
+Some DNS queries aren't sent across the network. Client-side caching helps reduce latency and improve resilience to network inconsistencies by resolving recurring DNS queries from a local cache. DNS records contain a Time-To-Live (TTL), which enables the cache to store the record for as long as possible without impacting record freshness. As a result, client-side caching is suitable for most situations.
-Some Linux distributions do not include caching by default. We recommend that you add a cache to each Linux virtual machine after you check that there isn't a local cache already.
+Some Linux distributions don't include caching by default. We recommend that you add a cache to each Linux virtual machine after you check that there isn't a local cache already.
Several different DNS caching packages, such as dnsmasq, are available. Here are the steps to install dnsmasq on the most common distributions:
sudo systemctl start dnsmasq.service
sudo netconfig update ```
-# [CentOS/RHEL](#tab/rhel)
+# [RHEL](#tab/rhel)
1. Install the dnsmasq package:
sudo cat /etc/resolv.conf
options timeout:1 attempts:5 ```
-The `/etc/resolv.conf` file is auto-generated and should not be edited. The specific steps that add the 'options' line vary by distribution:
+The `/etc/resolv.conf` file is auto-generated and shouldn't be edited. The specific steps that add the 'options' line vary by distribution:
**Ubuntu** (uses resolvconf)
The `/etc/resolv.conf` file is auto-generated and should not be edited. The spec
1. Add `timeout:1 attempts:5` to the `NETCONFIG_DNS_RESOLVER_OPTIONS=""` parameter in `/etc/sysconfig/network/config`. 2. Run `sudo netconfig update` to update.
-**CentOS by Rogue Wave Software (formerly OpenLogic)** (uses NetworkManager)
-
-1. Add `RES_OPTIONS="timeout:1 attempts:5"` to `/etc/sysconfig/network`.
-2. Run `systemctl restart NetworkManager` to update.
- ## Name resolution using your own DNS server Your name resolution needs may go beyond the features that Azure provides. For example, you might require DNS resolution between virtual networks. To cover this scenario, you can use your own DNS servers.
DNS forwarding also enables DNS resolution between virtual networks and enables
![DNS resolution between virtual networks](./media/azure-dns/inter-vnet-dns.png)
-When you use name resolution that Azure provides, the internal DNS suffix is provided to each virtual machine by using DHCP. When you use your own name resolution solution, this suffix is not supplied to virtual machines because the suffix interferes with other DNS architectures. To refer to machines by FQDN or to configure the suffix on your virtual machines, you can use PowerShell or the API to determine the suffix:
+When you use name resolution that Azure provides, the internal DNS suffix is provided to each virtual machine by using DHCP. When you use your own name resolution solution, this suffix isn't supplied to virtual machines because the suffix interferes with other DNS architectures. To refer to machines by FQDN or to configure the suffix on your virtual machines, you can use PowerShell or the API to determine the suffix:
* For virtual networks that are managed by Azure Resource Manager, the suffix is available via the [network interface card](/rest/api/virtualnetwork/networkinterfaces) resource. You can also run the `azure network public-ip show <resource group> <pip name>` command to display the details of your public IP, which includes the FQDN of the NIC. If forwarding queries to Azure doesn't suit your needs, you need to provide your own DNS solution. Your DNS solution needs to:
-* Provide appropriate hostname resolution, for example via [DDNS](../../virtual-network/virtual-networks-name-resolution-ddns.md). If you use DDNS, you might need to disable DNS record scavenging. DHCP leases of Azure are very long and scavenging may remove DNS records prematurely.
+* Provide appropriate hostname resolution, for example via [DDNS](../../virtual-network/virtual-networks-name-resolution-ddns.md). If you use DDNS, you might need to disable DNS record scavenging. DHCP leases of Azure are long and scavenging may remove DNS records prematurely.
* Provide appropriate recursive resolution to allow resolution of external domain names. * Be accessible (TCP and UDP on port 53) from the clients it serves and be able to access the Internet. * Be secured against access from the Internet to mitigate threats posed by external agents.
virtual-machines Azure Hybrid Benefit Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/azure-hybrid-benefit-linux.md
To enable Azure Hybrid Benefit when you create a virtual machine, use the follow
![Screenshot of the Azure portal that shows checkboxes selected for licensing.](./media/azure-hybrid-benefit/create-vm-ahb-checkbox.png) 1. Create a virtual machine by following the next set of instructions.
-1. On the **Configuration** pane, confirm that the option is enabled.
+1. On the **Operating System** pane, confirm that the option is enabled.
- ![Screenshot of the Azure Hybrid Benefit configuration pane after you create a virtual machine.](./media/azure-hybrid-benefit/create-configuration-blade.png)
+ ![Screenshot of the Azure Hybrid Benefit configuration pane after you create a virtual machine.](./media/azure-hybrid-benefit/azure-hybrid-benefit.png)
#### [Azure CLI](#tab/ahbNewCli)
To enable Azure Hybrid Benefit on an existing virtual machine:
1. Go to the [Azure portal](https://portal.azure.com/). 1. Open the virtual machine page on which you want to apply the conversion.
-1. Go to **Configuration** > **Licensing**. To enable the Azure Hybrid Benefit conversion, select **Yes**, and then select the confirmation checkbox.
+1. Go to **Operating System** > **Licensing**. To enable the Azure Hybrid Benefit conversion, select **Yes**, and then select the confirmation checkbox.
-![Screenshot of the Azure portal that shows the Licensing section of the configuration page for Azure Hybrid Benefit.](./media/azure-hybrid-benefit/create-configuration-blade.png)
+![Screenshot of the Azure portal that shows the Licensing section of the configuration page for Azure Hybrid Benefit.](./media/azure-hybrid-benefit/azure-hybrid-benefit.png)
#### [Azure CLI](#tab/ahbExistingCli)
virtual-machines Quick Create Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/quick-create-bicep.md
tags: azure-resource-manager, bicep
This quickstart shows you how to use a Bicep file to deploy an Ubuntu Linux virtual machine (VM) in Azure. ## Prerequisites
virtual-machines Quick Create Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/quick-create-template.md
This quickstart shows you how to use an Azure Resource Manager template (ARM template) to deploy an Ubuntu Linux virtual machine (VM) in Azure. If your environment meets the prerequisites and you're familiar with ARM templates, select the **Deploy to Azure** button. The template opens in the Azure portal.
virtual-machines Restore Point Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/restore-point-troubleshooting.md
This article provides troubleshooting steps that can help you resolve restore point errors related to communication with the VM agent and extension. ## Step-by-step guide to troubleshoot restore point failures
virtual-machines Copy Managed Disks To Same Or Different Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/scripts/copy-managed-disks-to-same-or-different-subscription.md
This article contains two scripts. The first script copies a managed disk that's using platform-managed keys to the same or a different subscription in the same region. The second script copies a managed disk that's using customer-managed keys to the same or a different subscription in the same region. Either copy only works when the subscriptions are part of the same Microsoft Entra tenant. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)] ## Sample script ### Disks with platform-managed keys
virtual-machines Copy Managed Disks Vhd To Storage Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/scripts/copy-managed-disks-vhd-to-storage-account.md
This script exports the underlying VHD of a managed disk to a storage account in the same or a different region. It first generates the SAS URI of the managed disk and then uses it to copy the VHD to a storage account. Use this script to copy managed disks to another region for regional expansion. If you want to publish the VHD file of a managed disk in Azure Marketplace, you can use this script to copy the VHD file to a storage account and then generate a SAS URI of the copied VHD to publish it in the Marketplace. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)] ## Sample script ### Run the script
virtual-machines Copy Snapshot To Same Or Different Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/scripts/copy-snapshot-to-same-or-different-subscription.md
This article contains two scripts. The first script copies a snapshot of a manag
> [!NOTE] > Both subscriptions must be located under the same tenant [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)] ## Sample script ### Disks with platform-managed keys
virtual-machines Copy Snapshot To Storage Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/scripts/copy-snapshot-to-storage-account.md
This script exports a managed snapshot to a storage account in a different region. It first generates the SAS URI of the snapshot and then uses it to copy the snapshot to a storage account in a different region. Use this script to maintain backups of your managed disks in a different region for disaster recovery. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)] ## Sample script ### Run the script
virtual-machines Create Managed Disk From Snapshot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/scripts/create-managed-disk-from-snapshot.md
This article contains two scripts for creating a managed disk from a snapshot. The first script is for a managed disk with platform-managed keys and the second script is for a managed disk with customer-managed keys. Use these scripts to restore a virtual machine from snapshots of OS and data disks. Create OS and data managed disks from respective snapshots and then create a new virtual machine by attaching managed disks. You can also restore data disks of an existing VM by attaching data disks created from snapshots. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)] ## Sample script ### Disks with platform-managed keys
virtual-machines Create Managed Disk From Vhd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/scripts/create-managed-disk-from-vhd.md
This script creates a managed disk from a VHD file in a storage account in the same subscription. Use this script to import a specialized (not generalized/sysprepped) VHD to a managed OS disk to create a virtual machine. Or, use it to import a data VHD to a managed data disk. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)] ## Sample script ### Run the script
virtual-machines Create Vm From Managed Os Disks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/scripts/create-vm-from-managed-os-disks.md
This script creates a virtual machine by attaching an existing managed disk as O
* Create a VM from an existing managed disk that was created from a specialized VHD file * Create a VM from an existing managed OS disk that was created from a snapshot [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)] ## Sample script ### Run the script
virtual-machines Create Vm From Snapshot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/scripts/create-vm-from-snapshot.md
This script creates a virtual machine from a snapshot of an OS disk. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)] ## Sample script ### Run the script
virtual-machines Virtual Machines Powershell Sample Copy Managed Disks Vhd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/scripts/virtual-machines-powershell-sample-copy-managed-disks-vhd.md
This script exports the VHD of a managed disk to a storage account in different
[!INCLUDE [sample-powershell-install](../../../includes/sample-powershell-install.md)]
virtual-machines Virtual Machines Powershell Sample Copy Snapshot To Same Or Different Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/scripts/virtual-machines-powershell-sample-copy-snapshot-to-same-or-different-subscription.md
This script copies a snapshot of a managed disk to same or different subscriptio
[!INCLUDE [sample-powershell-install](../../../includes/sample-powershell-install.md)]
virtual-machines Virtual Machines Powershell Sample Copy Snapshot To Storage Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/scripts/virtual-machines-powershell-sample-copy-snapshot-to-storage-account.md
This script exports a managed snapshot to a storage account in different region.
[!INCLUDE [sample-powershell-install](../../../includes/sample-powershell-install.md)]
virtual-machines Virtual Machines Powershell Sample Create Managed Disk From Snapshot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/scripts/virtual-machines-powershell-sample-create-managed-disk-from-snapshot.md
This script creates a managed disk from a snapshot. Use it to restore a virtual machine from snapshots of OS and data disks. Create OS and data managed disks from respective snapshots and then create a new virtual machine by attaching managed disks. You can also restore data disks of an existing VM by attaching data disks created from snapshots.
virtual-machines Virtual Machines Powershell Sample Create Managed Disk From Vhd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/scripts/virtual-machines-powershell-sample-create-managed-disk-from-vhd.md
This script creates a managed disk from a VHD file in a storage account in same
Don't create multiple identical managed disks from a VHD file in a short amount of time. To create a managed disk from a VHD file, a blob snapshot of the VHD file is created first and then used to create the managed disk. Only one blob snapshot can be created per minute, which causes disk creation failures due to throttling. To avoid this throttling, create a [managed snapshot from the VHD file](virtual-machines-powershell-sample-create-snapshot-from-vhd.md?toc=%2fpowershell%2fmodule%2ftoc.json) and then use the managed snapshot to create multiple managed disks in a short amount of time.
virtual-machines Virtual Machines Powershell Sample Create Snapshot From Vhd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/scripts/virtual-machines-powershell-sample-create-snapshot-from-vhd.md
This script creates a snapshot from a VHD file in a storage account in the same or a different subscription. Use this script to import a specialized (not generalized/sysprepped) VHD to a snapshot and then use the snapshot to create multiple identical managed disks in a short amount of time. Also, use it to import a data VHD to a snapshot and then use the snapshot to create multiple managed disks in a short amount of time.
virtual-machines Trusted Launch Existing Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/trusted-launch-existing-vm.md
Make sure that you've installed the latest [Azure PowerShell](/powershell/azure/
This section steps through using an ARM template to enable Trusted launch on existing Azure Generation 2 VM. 1. Review the template.
virtual-machines Virtual Machine Scale Sets Maintenance Control Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/virtual-machine-scale-sets-maintenance-control-template.md
This article explains how you can use an Azure Resource Manager (ARM) template t
- Create the configuration - Assign the configuration to a virtual machine ## Create the configuration
virtual-machines Virtual Machines Create Restore Points Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/virtual-machines-create-restore-points-powershell.md
# Create virtual machine restore points using PowerShell You can create Virtual Machine restore points using PowerShell scripts. The [Azure PowerShell Az](/powershell/azure/new-azureps-module-az) module is used to create and manage Azure resources from the command line or in scripts.
virtual-machines Disk Encryption Cli Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/disk-encryption-cli-quickstart.md
The Azure CLI is used to create and manage Azure resources from the command line or in scripts. This quickstart shows you how to use the Azure CLI to create and encrypt a Windows Server 2016 virtual machine (VM). [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
virtual-machines Image Builder Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/image-builder-powershell.md
Some of the steps require cmdlets from the [Az.ImageBuilder](https://www.powersh
Install-Module -Name Az.ImageBuilder ``` If you have multiple Azure subscriptions, choose the appropriate subscription in which the resources should be billed. Select a specific subscription by using the
virtual-machines Quick Create Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/quick-create-bicep.md
This quickstart shows you how to use a Bicep file to deploy a Windows virtual machine (VM) in Azure. ## Prerequisites
virtual-machines Quick Create Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/quick-create-template.md
This quickstart shows you how to use an Azure Resource Manager template to deploy a Windows virtual machine (VM) in Azure. If your environment meets the prerequisites and you're familiar with using templates, select the **Deploy to Azure** button. The template opens in the Azure portal.
virtual-machines Tutorial Manage Data Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/tutorial-manage-data-disk.md
This tutorial covers deployment and management of VM disks. In this tutorial, yo
You must have an Azure account with an active subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. ## Create a VM
virtual-machines Oracle Database Quick Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/oracle/oracle-database-quick-create.md
This article describes how to use the Azure CLI to deploy an Azure virtual machi
## Prerequisites
-- [!INCLUDE [quickstarts-free-trial-note](../../../../includes/quickstarts-free-trial-note.md)]
+- [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
- Azure Cloud Shell or the Azure CLI.
virtual-machines Weblogic Server Azure Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/oracle/weblogic-server-azure-virtual-machine.md
If you're interested in providing feedback or working closely on your migration
## Prerequisites
-- [!INCLUDE [quickstarts-free-trial-note](../../../../includes/quickstarts-free-trial-note.md)]
+- [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
## Deploy WebLogic Server with Administration Server on a VM
virtual-machines Jboss Eap Azure Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/redhat/jboss-eap-azure-vm.md
+
+ Title: "Quickstart: Deploy a JBoss EAP cluster on Azure Virtual Machines (VMs)"
+description: Shows you how to quickly stand up a JBoss EAP cluster on Azure Virtual Machines.
+++ Last updated : 06/19/2024+++
+# Quickstart: Deploy a JBoss EAP cluster on Azure Virtual Machines (VMs)
+
+This article shows you how to quickly deploy a JBoss Enterprise Application Platform (EAP) cluster on Azure Virtual Machines (VMs) using the Azure portal.
+
+This article uses the Azure Marketplace offer for JBoss EAP Cluster to accelerate your journey to Azure VMs. The offer automatically provisions a number of resources including Azure Red Hat Enterprise Linux (RHEL) VMs, JBoss EAP instances on each VM, Red Hat build of OpenJDK on each VM, a JBoss EAP management console, and optionally an Azure Application Gateway instance. To see the offer, visit the solution [JBoss EAP Cluster on RHEL VMs](https://aka.ms/eap-vm-cluster-portal) using the Azure portal.
+
+If you prefer manual step-by-step guidance for installing Red Hat JBoss EAP Cluster on Azure VMs that doesn't use the automation enabled by the Azure Marketplace offer, see [Tutorial: Install Red Hat JBoss EAP on Azure Virtual Machines manually](/azure/developer/java/migration/migrate-jboss-eap-to-azure-vm-manually).
+
+If you're interested in providing feedback or working closely on your migration scenarios with the engineering team developing JBoss EAP on Azure solutions, fill out this short [survey on JBoss EAP migration](https://aka.ms/jboss-on-azure-survey) and include your contact information. The team of program managers, architects, and engineers will promptly get in touch with you to initiate close collaboration.
+
+## Prerequisites
+
+- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)]
+- Ensure the Azure identity you use to sign in has either the [Contributor](/azure/role-based-access-control/built-in-roles#contributor) role or the [Owner](/azure/role-based-access-control/built-in-roles#owner) role in the current subscription. For an overview of Azure roles, see [What is Azure role-based access control (Azure RBAC)?](/azure/role-based-access-control/overview)
+- Ensure you have the necessary Red Hat licenses. You need to have a Red Hat Account with Red Hat Subscription Management (RHSM) entitlement for JBoss EAP. This entitlement lets the Azure portal install the Red Hat tested and certified JBoss EAP version.
+ > [!NOTE]
+ > If you don't have an EAP entitlement, you can sign up for a free developer subscription through the [Red Hat Developer Subscription for Individuals](https://developers.redhat.com/register). Save aside the account details, which you use as the *RHSM username* and *RHSM password* in the next section.
+- After you're registered, you can find the necessary credentials (*Pool IDs*) by using the following steps. You also use the *Pool IDs* as the *RHSM Pool ID with EAP entitlement* later in this article.
+ 1. Sign in to your [Red Hat account](https://sso.redhat.com).
+ 1. The first time you sign in, you're asked to complete your profile. Make sure you select **Personal** for **Account Type**, as shown in the following screenshot.
+
+ :::image type="content" source="media/jboss-eap-azure-vm/update-account-type-as-personal.png" alt-text="Screenshot of the Red Hat profile Update Your Account page." lightbox="media/jboss-eap-azure-vm/update-account-type-as-personal.png":::
+
+ 1. In the tab where you're signed in, open [Red Hat Developer Subscription for Individuals](https://aka.ms/red-hat-individual-dev-sub). This link takes you to all of the subscriptions in your account for the appropriate SKU.
+ 1. Select the first subscription from the **All purchased Subscriptions** table.
+ 1. Copy and save aside the value following **Master Pools** from **Pool IDs**.
+- A Java Development Kit (JDK), version 11. In this guide, we recommend the [Red Hat Build of OpenJDK](https://developers.redhat.com/products/openjdk/download). Ensure that your `JAVA_HOME` environment variable is set correctly in the shells in which you run the commands; see the sanity check after this list.
+- [Git](https://git-scm.com/downloads). Use `git --version` to test whether `git` works. This tutorial was tested with version 2.34.1.
+- [Maven](https://maven.apache.org/download.cgi). Use `mvn -version` to test whether `mvn` works. This tutorial was tested with version 3.8.6.
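+
+To confirm the JDK prerequisite, check that `JAVA_HOME` is set and that the launcher it points to reports version 11. This is only a sanity check; the exact output depends on your JDK build:
+
+```bash
+# JAVA_HOME should print a non-empty path to the JDK installation
+echo "$JAVA_HOME"
+
+# Both commands should report the same Java 11 version
+"$JAVA_HOME/bin/java" -version
+java -version
+```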
+
+> [!NOTE]
+> The Azure Marketplace offer you're going to use in this article includes support for Red Hat Satellite for license management. Using Red Hat Satellite is beyond the scope of this quickstart. For an overview on Red Hat Satellite, see [Red Hat Satellite](https://aka.ms/red-hat-satellite). To learn more about moving your Red Hat JBoss EAP and Red Hat Enterprise Linux subscriptions to Azure, see [Red Hat Cloud Access program](https://aka.ms/red-hat-cloud-access-overview).
+
+## Set up an Azure Database for PostgreSQL flexible server
+
+The steps in this section direct you to deploy an Azure Database for PostgreSQL flexible server, which you use for configuring the database connection while setting up a JBoss EAP cluster in the next section.
+
+First, use the following command to set up some environment variables.
+
+```bash
+export RG_NAME=<db-resource-group-name>
+export SERVER_NAME=<database-server-name>
+export ADMIN_PASSWORD=<postgresql-admin-password>
+```
+
+Replace the placeholders with the following values, which are used throughout the article. A filled-in example appears after this list:
+
+- `<db-resource-group-name>`: The name of the resource group to use for the PostgreSQL flexible server - for example, `ejb040323postgresrg`.
+- `<database-server-name>`: The name of your PostgreSQL server, which should be unique across Azure - for example, `ejb040323postgresqlserver`.
+- `<postgresql-admin-password>`: The password of your PostgreSQL server. That password must be at least eight characters and at most 128 characters. The characters should be from three of the following categories: English uppercase letters, English lowercase letters, numbers (0-9), and nonalphanumeric characters (!, $, #, %, and so on).
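+
+For example, a filled-in version might look like the following sketch. The names and password are illustrative only; substitute your own values:
+
+```bash
+export RG_NAME=ejb040323postgresrg
+export SERVER_NAME=ejb040323postgresqlserver
+export ADMIN_PASSWORD='SecurePa55word!'
+```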
+
+Next, use the following steps to create an Azure Database for PostgreSQL flexible server:
+
+1. Use the following command to create an Azure Database for PostgreSQL flexible server:
+
+ ```azurecli
+ az postgres flexible-server create \
+ --resource-group ${RG_NAME} \
+ --name ${SERVER_NAME} \
+ --database-name testdb \
+ --public-access 0.0.0.0 \
+ --admin-user testuser \
+ --admin-password ${ADMIN_PASSWORD} \
+ --yes
+ ```
+
+1. Use the following command to get the host of the PostgreSQL server:
+
+ ```azurecli
+ export DB_HOST=$(az postgres flexible-server show \
+ --resource-group ${RG_NAME} \
+ --name ${SERVER_NAME} \
+ --query "fullyQualifiedDomainName" \
+ --output tsv)
+ ```
+
+1. Use the following command to get the Java Database Connectivity (JDBC) connection URL of the PostgreSQL server:
+
+ ```azurecli
+ echo jdbc:postgresql://${DB_HOST}:5432/testdb
+ ```
+
+ Note down the output, which you use as the data source connection string of the PostgreSQL server later in this article.
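+
+Optionally, verify that the server accepts connections before you continue. The following sketch assumes the `psql` client is installed locally and that the server firewall allows your client IP address:
+
+```bash
+# List the server's databases over SSL; testdb should appear in the output
+psql "host=${DB_HOST} port=5432 dbname=testdb user=testuser password=${ADMIN_PASSWORD} sslmode=require" -c '\l'
+```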
+
+## Deploy a JBoss EAP cluster on Azure VMs
+
+The steps in this section direct you to deploy a JBoss EAP cluster on Azure VMs.
+
+Use the following steps to find the JBoss EAP Cluster on Azure VMs offer:
+
+1. Sign in to the Azure portal by visiting https://aka.ms/publicportal.
+1. In the search bar at the top of the Azure portal, enter *JBoss EAP*. In the search results, in the **Marketplace** section, select **JBoss EAP Cluster on VMs**.
+
+ :::image type="content" source="media/jboss-eap-azure-vm/marketplace-search-results.png" alt-text="Screenshot of the Azure portal showing JBoss EAP Server on Azure VM in the search results." lightbox="media/jboss-eap-azure-vm/marketplace-search-results.png":::
+
+1. In the drop-down menu, ensure **PAYG** is selected.
+
+Alternatively, you can go directly to the [JBoss EAP Cluster on Azure VMs](https://aka.ms/eap-vm-cluster-portal) offer. In this case, the correct plan is already selected for you.
+
+In either case, this offer deploys a JBoss EAP cluster on Azure VMs by providing your Red Hat subscription at deployment time. The offer runs the cluster on Red Hat Enterprise Linux using a pay-as-you-go payment configuration for the base VMs.
+
+The following steps show you how to fill out the **Basics** pane shown in the following screenshot.
++
+1. On the offer page, select **Create**.
+1. On the **Basics** pane, ensure that the value shown in the **Subscription** field is the same one that has the roles listed in the prerequisites section.
+1. You must deploy the offer in an empty resource group. In the **Resource group** field, select **Create new** and fill in a value for the resource group. Because resource groups must be unique within a subscription, pick a unique name. An easy way to have unique names is to use a combination of your initials, today's date, and some identifier. For example, *ejb040323jbosseapcluster*.
+1. Under **Instance details**, select the region for the deployment.
+1. Leave the default VM size for **Virtual machine size**.
+1. Leave the default option **OpenJDK 17** for **JDK version**.
+1. Leave the default value **jbossuser** for **Username**.
+1. Leave the default option **Password** for **Authentication type**.
+1. Provide a password for **Password**. Use the same value for **Confirm password**.
+1. Use *3* for **Number of virtual machines to create**.
+1. Under **Optional Basic Configuration**, leave the default option **Yes** for **Accept defaults for optional configuration**.
+1. Scroll to the bottom of the **Basics** pane and notice the helpful links for **Report issues, get help, and share feedback**.
+1. Select **Next: JBoss EAP Settings**.
+
+The following steps show you how to fill out the **JBoss EAP Settings** pane shown in the following screenshot.
++
+1. Leave the default option **Managed domain** for **Use managed domain or standalone hosts to form a cluster**.
+1. Leave the default value **jbossadmin** for **JBoss EAP Admin username**.
+1. Provide a JBoss EAP password for **JBoss EAP password**. Use the same value for **Confirm password**. Save aside the value for later use.
+1. Leave the default option **No** for **Connect to an existing Red Hat Satellite Server?**.
+1. Provide your RHSM username for **RHSM username**. The value is the same one that was prepared in the prerequisites section.
+1. Provide your RHSM password for **RHSM password**. Use the same value for **Confirm password**. The value is the same one that was prepared in the prerequisites section.
+1. Provide your RHSM pool ID for **RHSM Pool ID with EAP entitlement**. The value is the same one that was prepared in the prerequisites section.
+1. Select **Next: Azure Application Gateway**.
+
+The following steps show you how to fill out the **Azure Application Gateway** pane shown in the following screenshot.
++
+1. Select **Yes** for **Connect to Azure Application Gateway?**.
+1. Select **Next: Networking**.
+
+ This pane enables you to customize the virtual network and subnet into which the JBoss EAP cluster deploys. For information about virtual networks, see [Create, change, or delete a virtual network](/azure/virtual-network/manage-virtual-network). Accept the defaults on this pane.
+
+1. Select **Next: Database**.
+
+The following steps show you how to fill out the **Database** pane shown in the following screenshot, and start the deployment.
++
+1. Select **Yes** for **Connect to database?**.
+1. Select **PostgreSQL** for **Choose database type**.
+1. Fill in *java:jboss/datasources/JavaEECafeDB* for **JNDI name**.
+1. Provide the JDBC connection URL of the PostgreSQL server, which you saved before, for **Data source connection string (jdbc:postgresql://\<host>:\<port>/\<database>)**.
+1. Fill in *testuser* for **Database username**.
+1. Provide the value for the placeholder `<postgresql-admin-password>`, which you specified before, for **Database password**. Use the same value for **Confirm password**.
+1. Select **Review + create**. Ensure that the green **Validation Passed** message appears at the top. If the message doesn't appear, fix any validation problems, then select **Review + create** again.
+1. Select **Create**.
+1. Track the progress of the deployment on the **Deployment is in progress** page.
+
+Depending on network conditions and other activity in your selected region, the deployment may take up to 35 minutes to complete. After that, you should see the text **Your deployment is complete** displayed on the deployment page.
+
+## Verify the functionality of the deployment
+
+Use the following steps to verify the functionality of the deployment for a JBoss EAP cluster on Azure VMs from the **Red Hat JBoss Enterprise Application Platform** management console:
+
+1. On the deployment page, select **Outputs**.
+1. Select the copy icon next to **adminConsole**.
+
+ :::image type="content" source="media/jboss-eap-azure-vm/rg-deployments-outputs.png" alt-text="Screenshot of the Azure portal showing the deployment outputs with the adminConsole URL highlighted." lightbox="media/jboss-eap-azure-vm/rg-deployments-outputs.png":::
+
+1. Paste the URL into an internet-connected web browser and press <kbd>Enter</kbd>. You should see the familiar **Red Hat JBoss Enterprise Application Platform** management console sign-in screen, as shown in the following screenshot.
+
+ :::image type="content" source="media/jboss-eap-azure-vm/jboss-eap-console-login.png" alt-text="Screenshot of the JBoss EAP management console sign-in screen." lightbox="media/jboss-eap-azure-vm/jboss-eap-console-login.png":::
+
+1. Fill in *jbossadmin* for **JBoss EAP Admin username**. Provide the value for **JBoss EAP password** that you specified before for **Password**, then select **Sign in**.
+1. You should see the familiar **Red Hat JBoss Enterprise Application Platform** management console welcome page as shown in the following screenshot.
+
+ :::image type="content" source="media/jboss-eap-azure-vm/jboss-eap-console-welcome.png" alt-text="Screenshot of JBoss EAP management console welcome page." lightbox="media/jboss-eap-azure-vm/jboss-eap-console-welcome.png":::
+
+1. Select the **Runtime** tab. In the navigation pane, select **Topology**. You should see that the cluster contains one domain controller **master** and two worker nodes, as shown in the following screenshot:
+
+ :::image type="content" source="media/jboss-eap-azure-vm/jboss-eap-console-runtime-topology.png" alt-text="Screenshot of the JBoss EAP management console Runtime topology." lightbox="media/jboss-eap-azure-vm/jboss-eap-console-runtime-topology.png":::
+
+1. Select the **Configuration** tab. In the navigation pane, select **Profiles** > **ha** > **Datasources & Drivers** > **Datasources**. You should see that the datasource **dataSource-postgresql** is listed, as shown in the following screenshot:
+
+ :::image type="content" source="media/jboss-eap-azure-vm/jboss-eap-console-configuration-datasources.png" alt-text="Screenshot of the JBoss EAP management console Configuration tab with Datasources selected." lightbox="media/jboss-eap-azure-vm/jboss-eap-console-configuration-datasources.png":::
+
+Leave the management console open. You use it to deploy a sample app to the JBoss EAP cluster in the next section.
+
+## Deploy the app to the JBoss EAP cluster
+
+Use the following steps to deploy the Java EE Cafe sample application to the Red Hat JBoss EAP cluster:
+
+1. Use the following steps to build the Java EE Cafe sample. These steps assume that you have a local environment with Git and Maven installed:
+
+ 1. Use the following command to clone the source code from GitHub and check out the tag corresponding to this version of the article:
+
+ ```bash
+ git clone https://github.com/Azure/rhel-jboss-templates.git --branch 20230418 --single-branch
+ ```
+
+ If you see an error message with the text `You are in 'detached HEAD' state`, you can safely ignore it.
+
+ 1. Use the following command to build the source code:
+
+ ```bash
+ mvn clean install --file rhel-jboss-templates/eap-coffee-app/pom.xml
+ ```
+
+ This command creates the file *rhel-jboss-templates/eap-coffee-app/target/javaee-cafe.war*. You'll upload this file in the next step.
+
+1. Use the following steps in the **Red Hat JBoss Enterprise Application Platform** management console to upload the *javaee-cafe.war* to the **Content Repository**.
+
+ 1. From the **Deployments** tab of the Red Hat JBoss EAP management console, select **Content Repository** in the navigation panel.
+ 1. Select **Add** and then select **Upload Content**.
+
+ :::image type="content" source="media/jboss-eap-azure-vm/jboss-eap-console-upload-content.png" alt-text="Screenshot of the JBoss EAP management console Deployments tab with Upload Content menu item highlighted." lightbox="media/jboss-eap-azure-vm/jboss-eap-console-upload-content.png":::
+
+ 1. Use the browser file chooser to select the *javaee-cafe.war* file.
+ 1. Select **Next**.
+ 1. Accept the defaults on the next screen and then select **Finish**.
+ 1. Select **View content**.
+
+1. Use the following steps to deploy an application to the `main-server-group`:
+
+ 1. From **Content Repository**, select *javaee-cafe.war*.
+ 1. Open the drop-down menu and select **Deploy**.
+ 1. Select **main-server-group** as the server group for deploying *javaee-cafe.war*.
+ 1. Select **Deploy** to start the deployment. You should see a notice similar to the following screenshot:
+
+ :::image type="content" source="media/jboss-eap-azure-vm/jboss-eap-console-app-successfully-deployed.png" alt-text="Screenshot of the notice of successful deployment." lightbox="media/jboss-eap-azure-vm/jboss-eap-console-app-successfully-deployed.png":::
+
+You're now finished deploying the Java EE application. Use the following steps to access the application and validate all the settings:
+
+1. Use the following command to get the public IP address of the Azure Application Gateway. Replace the placeholder `<resource-group-name>` with the name of the resource group where the JBoss EAP cluster is deployed.
+
+ ```azurecli
+ az network public-ip show \
+ --resource-group <resource-group-name> \
+ --name gwip \
+ --query '[ipAddress]' \
+ --output tsv
+ ```
+
+1. Copy the output, which is the public IP address of the Azure Application Gateway deployed.
+1. Open an internet-connected web browser.
+1. Navigate to the application with the URL `http://<gateway-public-ip-address>/javaee-cafe`. Replace the placeholder `<gateway-public-ip-address>` with the public IP address of the Azure Application Gateway you copied previously.
+1. Try to add and remove coffees.
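+
+Optionally, you can confirm that the application responds before opening it in a browser. The following sketch assumes the `curl` command is available; replace the placeholder as in the previous step:
+
+```bash
+# Expect HTTP status 200 from the sample app behind Application Gateway
+curl -s -o /dev/null -w "%{http_code}\n" http://<gateway-public-ip-address>/javaee-cafe/
+```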
+
+## Clean up resources
+
+To avoid Azure charges, you should clean up unnecessary resources. When you no longer need the JBoss EAP cluster deployed on Azure VMs, unregister the JBoss EAP servers and remove the Azure resources.
+
+Run the following command to unregister the JBoss EAP servers and VMs from Red Hat subscription management. Replace the placeholder `<resource-group-name>` with the name of the resource group where the JBoss EAP cluster is deployed.
+
+```azurecli
+# Unregister domain controller
+az vm run-command invoke \
+ --resource-group <resource-group-name> \
+ --name jbosseapVm-adminVM \
+ --command-id RunShellScript \
+ --scripts "sudo subscription-manager unregister"
+
+# Unregister host controllers
+az vm run-command invoke \
+ --resource-group <resource-group-name> \
+ --name jbosseapVm1 \
+ --command-id RunShellScript \
+ --scripts "sudo subscription-manager unregister"
+az vm run-command invoke \
+ --resource-group <resource-group-name> \
+ --name jbosseapVm2 \
+ --command-id RunShellScript \
+ --scripts "sudo subscription-manager unregister"
+```
+
+Run the following commands to remove the two resource groups where the JBoss EAP cluster and the Azure Database for PostgreSQL flexible server are deployed. Replace the placeholder `<resource-group-name>` with the name of the resource group where the JBoss EAP cluster is deployed. Ensure the environment variable `$RG_NAME` is set with the name of the resource group where the PostgreSQL flexible server is deployed.
+
+```azurecli
+az group delete --name <resource-group-name> --yes --no-wait
+az group delete --name $RG_NAME --yes --no-wait
+```
+
+## Next steps
+
+Learn more about your options for deploying JBoss EAP on Azure:
+
+> [!div class="nextstepaction"]
+> [Explore JBoss EAP on Azure](/azure/developer/java/ee/jboss-on-azure)
virtual-machines Jboss Eap Single Server Azure Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/redhat/jboss-eap-single-server-azure-vm.md
- Title: "Quickstart: Deploy JBoss EAP on an Azure Virtual Machine (VM)"
-description: Shows you how to quickly stand up JBoss EAP Server on an Azure Virtual Machine.
--- Previously updated : 05/29/2024----
-# Quickstart: Deploy JBoss EAP on an Azure Virtual Machine (VM)
-
-This article shows you how to quickly deploy JBoss Enterprise Application Platform (EAP) on an Azure Virtual Machine (VM) using the Azure portal.
-
-This article uses the Azure Marketplace offer for JBoss EAP standalone server to accelerate your journey to Azure Virtual Machine. This solution automates most boilerplate steps to provision a single JBoss EAP instance on an Azure Virtual Machine. Once initial provisioning is complete, you are completely free to customize deployments further. The solution is jointly developed by Red Hat and Microsoft. If you prefer manual step-by-step guidance for installing Red Hat JBoss EAP Cluster on Azure VMs that doesn't utilize the automation enabled by the offer, see [Tutorial: Install Red Hat JBoss EAP on Azure Virtual Machines manually](/azure/developer/java/migration/migrate-jboss-eap-to-azure-vm-manually?toc=/azure/virtual-machines/workloads/redhat/toc.json&bc=/azure/virtual-machines/workloads/redhat/breadcrumb/toc.json)).
-
-If you're interested in providing feedback or working closely on your migration scenarios with the engineering team developing JBoss EAP on Azure solutions, fill out this short [survey on JBoss EAP migration](https://aka.ms/jboss-on-azure-survey) and include your contact information. The team of program managers, architects, and engineers will promptly get in touch with you to initiate close collaboration.
-
-## Prerequisites
-- [!INCLUDE [quickstarts-free-trial-note](../../../../includes/quickstarts-free-trial-note.md)]
-- Install [Azure CLI](/cli/azure/install-azure-cli).
-- Install a Java Standard Edition (SE) implementation version 8 or later - for example, [Microsoft build of OpenJDK](/java/openjdk).
-- Install [Maven](https://maven.apache.org/download.cgi), version 3.5.0 or higher.
-- Ensure the Azure identity you use to sign in has either the [Contributor](/azure/role-based-access-control/built-in-roles#contributor) role or the [Owner](/azure/role-based-access-control/built-in-roles#owner) role in the current subscription. For an overview of Azure roles, see [What is Azure role-based access control (Azure RBAC)?](/azure/role-based-access-control/overview)
-
-## Deploy JBoss EAP Server on Azure VM
-
-The steps in this section direct you to deploy JBoss EAP Server on Azure VMs.
--
-The following steps show you how to find the JBoss EAP Server on Azure VM offer and fill out the **Basics** pane:
-
-1. In the search bar at the top of the Azure portal, enter *JBoss EAP*. In the search results, in the **Marketplace** section, select **JBoss EAP standalone on RHEL VM**. In the drop-down menu, ensure that **PAYG** is selected.
-
- :::image type="content" source="media/jboss-eap-single-server-azure-vm/marketplace-search-results.png" alt-text="Screenshot of Azure portal showing JBoss EAP Server on Azure VM in search results." lightbox="media/jboss-eap-single-server-azure-vm/marketplace-search-results.png":::
-
- Alternatively, you can go directly to the [JBoss EAP standalone on RHEL VM](https://aka.ms/eap-vm-single-portal) offer. In this case, the correct plan is already selected for you.
-
- In either case, this offer deploys JBoss EAP by providing your Red Hat subscription at deployment time, and runs it on Red Hat Enterprise Linux using a pay-as-you-go payment configuration for the base VM.
-
-1. On the offer page, select **Create**.
-1. On the **Basics** pane, ensure the value shown in the **Subscription** field is the same one that has the roles listed in the prerequisites section.
-1. You must deploy the offer in an empty resource group. In the **Resource group** field, select **Create new** and fill in a value for the resource group. Because resource groups must be unique within a subscription, pick a unique name. An easy way to have unique names is to use a combination of your initials, today's date, and some identifier. For example, `ejb0823jbosseapvm`.
-1. Under **Instance details**, select the region for the deployment.
-1. Leave the default VM size for **Virtual machine size**.
-1. Leave the default option **OpenJDK 8** for **JDK version**.
-1. Leave the default value **jbossuser** for **Username**.
-1. Leave the default option **Password** for **Authentication type**.
-1. Fill in password for **Password**. Use the same value for **Confirm password**.
-1. Under **Optional Basic Configuration**, leave the default option **Yes** for **Accept defaults for optional configuration**.
-1. Scroll to the bottom of the **Basics** pane and notice the helpful links for **Report issues, get help, and share feedback**.
-1. Select **Next: JBoss EAP Settings**.
-
-The following steps show you how to fill out **JBoss EAP Settings** pane and start the deployment.
-
-1. Leave the default value **jbossadmin** for **JBoss EAP Admin username**.
-1. Fill in JBoss EAP password for **JBoss EAP password**. Use the same value for **Confirm password**. Write down the value for later use.
-1. Leave the default option **No** for **Connect to an existing Red Hat Satellite Server?**.
-1. Select **Review + create**. Ensure the green **Validation Passed** message appears at the top. If the message doesn't appear, fix any validation problems, then select **Review + create** again.
-1. Select **Create**.
-1. Track the progress of the deployment on the **Deployment is in progress** page.
-
-Depending on network conditions and other activity in your selected region, the deployment might take up to 6 minutes to complete. After that, you should see text **Your deployment is complete** displayed on the deployment page.
-
-## Optional: Verify the functionality of the deployment
-
-1. Open the resource group you created in the Azure portal.
-1. Select the VM resource named `jbosieapVm`.
-1. In the **Overview** pane, note the **Public IP address** assigned to the network interface.
-1. Copy the public IP address.
-1. Paste the public IP address in an Internet-connected web browser, append `:9990`, and press **Enter**. You should see the familiar **Red Hat JBoss Enterprise Application Platform** management console sign-in screen, as shown in the following screenshot:
-
- :::image type="content" source="media/jboss-eap-single-server-azure-vm/jboss-eap-console-login.png" alt-text="Screenshot of JBoss EAP management console sign-in screen." lightbox="media/jboss-eap-single-server-azure-vm/jboss-eap-console-login.png":::
-
-1. Fill in the value of **JBoss EAP Admin username** which is **jbossadmin**. Fill in the value of **JBoss EAP password** you specified before for **Password**. Select **Sign in**.
-1. You should see the familiar **Red Hat JBoss Enterprise Application Platform** management console welcome page as shown in the following screenshot.
-
- :::image type="content" source="media/jboss-eap-single-server-azure-vm/jboss-eap-console-welcome.png" alt-text="Screenshot of JBoss EAP management console welcome page." lightbox="media/jboss-eap-single-server-azure-vm/jboss-eap-console-welcome.png":::
-
-> [!NOTE]
-> You can also follow the guide [Connect to environments privately using Azure Bastion host and jumpboxes](/azure/cloud-adoption-framework/scenarios/cloud-scale-analytics/architectures/connect-to-environments-privately) and visit the **Red Hat JBoss Enterprise Application Platform** management console with the URL `http://<private-ip-address-of-vm>:9990`.
--
-## Optional: Deploy the app to the JBoss EAP Server
-
-The following steps show you how to create a "Hello World" application and then deploy it on JBoss EAP:
-
-1. Use the following steps to create a Maven project:
-
- 1. Open a terminal or command prompt.
-
- 1. Navigate to the directory where you want to create your project.
-
- 1. Run the following Maven command to create a new Java web application. Be sure to replace `<package-name>` with your desired package name and `<project-name>` with your project name.
-
- ```bash
- mvn archetype:generate -DgroupId=<package-name> -DartifactId=<project-name> -DarchetypeArtifactId=maven-archetype-webapp -DinteractiveMode=false
- ```
-
-1. Use the following steps to update the project structure:
-
- 1. Navigate to the newly created project directory - for example, *helloworld*.
-
- The project directory has the following structure:
-
- ```
- helloworld
- ├── src
- │   └── main
- │       ├── java
- │       └── webapp
- │           └── WEB-INF
- │               └── web.xml
- └── pom.xml
- ```
-
-1. Use the following steps to add a servlet class:
-
- 1. In the *src/main/java* directory, create a new package - for example, `com.example`.
-
- 1. Inside this package, create a new Java class named *HelloWorldServlet.java* with the following content:
-
- ```java
- package com.example;
-
- import java.io.IOException;
- import javax.servlet.ServletException;
- import javax.servlet.annotation.WebServlet;
- import javax.servlet.http.HttpServlet;
- import javax.servlet.http.HttpServletRequest;
- import javax.servlet.http.HttpServletResponse;
-
- @WebServlet("/hello")
- public class HelloWorldServlet extends HttpServlet {
- protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
- response.getWriter().print("Hello World!");
- }
- }
- ```
-
-1. Use the following steps to update the *pom.xml* file:
-
- 1. Add dependencies for Java EE APIs to your *pom.xml* file to ensure that you have the necessary libraries to compile the servlet:
-
- ```xml
- <dependencies>
- <dependency>
- <groupId>javax.servlet</groupId>
- <artifactId>javax.servlet-api</artifactId>
- <version>4.0.1</version>
- <scope>provided</scope>
- </dependency>
- </dependencies>
- ```
-
-1. Build the project by running `mvn package` in the root directory of your project. This command generates a *.war* file in the *target* directory.
-
-1. Use the following steps to deploy the application on JBoss EAP:
-
- 1. Open the JBoss EAP admin console at `http://<public-ip-address-of-ipconfig1>:9990`.
- 1. Deploy the *.war* file using the admin console by uploading the file in the **Deployments** section.
-
- :::image type="content" source="media/jboss-eap-single-server-azure-vm/jboss-eap-console-upload-content.png" alt-text="Screenshot of the JBoss EAP management console Deployments tab." lightbox="media/jboss-eap-single-server-azure-vm/jboss-eap-console-upload-content.png":::
-
-1. After deployment, access your "Hello World" application by navigating to `http://<public-ip-address-of-ipconfig1>:8080/helloworld/hello` in your web browser.
-
-## Clean up resources
-
-To avoid Azure charges, you should clean up unnecessary resources. Run the following command to remove the resource group, VM, network interface, virtual network, and all related resources.
-
-```azurecli
-az group delete --name <resource-group-name> --yes --no-wait
-```
-
-## Next steps
-
-For more information about deploying JBoss EAP on Azure, see [Red Hat JBoss EAP on Azure](/azure/developer/java/ee/jboss-on-azure?toc=/azure/virtual-machines/workloads/redhat/toc.json&bc=/azure/virtual-machines/workloads/redhat/breadcrumb/toc.json).
virtual-network-manager Concept Connectivity Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/concept-connectivity-configuration.md
Previously updated : 06/10/2024 Last updated : 06/26/2024
To assist you in understanding the topology of your network group, Azure Virtual
1. Select the **Preview Topology** tab to test out the Topology View and review your configuration's current connectivity. 1. Complete the creation of your connectivity configuration.
-> [!NOTE]
-> The Topology View is only available during the creation of your connectivity configuration in the Azure portal. Once the configuration is created, you can no longer view the topology.
+You can review the current topology of a network group by selecting **Visualization** under **Settings** in the network group's details page. The view shows the connectivity between the member virtual networks in the network group.
+ ### Use cases
virtual-network-manager Create Virtual Network Manager Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/create-virtual-network-manager-portal.md
In this quickstart, you deploy three virtual networks and use Azure Virtual Netw
## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-- To modify dynamic network groups, you must be [granted access via Azure RBAC role](concept-network-groups.md#network-groups-and-azure-policy) assignment only. Classic Admin/legacy authorization is not supported.
+- To modify dynamic network groups, you must be [granted access via Azure RBAC role](concept-network-groups.md#network-groups-and-azure-policy) assignment only. Classic Admin/legacy authorization isn't supported.
[!INCLUDE [virtual-network-manager-create-instance](../../includes/virtual-network-manager-create-instance.md)]
Create three virtual networks by using the portal. Each virtual network has a `n
| - | -- | | **Subscription** | Select the same subscription that you selected in step 2. | | **Resource group** | Select **rg-learn-eastus-001**. |
- | **Name** | Enter **vnet-learn-prod-eastus-002** and **vnet-learn-test-eastus-003** for each additional virtual network. |
+ | **Name** | Enter **vnet-learn-prod-eastus-002** and **vnet-learn-test-eastus-003** for the other virtual networks. |
| **Region** | Select **(US) East US**. | | **vnet-learn-prod-eastus-002 IP addresses** | IPv4 address space: **10.1.0.0/16** </br> Subnet name: **default** </br> Subnet address space: **10.1.0.0/24**| | **vnet-learn-test-eastus-003 IP addresses** | IPv4 address space: **10.2.0.0/16** </br> Subnet name: **default** </br> Subnet address space: **10.2.0.0/24**|
Create three virtual networks by using the portal. Each virtual network has a `n
Virtual Network Manager applies configurations to groups of virtual networks by placing them in network groups. To create a network group:
-1. Browse to the **rg-learn-eastus-001** resource group, and select the **vnm-learn-eastus-001** Virtual Network Manager instance.
-
-1. Under **Settings**, select **Network groups**. Then select **Create**.
-
- :::image type="content" source="./media/create-virtual-network-manager-portal/add-network-group-2.png" alt-text="Screenshot of an empty list of network groups and the button for creating a network group.":::
-
-1. On the **Create a network group** pane, enter **ng-learn-prod-eastus-001** and select **Create**.
-
- :::image type="content" source="./media/create-virtual-network-manager-portal/create-network-group.png" alt-text="Screenshot of the pane for creating a network group." lightbox="./media/create-virtual-network-manager-portal/create-network-group.png":::
-
-1. Confirm that the new network group is now listed on the **Network groups** pane.
-
- :::image type="content" source="./media/create-virtual-network-manager-portal/network-groups-list.png" alt-text="Screenshot of a newly created network group on the pane that list network groups.":::
## Define membership for a connectivity configuration
After you create your network group, you add virtual networks as members. Choose
In this task, you manually add two virtual networks for your mesh configuration to your network group:
-1. From the list of network groups, select **ng-learn-prod-eastus-001**. On the **ng-learn-prod-eastus-001** pane, under **Manually add members**, select **Add virtual networks**.
+1. From the list of network groups, select **ng-learn-prod-eastus-001**. On the **ng-learn-prod-eastus-001** pane, under **Manually add members**, select **Add virtual networks**.
:::image type="content" source="./media/create-virtual-network-manager-portal/add-static-member.png" alt-text="Screenshot of add a virtual network f.":::
In this task, you manually add two virtual networks for your mesh configuration
By using [Azure Policy](concept-azure-policy-integration.md), you define a condition to dynamically add two virtual networks to your network group when the name of the virtual network includes *prod*:
-1. From the list of network groups, select **ng-learn-prod-eastus-001**. Under **Create policy to dynamically add members**, select **Create Azure policy**.
+1. From the list of network groups, select **ng-learn-prod-eastus-001**. Under **Create policy to dynamically add members**, select **Create Azure policy**.
:::image type="content" source="media/create-virtual-network-manager-portal/define-dynamic-membership.png" alt-text="Screenshot of the button for creating an Azure policy.":::
By using [Azure Policy](concept-azure-policy-integration.md), you define a condi
| **Operator** | Select **Contains** from the dropdown list.| | **Condition** | Enter **-prod**. |
-1. The **Effective virtual networks** pane shows the virtual networks that will be added to the network group based on the conditions that you defined in Azure Policy. When you're ready, select **Close**.
+2. The **Effective virtual networks** pane shows the virtual networks that the policy adds to the network group, based on the conditions you defined in Azure Policy. When you're ready, select **Close**.
:::image type="content" source="media/create-virtual-network-manager-portal/effective-virtual-networks.png" alt-text="Screenshot of the pane for effective virtual networks.":::
-1. Select **Save** to deploy the group membership. It can take up to one minute for the policy to take effect and be added to your network group.
+3. Select **Save** to deploy the group membership. It can take up to one minute for the policy to take effect and be added to your network group.
-1. On the **Network Group** pane, under **Settings**, select **Group members** to view the membership of the group based on the conditions that you defined in Azure Policy. Confirm that **Source** is listed as **azpol-learn-prod-eastus-001 - subscriptions/subscription_id**.
+4. On the **Network Group** pane, under **Settings**, select **Group members** to view the membership of the group based on the conditions that you defined in Azure Policy. Confirm that **Source** is listed as **azpol-learn-prod-eastus-001 - subscriptions/subscription_id**.
:::image type="content" source="media/create-virtual-network-manager-portal/group-members-list.png" alt-text="Screenshot of listed group members with a configured source." lightbox="media/create-virtual-network-manager-portal/group-members-list.png":::
By using [Azure Policy](concept-azure-policy-integration.md), you define a condi
## Create a configuration
-Now that you've created the network group and given it the correct virtual networks, create a mesh network topology configuration. Replace `<subscription_id>` with your subscription.
+Now that you created the network group and updated its membership with virtual networks, you create a mesh network topology configuration. Replace `<subscription_id>` with your subscription.
1. Under **Settings**, select **Configurations**. Then select **Create**.
Now that you've created the network group and given it the correct virtual netwo
| **Name** | Enter **cc-learn-prod-eastus-001**. | | **Description** | *(Optional)* Provide a description about this connectivity configuration. |
-1. On the **Topology** tab, select the **Mesh** topology if it's not selected, and leave the **Enable mesh connectivity across regions** checkbox cleared. Cross-region connectivity isn't required for this setup, because all the virtual networks are in the same region. When you're ready, select **Add** > **Add network group**.
+1. On the **Topology** tab, select the **Mesh** topology, and leave the **Enable mesh connectivity across regions** checkbox cleared. Cross-region connectivity isn't required for this setup, because all the virtual networks are in the same region. When you're ready, select **Add** > **Add network group**.
:::image type="content" source="./media/create-virtual-network-manager-portal/topology-configuration.png" alt-text="Screenshot of topology selection for network group connectivity configuration.":::
To apply your configurations to your environment, you need to commit the configu
| Setting | Value |
| - | -- |
| **Configurations** | Select **Include connectivity configurations in your goal state**. |
- | **Connectivity configurations** | Select **cc-learn-prod-eastus-001**. |
+ | **Connectivity configurations** | Select **cc-learn-prod-eastus-001**. |
| **Target regions** | Select **East US** as the deployment region. |

1. Select **Deploy** to complete the deployment.
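The same commit can be scripted. A hedged Azure PowerShell sketch, with placeholder resource group and network manager names:

```azurepowershell
# Placeholder resource group and network manager names.
$rgName   = 'rg-learn-eastus-001'
$avnmName = 'vnm-learn-eastus-001'

# Look up the connectivity configuration created earlier.
$config = Get-AzNetworkManagerConnectivityConfiguration -ResourceGroupName $rgName `
    -NetworkManagerName $avnmName -Name 'cc-learn-prod-eastus-001'

# Commit (deploy) the configuration to the East US region.
Deploy-AzNetworkManagerCommit -ResourceGroupName $rgName -Name $avnmName `
    -TargetLocation @('eastus') -ConfigurationId @($config.Id) -CommitType 'Connectivity'
```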
If you no longer need Azure Virtual Network Manager, you can remove it after you
## Next steps
-Now that you've created an Azure Virtual Network Manager instance, learn how to block network traffic by using a security admin configuration:
- > [!div class="nextstepaction"] > [Block network traffic with Azure Virtual Network Manager](how-to-block-network-traffic-portal.md)
virtual-network-manager Create Virtual Network Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/create-virtual-network-manager-template.md
Get started with Azure Virtual Network Manager by using Azure Resource Manager t
In this quickstart, an Azure Resource Manager template is used to deploy Azure Virtual Network Manager with different connectivity topologies and network group membership types. Use deployment parameters to specify the type of configuration to deploy. If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template opens in the Azure portal.
virtual-network-manager How To Create Hub And Spoke https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/how-to-create-hub-and-spoke.md
Previously updated : 05/07/2024 Last updated : 06/20/2024
In this article, you learn how to create a hub and spoke network topology with A
This section helps you create a network group containing the virtual networks you're using for the hub-and-spoke network topology.
-1. Go to your Azure Virtual Network Manager instance. This how-to guide assumes you've created one using the [quickstart](create-virtual-network-manager-portal.md) guide.
-
-1. Select **Network Groups** under *Settings*, then select **+ Create**.
-
- :::image type="content" source="./media/create-virtual-network-manager-portal/add-network-group-2.png" alt-text="Screenshot of add a network group button.":::
-
-1. On the *Create a network group* page, enter a **Name** for the network group. This example uses the name **myNetworkGroup**. Select **Add** to create the network group.
-
- :::image type="content" source="./media/create-virtual-network-manager-portal/network-group-basics.png" alt-text="Screenshot of create a network group page.":::
+> [!NOTE]
+> This how-to guide assumes you created a network manager instance using the [quickstart](create-virtual-network-manager-portal.md) guide.
-1. The *Network Groups* page lists the new network group.
- :::image type="content" source="./media/create-virtual-network-manager-portal/network-groups-list.png" alt-text="Screenshot of network group page with list of network groups.":::
## Define network group members
virtual-network-manager How To Create Mesh Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/how-to-create-mesh-network.md
In this article, you learn how to create a mesh network topology using Azure Vir
This section helps you create a network group containing the virtual networks you're using for the mesh network topology.
-1. Go to your Azure Virtual Network Manager instance. This how-to guide assumes you've created one using the [quickstart](create-virtual-network-manager-portal.md) guide.
+> [!NOTE]
+> This how-to guide assumes you created a network manager instance using the [quickstart](create-virtual-network-manager-portal.md) guide.
-1. Select **Network Groups** under *Settings*, then select **+ Create**.
-
- :::image type="content" source="./media/create-virtual-network-manager-portal/add-network-group-2.png" alt-text="Screenshot of add a network group button.":::
-
-1. On the *Create a network group* page, enter a **Name** for the network group. This example uses the name **myNetworkGroup**. Select **Add** to create the network group.
-
- :::image type="content" source="./media/create-virtual-network-manager-portal/network-group-basics.png" alt-text="Screenshot of create a network group page.":::
-
-1. The *Network Groups* page now lists the new network group.
- :::image type="content" source="./media/create-virtual-network-manager-portal/network-groups-list.png" alt-text="Screenshot of network group page with list of network groups.":::
## Define network group members Azure Virtual Network Manager offers two methods for adding members to a network group. You can manually add virtual networks or use Azure Policy to dynamically add virtual networks based on conditions. This how-to covers [manually adding membership](concept-network-groups.md#static-membership). For information on defining group membership with Azure Policy, see [Define network group membership with Azure Policy](concept-network-groups.md#dynamic-membership).
virtual-network-manager Tutorial Create Secured Hub And Spoke https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/tutorial-create-secured-hub-and-spoke.md
Previously updated : 08/01/2023 Last updated : 06/26/2024
Deploy a virtual network gateway into the hub virtual network. This virtual netw
1. Select **Review + create** and then select **Create** after validation has passed. The deployment of a virtual network gateway can take about 30 minutes. You can move on to the next section while waiting for this deployment to complete. However, **gw-learn-hub-eastus-001** might not yet show that it has a gateway because of timing and sync delays across the Azure portal.
-## Create a dynamic network group
+## Create a network group
-1. Go to your Azure Virtual Network Manager instance. This tutorial assumes you've created one using the [quickstart](create-virtual-network-manager-portal.md) guide. The network group in this tutorial is called **ng-learn-prod-eastus-001**.
+> [!NOTE]
+> This tutorial assumes you created a network manager instance using the [quickstart](create-virtual-network-manager-portal.md) guide. The network group in this tutorial is called **ng-learn-prod-eastus-001**.
-1. Select **Network groups** under *Settings*, and then select **+ Create** to create a new network group.
- :::image type="content" source="./media/create-virtual-network-manager-portal/add-network-group-2.png" alt-text="Screenshot of add a network group button.":::
+## Define dynamic group membership with Azure Policy
-1. On the **Create a network group** screen, enter the following information:
-
- :::image type="content" source="./media/create-virtual-network-manager-portal/create-network-group.png" alt-text="Screenshot of the Basics tab on Create a network group page.":::
-
- | Setting | Value |
- | - | -- |
- | Name | Enter **ng-learn-prod-eastus-001** for the network group name. |
- | Description | Provide a description about this network group. |
-
-1. Select **Create** to create the virtual network group.
-1. From the **Network groups** page, select the created network group from above to configure the network group.
-1. On the **Overview** page, select **Create Azure Policy** under *Create policy to dynamically add members*.
+1. From the list of network groups, select **ng-learn-prod-eastus-001**. Under **Create policy to dynamically add members**, select **Create Azure policy**.
:::image type="content" source="./media/tutorial-create-secured-hub-and-spoke/define-dynamic-membership.png" alt-text="Screenshot of the defined dynamic membership button.":::
virtual-network Deploy Container Networking Docker Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/deploy-container-networking-docker-linux.md
The Azure CNI plugin enables per container/pod networking for stand-alone docker
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). It can take a few minutes for the Bastion host to deploy. You can continue with the steps while the Bastion host is deploying. ## Add IP configuration
For more information about the Azure CNI plugin, see [Microsoft Azure Container
``` :::image type="content" source="./media/deploy-container-networking-docker-linux/ifconfig-output.png" alt-text="Screenshot of ifconfig output in Bash prompt of test container."::: ## Next steps
virtual-network Deploy Container Networking Docker Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/deploy-container-networking-docker-windows.md
The Azure CNI plugin enables per container/pod networking for stand-alone docker
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). It can take a few minutes for the network and Bastion host to deploy. Continue with the next steps when the deployment is complete or the virtual network creation is complete. ## Add IP configuration
The script that creates the containers with the Azure CNI plugin requires the ap
1. Exit the container and close the Bastion connection to **vm-1**. ## Next steps
virtual-network Diagnose Network Routing Problem https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/diagnose-network-routing-problem.md
Though effective routes were viewed through the VM in the previous steps, you ca
## Diagnose using PowerShell You can run the commands that follow in the [Azure Cloud Shell](https://shell.azure.com/powershell), or by running PowerShell from your computer. The Azure Cloud Shell is a free interactive shell. It has common Azure tools preinstalled and configured to use with your account. If you run PowerShell from your computer, you need the Azure PowerShell module, version 1.0.0 or later. Run `Get-Module -ListAvailable Az` on your computer to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to log into Azure with an account that has the [necessary permissions](virtual-network-network-interface.md#permissions).
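As a minimal sketch of the checks described above, assuming placeholder NIC (`myVMNic`) and resource group (`myResourceGroup`) names:

```azurepowershell
# Confirm the installed Az module version, then sign in.
Get-Module -ListAvailable Az
Connect-AzAccount

# View the effective routes for a VM's network interface (placeholder names).
Get-AzEffectiveRouteTable -NetworkInterfaceName 'myVMNic' -ResourceGroupName 'myResourceGroup' |
    Format-Table -AutoSize
```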
virtual-network Diagnose Network Traffic Filter Problem https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/diagnose-network-traffic-filter-problem.md
Though effective security rules were viewed through the VM, you can also view ef
## Diagnose using PowerShell You can run the commands that follow in the [Azure Cloud Shell](https://shell.azure.com/powershell), or by running PowerShell from your computer. The Azure Cloud Shell is a free interactive shell. It has common Azure tools preinstalled and configured to use with your account. If you run PowerShell from your computer, you need the Azure PowerShell module, version 1.0.0 or later. Run `Get-Module -ListAvailable Az` on your computer to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to log into Azure with an account that has the [necessary permissions](virtual-network-network-interface.md#permissions).
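A compact sketch of the equivalent query for effective security rules, again with placeholder names:

```azurepowershell
# Placeholder NIC and resource group names; requires a signed-in session.
Get-AzEffectiveNetworkSecurityGroup -NetworkInterfaceName 'myVMNic' -ResourceGroupName 'myResourceGroup' |
    Select-Object -ExpandProperty EffectiveSecurityRules |
    Format-Table Name, Direction, Access, Priority -AutoSize
```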
virtual-network How To Create Encryption Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/how-to-create-encryption-portal.md
Azure Virtual Network encryption is a feature of Azure Virtual Network. Virtual
- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). > [!IMPORTANT] > Azure Virtual Network encryption requires supported virtual machine SKUs in the virtual network for traffic to be encrypted. The setting **dropUnencrypted** will drop traffic between unsupported virtual machine SKUs if they are deployed in the virtual network. For more information, see [Azure Virtual Network encryption requirements](virtual-network-encryption-overview.md#requirements).
Use the following steps to enable encryption for a virtual network.
:::image type="content" source="./media/how-to-create-encryption-portal/virtual-network-properties-encryption-enabled.png" alt-text="Screenshot of properties of the virtual network with encryption enabled."::: ## Next steps
virtual-network How To Dhcp Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/how-to-dhcp-azure.md
Learn how to deploy a highly available DHCP server in Azure on a virtual machine
- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). ## Create internal load balancer
virtual-network Associate Public Ip Address Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/associate-public-ip-address-vm.md
In this article, you learn how to associate a public IP address to an existing v
Public IP addresses have a nominal fee. For details, see [pricing](https://azure.microsoft.com/pricing/details/ip-addresses/). There's a limit to the number of public IP addresses that you can use per subscription. For details, see [limits](../../azure-resource-manager/management/azure-subscription-service-limits.md?toc=%2fazure%2fvirtual-network%2ftoc.json#publicip-address). ## Prerequisites
Install [Azure PowerShell](/powershell/azure/install-azure-powershell) on your m
1. Open the necessary ports in your security groups by adjusting the security rules in the network security groups. For information, see [Allow network traffic to the VM](#allow-network-traffic-to-the-vm).
+> [!NOTE]
+> To share a VM with an external user, you must add a public IP address to the VM. Alternatively, external users can connect to the VM's private IP address through Azure Bastion.
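A short sketch of associating an existing public IP with a VM's NIC in PowerShell, using placeholder names:

```azurepowershell
# Placeholder names; the public IP must be in the same region as the NIC.
$nic = Get-AzNetworkInterface -Name 'myVMNic' -ResourceGroupName 'myResourceGroup'
$pip = Get-AzPublicIpAddress -Name 'myPublicIP' -ResourceGroupName 'myResourceGroup'

# Attach the public IP to the NIC's primary IP configuration, then save the change.
$nic.IpConfigurations[0].PublicIpAddress = $pip
Set-AzNetworkInterface -NetworkInterface $nic
```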
## Allow network traffic to the VM
virtual-network Configure Public Ip Bastion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/configure-public-ip-bastion.md
An Azure Bastion host requires a public IP address for its configuration.
In this article, you learn how to create an Azure Bastion host using an existing public IP in your subscription. Azure Bastion doesn't support changing the public IP address after creation. Azure Bastion supports assigning an IP address within an IP prefix range but not assigning the IP prefix range itself. >[!NOTE]
->[!INCLUDE [Pricing](../../../includes/bastion-pricing.md)]
+>[!INCLUDE [Pricing](~/reusable-content/ce-skilling/azure/includes/bastion-pricing.md)]
## Prerequisites
virtual-network Create Custom Ip Address Prefix Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-custom-ip-address-prefix-cli.md
The steps in this article detail the process to:
* Enable the range to be advertised by Microsoft [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
virtual-network Create Public Ip Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-public-ip-cli.md
In this quickstart, you learn how to create an Azure public IP address. Public I
:::image type="content" source="./media/create-public-ip-portal/public-ip-example-resources.png" alt-text="Diagram of an example use of a public IP address. A public IP address is assigned to a load balancer."::: [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
virtual-network Create Public Ip Prefix Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-public-ip-prefix-cli.md
Learn about a public IP address prefix and how to create, change, and delete one
When you create a public IP address resource, you can assign a static public IP address from the prefix and associate the address to virtual machines, load balancers, or other resources. For more information, see [Public IP address prefix overview](public-ip-address-prefix.md). [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
virtual-network Public Ip Addresses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/public-ip-addresses.md
There are other attributes that can be used for a public IP address.
> At this time, both the **Tier** and **Routing Preference** features are available for standard SKU IPv4 addresses only. They can't be used on the same IP address concurrently. > ## Limits
virtual-network Routing Preference Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/routing-preference-cli.md
This article shows you how to configure routing preference via ISP network (**In
By default, the routing preference for public IP address is set to the Microsoft global network for all Azure services and can be associated with any Azure service. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
virtual-network Routing Preference Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/routing-preference-powershell.md
By default, the routing preference for public IP address is set to the Microsoft
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) now. If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 6.9.0 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure. ## Create a resource group
virtual-network Virtual Network Deploy Static Pip Arm Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/virtual-network-deploy-static-pip-arm-cli.md
View the public IP address assigned and confirm that it was created as a static
> [!WARNING] > Do not modify the IP address settings within the virtual machine's operating system. The operating system is unaware of Azure public IP addresses. Though you can add private IP address settings to the operating system, we recommend not doing so unless necessary, and not until after reading [Add a private IP address to an operating system](virtual-network-network-interface-addresses.md#private). ## Clean up resources
virtual-network Virtual Network Deploy Static Pip Arm Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/virtual-network-deploy-static-pip-arm-portal.md
Sign in to the [Azure portal](https://portal.azure.com).
> [!WARNING] > Do not modify the IP address settings within the virtual machine's operating system. The operating system is unaware of Azure public IP addresses. Though you can add private IP address settings to the operating system, we recommend not doing so unless necessary. For more information, see [Add a private IP address to an operating system](./virtual-network-network-interface-addresses.md#private). ## Clean up resources
virtual-network Virtual Network Deploy Static Pip Arm Ps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/virtual-network-deploy-static-pip-arm-ps.md
Get-AzPublicIpAddress @ip | Select "IpAddress","PublicIpAllocationMethod" | Form
> [!WARNING] > Do not modify the IP address settings within the virtual machine's operating system. The operating system is unaware of Azure public IP addresses. Though you can add private IP address settings to the operating system, we recommend not doing so unless necessary, and not until after reading [Add a private IP address to an operating system](virtual-network-network-interface-addresses.md#private). ## Clean up resources
virtual-network Virtual Network Multiple Ip Addresses Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/virtual-network-multiple-ip-addresses-cli.md
Title: Assign multiple IP addresses to VMs - Azure CLI
description: Learn how to create a virtual machine with multiple IP addresses using the Azure CLI. Previously updated : 08/24/2023 Last updated : 06/21/2024
virtual-network Virtual Network Public Ip Address https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/virtual-network-public-ip-address.md
When you assign a public IP address to an Azure resource, you enable the followi
- Outbound connectivity to the Internet using a predictable IP address. ## Create a public IP address
virtual-network Virtual Networks Static Private Ip Arm Pportal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/virtual-networks-static-private-ip-arm-pportal.md
Use the following steps to create a VM, and its virtual network and subnet:
1. Select **Review + create**. Review the settings, and then select **Create**. ## Change private IP address to static
virtual-network Manage Subnet Delegation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/manage-subnet-delegation.md
Output from command is a null bracket:
## Next steps - Learn how to [manage subnets in Azure](virtual-network-manage-subnet.md).
virtual-network Network Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/network-overview.md
This table lists the methods that you can use to create an IP address.
After you create a public IP address, you can associate it with a VM by assigning it to a NIC. ## Virtual network and subnets
This table lists the methods that you can use to create a NAT gateway resource.
Azure Bastion is deployed to provide secure management connectivity to virtual machines in a virtual network. The Azure Bastion service enables you to securely and seamlessly connect over RDP and SSH to the VMs in your virtual network, without exposing a public IP on the VM. Connections are made directly from the Azure portal, without the need for an extra client, agent, or piece of software. Azure Bastion supports standard SKU public IP addresses.
- [!INCLUDE [Pricing](../../includes/bastion-pricing.md)]
+ [!INCLUDE [Pricing](~/reusable-content/ce-skilling/azure/includes/bastion-pricing.md)]
For more information about Azure Bastion, see [What is Azure Bastion?](../bastion/bastion-overview.md).
virtual-network Quick Create Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/quick-create-bicep.md
A virtual network is the fundamental building block for private networks in Azur
:::image type="content" source="./media/quick-create-bicep/virtual-network-bicep-resources.png" alt-text="Diagram of resources created in the virtual network quickstart." lightbox="./media/quick-create-bicep/virtual-network-bicep-resources.png"::: ## Prerequisites
When the deployment finishes, a message indicates that the deployment succeeded.
Bastion uses your browser to connect to VMs in your virtual network over Secure Shell (SSH) or Remote Desktop Protocol (RDP) by using their private IP addresses. The VMs don't need public IP addresses, client software, or special configuration. For more information about Bastion, see [What is Azure Bastion?](~/articles/bastion/bastion-overview.md). > [!NOTE]
-> [!INCLUDE [Pricing](../../includes/bastion-pricing.md)]
+> [!INCLUDE [Pricing](~/reusable-content/ce-skilling/azure/includes/bastion-pricing.md)]
Use the [Azure Bastion as a Service](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.network/azure-bastion/main.bicep) Bicep template from [Azure Resource Manager Quickstart Templates](https://github.com/Azure/azure-quickstart-templates) to deploy and configure Bastion in your virtual network. This Bicep template defines the following Azure resources:
virtual-network Quick Create Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/quick-create-cli.md
A virtual network is the fundamental building block for private networks in Azur
:::image type="content" source="./media/quick-create-portal/virtual-network-qs-resources.png" alt-text="Diagram of resources created in the virtual network quickstart." lightbox="./media/quick-create-portal/virtual-network-qs-resources.png"::: [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
az network vnet create \
Azure Bastion uses your browser to connect to VMs in your virtual network over Secure Shell (SSH) or Remote Desktop Protocol (RDP) by using their private IP addresses. The VMs don't need public IP addresses, client software, or special configuration. 1. Use [az network vnet subnet create](/cli/azure/network/vnet/subnet#az-network-vnet-subnet-create) to create a Bastion subnet for your virtual network. This subnet is reserved exclusively for Bastion resources and must be named **AzureBastionSubnet**.
The VMs take a few minutes to create. After Azure creates each VM, the Azure CLI
> [!NOTE] > VMs in a virtual network with a Bastion host don't need public IP addresses. Bastion provides the public IP, and the VMs use private IPs to communicate within the network. You can remove the public IPs from any VMs in Bastion-hosted virtual networks. For more information, see [Dissociate a public IP address from an Azure VM](ip-services/remove-public-ip-address-vm.md). ## Connect to a virtual machine
virtual-network Quick Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/quick-create-portal.md
A virtual network is the fundamental building block for private networks in Azur
Sign in to the [Azure portal](https://portal.azure.com) with your Azure account. [!INCLUDE [create-two-virtual-machines.md](../../includes/create-two-virtual-machines.md)]
Sign in to the [Azure portal](https://portal.azure.com) with your Azure account.
1. Close the Bastion connection to **vm-2**. ## Next steps
virtual-network Quick Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/quick-create-powershell.md
New-AzResourceGroup @rg
Azure Bastion uses your browser to connect to VMs in your virtual network over Secure Shell (SSH) or Remote Desktop Protocol (RDP) by using their private IP addresses. The VMs don't need public IP addresses, client software, or special configuration. For more information about Bastion, see [What is Azure Bastion?](/azure/bastion/bastion-overview).
- [!INCLUDE [Pricing](../../includes/bastion-pricing.md)]
+ [!INCLUDE [Pricing](~/reusable-content/ce-skilling/azure/includes/bastion-pricing.md)]
1. Configure a Bastion subnet for your virtual network. This subnet is reserved exclusively for Bastion resources and must be named **AzureBastionSubnet**.
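A hedged sketch of this step, assuming placeholder virtual network (`vnet-1`) and resource group (`test-rg`) names:

```azurepowershell
# The subnet must be named AzureBastionSubnet; /26 is the minimum size.
$vnet = Get-AzVirtualNetwork -Name 'vnet-1' -ResourceGroupName 'test-rg'
Add-AzVirtualNetworkSubnetConfig -Name 'AzureBastionSubnet' -VirtualNetwork $vnet -AddressPrefix '10.0.1.0/26'
$vnet | Set-AzVirtualNetwork
```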
Azure takes a few minutes to create the VMs. When Azure finishes creating the VM
> [!NOTE] > VMs in a virtual network with a Bastion host don't need public IP addresses. Bastion provides the public IP, and the VMs use private IPs to communicate within the network. You can remove the public IPs from any VMs in Bastion-hosted virtual networks. For more information, see [Dissociate a public IP address from an Azure VM](ip-services/remove-public-ip-address-vm.md). ## Connect to a virtual machine
virtual-network Quick Create Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/quick-create-template.md
In this quickstart, you learn how to create a virtual network with two subnets b
:::image type="content" source="./media/quick-create-bicep/virtual-network-bicep-resources.png" alt-text="Diagram of resources created in the virtual network quickstart." lightbox="./media/quick-create-bicep/virtual-network-bicep-resources.png"::: You can also complete this quickstart by using the [Azure portal](quick-create-portal.md), [Azure PowerShell](quick-create-powershell.md), or the [Azure CLI](quick-create-cli.md).
virtual-network Virtual Network Cli Sample Peer Two Virtual Networks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/scripts/virtual-network-cli-sample-peer-two-virtual-networks.md
This script sample creates and connects two virtual networks in the same region through the Azure network. After running the script, you have a peering between two virtual networks. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)] ## Sample script ### Run the script
This script sample creates and connects two virtual networks in the same region
## Clean up deployment ```azurecli az group delete --name $resourceGroup
virtual-network Virtual Network Powershell Sample Peer Two Virtual Networks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/scripts/virtual-network-powershell-sample-peer-two-virtual-networks.md
This script sample creates and connects two virtual networks in the same region
You can execute the script from the Azure [Cloud Shell](https://shell.azure.com/powershell), or from a local PowerShell installation. If you use PowerShell locally, this script requires the Az PowerShell module version 5.4.1 or later. To find the installed version, run `Get-Module -ListAvailable Az`. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure. ## Sample script [!code-azurepowershell-interactive[main](../../../powershell_scripts/virtual-network/peer-two-virtual-networks/peer-two-virtual-networks.ps1 "Peer two networks")]
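The core of such a peering script is roughly the following, with placeholder names:

```azurepowershell
# Placeholder names; both virtual networks are in the same subscription here.
$vnet1 = Get-AzVirtualNetwork -Name 'myVirtualNetwork1' -ResourceGroupName 'myResourceGroup'
$vnet2 = Get-AzVirtualNetwork -Name 'myVirtualNetwork2' -ResourceGroupName 'myResourceGroup'

# A peering is directional, so create one in each direction.
Add-AzVirtualNetworkPeering -Name 'vnet1-to-vnet2' -VirtualNetwork $vnet1 -RemoteVirtualNetworkId $vnet2.Id
Add-AzVirtualNetworkPeering -Name 'vnet2-to-vnet1' -VirtualNetwork $vnet2 -RemoteVirtualNetworkId $vnet1.Id
```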
virtual-network Tutorial Connect Virtual Networks Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/tutorial-connect-virtual-networks-cli.md
In this article, you learn how to:
* Communicate between VMs [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
The VM takes a few minutes to create. After the VM is created, the Azure CLI sho
Take note of the **publicIpAddress**. This address is used to access the VM from the internet in a later step. ## Communicate between VMs
virtual-network Tutorial Connect Virtual Networks Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/tutorial-connect-virtual-networks-portal.md
In this tutorial, you learn how to:
Sign in to the [Azure portal](https://portal.azure.com). Repeat the previous steps to create a second virtual network with the following values:
Repeat the previous steps to create a second virtual network with the following
<a name="peer-virtual-networks"></a> ## Create virtual machines Create a virtual machine in each virtual network to test the communication between them. Repeat the previous steps to create a second virtual machine in the second virtual network with the following values:
Use `ping` to test the communication between the virtual machines.
1. Close the Bastion connection to **vm-2**. ## Next steps
virtual-network Tutorial Connect Virtual Networks Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/tutorial-connect-virtual-networks-powershell.md
In this article, you learn how to:
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 1.0.0 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
New-AzVm @vm2
The VM takes a few minutes to create. Don't continue with the later steps until Azure creates **vm-2** and returns output to PowerShell. ## Communicate between VMs
virtual-network Tutorial Create Route Table Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/tutorial-create-route-table-cli.md
Azure automatically routes traffic between all subnets within a virtual network,
* Deploy virtual machines (VM) into different subnets * Route traffic from one subnet to another through an NVA [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
virtual-network Tutorial Create Route Table Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/tutorial-create-route-table-portal.md
In this tutorial, you learn how to:
Sign in to the [Azure portal](https://portal.azure.com). ## Create subnets
Test routing of network traffic from **vm-public** to **vm-private**. Test routi
1. Close the Bastion session. ## Next steps
virtual-network Tutorial Create Route Table Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/tutorial-create-route-table-powershell.md
Azure automatically routes traffic between all subnets within a virtual network,
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 1.0.0 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
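A minimal sketch of the route table and NVA route this tutorial builds, with placeholder names and addresses:

```azurepowershell
# Placeholder names; 10.0.2.4 stands in for the NVA's private IP address.
$rt = New-AzRouteTable -Name 'route-table-public' -ResourceGroupName 'test-rg' -Location 'eastus'

# Send traffic destined for the private subnet through the NVA.
$rt | Add-AzRouteConfig -Name 'to-private-subnet' -AddressPrefix '10.0.1.0/24' `
    -NextHopType 'VirtualAppliance' -NextHopIpAddress '10.0.2.4' | Set-AzRouteTable
```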
virtual-network Tutorial Filter Network Traffic Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/tutorial-filter-network-traffic-cli.md
You can filter network traffic inbound to and outbound from a virtual network su
* Deploy virtual machines (VM) into a subnet * Test traffic filters [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
virtual-network Tutorial Filter Network Traffic Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/tutorial-filter-network-traffic-powershell.md
You can filter network traffic inbound to and outbound from a virtual network su
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 1.0.0 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
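A minimal sketch of a network security group with one inbound filter rule, using placeholder names:

```azurepowershell
# Allow inbound HTTP from the internet on port 80 (placeholder names and priority).
$rule = New-AzNetworkSecurityRuleConfig -Name 'allow-web' -Access Allow -Protocol Tcp `
    -Direction Inbound -Priority 100 -SourceAddressPrefix Internet -SourcePortRange * `
    -DestinationAddressPrefix * -DestinationPortRange 80

New-AzNetworkSecurityGroup -Name 'nsg-1' -ResourceGroupName 'test-rg' -Location 'eastus' -SecurityRules $rule
```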
virtual-network Tutorial Filter Network Traffic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/tutorial-filter-network-traffic.md
In this tutorial, you learn how to:
Sign in to the [Azure portal](https://portal.azure.com). ## Create application security groups
You see the IIS default page, because inbound traffic from the internet to the *
The network interface attached to **vm-1** is associated with the **asg-web** application security group and allows the connection. ## Next steps
virtual-network Tutorial Restrict Network Access To Resources Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/tutorial-restrict-network-access-to-resources-cli.md
Virtual network service endpoints enable you to limit network access to some Azu
* Confirm access to a resource from a subnet * Confirm access is denied to a resource from a subnet and the internet [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
virtual-network Tutorial Restrict Network Access To Resources Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/tutorial-restrict-network-access-to-resources-powershell.md
Virtual network service endpoints enable you to limit network access to some Azu
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 1.0.0 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
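A minimal sketch of enabling a service endpoint on a subnet, using placeholder names; the address prefix must match the subnet's existing prefix:

```azurepowershell
# Placeholder virtual network, subnet, and resource group names.
$vnet = Get-AzVirtualNetwork -Name 'vnet-1' -ResourceGroupName 'test-rg'

# Enable a Microsoft.Storage service endpoint on the subnet, then save.
Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name 'subnet-private' `
    -AddressPrefix '10.0.2.0/24' -ServiceEndpoint 'Microsoft.Storage'
$vnet | Set-AzVirtualNetwork
```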
virtual-network Tutorial Restrict Network Access To Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/tutorial-restrict-network-access-to-resources.md
This tutorial uses the Azure portal. You can also complete it using the [Azure C
Sign in to the [Azure portal](https://portal.azure.com). ## Enable a service endpoint
By default, all virtual machine instances in a subnet can communicate with any r
The steps required to restrict network access to resources created through Azure services that are enabled for service endpoints vary across services. See the documentation for individual services for specific steps for each service. The rest of this tutorial includes steps to restrict network access for an Azure Storage account, as an example. ### Create a file share in the storage account
To restrict network access to a subnet:
To test network access to a storage account, deploy a virtual machine to each subnet. ### Create the second virtual machine
The virtual machine you created earlier that is assigned to the **subnet-private
>[!NOTE] > The access is denied because your computer isn't in the **subnet-private** subnet of the **vnet-1** virtual network. ## Next steps
virtual-network Virtual Network Network Interface https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-network-interface.md
You can configure the following settings for a NIC:
> >The MAC address remains assigned to the NIC until the NIC is deleted or the private IP address assigned to the primary IP configuration of the primary NIC changes. For more information, see [Configure IP addresses for an Azure network interface](./ip-services/virtual-network-network-interface-addresses.md). ## View network interface settings
virtual-network Virtual Network Nsg Manage Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-nsg-manage-log.md
You can use the [Azure portal](#azure-portal), [Azure PowerShell](#azure-powersh
### Azure PowerShell You can run the commands in this section in the [Azure Cloud Shell](https://shell.azure.com/powershell), or by running PowerShell from your computer. The Azure Cloud Shell is a free interactive shell. It has common Azure tools preinstalled and configured to use with your account.
virtual-network Virtual Network Service Endpoint Policies Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-service-endpoint-policies-cli.md
In this article, you learn how to:
* Confirm access to the allowed storage account from the subnet. * Confirm access is denied to the non-allowed storage account from the subnet. [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
virtual-network Virtual Network Service Endpoint Policies Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-service-endpoint-policies-powershell.md
In this article, you learn how to:
* Confirm access to the allowed storage account from the subnet. * Confirm access is denied to the non-allowed storage account from the subnet. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 1.0.0 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
virtual-network Virtual Network Troubleshoot Cannot Delete Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-troubleshoot-cannot-delete-vnet.md
You might receive errors when you try to delete a virtual network in Microsoft Azure. This article provides troubleshooting steps to help you resolve this problem. ## Troubleshooting guidance
virtual-network Virtual Network Troubleshoot Connectivity Problem Between Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-troubleshoot-connectivity-problem-between-vms.md
You might experience connectivity problems between Azure virtual machines (VMs). This article provides troubleshooting steps to help you resolve this problem. ## Symptom
For more information, see [Add network interfaces to or remove from virtual mach
### Step 2: Check whether network traffic is blocked by NSG or UDR
-Use [Network Watcher IP Flow Verify](../network-watcher/network-watcher-ip-flow-verify-overview.md) and [Connection troubleshoot](../network-watcher/network-watcher-connectivity-overview.md) to determine whether there's a Network Security Group (NSG) or User-Defined Route (UDR) that is interfering with traffic flow.
+Use [Network Watcher IP Flow Verify](../network-watcher/network-watcher-ip-flow-verify-overview.md) and [Connection troubleshoot](../network-watcher/network-watcher-connectivity-overview.md) to determine whether there's a Network Security Group (NSG) or User-Defined Route (UDR) that is interfering with traffic flow. You might need to add inbound rules to both NSGs: the one applied at the subnet level and the one applied at the virtual machine's network interface level.
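A short sketch of running IP flow verify from PowerShell, with placeholder names, addresses, and ports:

```azurepowershell
# Placeholder values; LocalIPAddress/LocalPort refer to the target VM.
$nw = Get-AzNetworkWatcher -Location 'eastus'
$vm = Get-AzVM -Name 'vm-2' -ResourceGroupName 'test-rg'

# Check whether inbound RDP from a remote peer would be allowed or denied.
Test-AzNetworkWatcherIPFlow -NetworkWatcher $nw -TargetVirtualMachineId $vm.Id `
    -Direction Inbound -Protocol TCP -LocalIPAddress '10.0.1.5' -LocalPort '3389' `
    -RemoteIPAddress '10.0.0.4' -RemotePort '60000'
```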
### Step 3: Check whether network traffic is blocked by VM firewall
If you can't connect to a VM network share, the problem may be caused by unavail
Use [Network Watcher IP Flow Verify](../network-watcher/network-watcher-ip-flow-verify-overview.md) and [NSG Flow Logging](../network-watcher/network-watcher-nsg-flow-logging-overview.md) to determine whether there's an NSG or UDR that is interfering with traffic flow. You can also verify your Inter-VNet configuration [here](https://support.microsoft.com/en-us/help/4032151/configuring-and-validating-vnet-or-vpn-connections). ### Need help? Contact support.
-If you still need help, [contact support](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade) to get your issue resolved quickly.
+If you still need help, [contact support](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade) to get your issue resolved quickly.
virtual-network Virtual Network Troubleshoot Nva https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-troubleshoot-nva.md
# Network virtual appliance issues in Azure You may experience VM or VPN connectivity issues and errors when using a third-party Network Virtual Appliance (NVA) in Microsoft Azure. This article provides basic steps to help you validate Azure platform requirements for NVA configurations.
Technical support for third-party NVAs and their integration with the Azure plat
> [!NOTE] > If you have a connectivity or routing problem that involves an NVA, you should [contact the vendor of the NVA](https://mskb.pkisolutions.com/kb/2984655) directly. ## Checklist for troubleshooting with NVA vendor
virtual-network Virtual Networks Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-networks-overview.md
Azure resources communicate securely with each other in one of the following way
- **Virtual network**: You can deploy VMs and other types of Azure resources in a virtual network. Examples of resources include App Service Environments, Azure Kubernetes Service (AKS), and Azure Virtual Machine Scale Sets. To view a complete list of Azure resources that you can deploy in a virtual network, see [Deploy dedicated Azure services into virtual networks](virtual-network-for-azure-services.md).
+> [!NOTE]
+> To move a virtual machine from one virtual network to another, you must delete and recreate the virtual machine in the new virtual network. The virtual machine's disks can be retained for use in the new virtual machine.
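A hedged sketch of that delete-and-recreate flow for a managed-disk Windows VM, with placeholder names; the new NIC must already exist in the destination virtual network:

```azurepowershell
# Placeholder names throughout. Deleting the VM object keeps its disks.
Remove-AzVM -Name 'myVM' -ResourceGroupName 'myResourceGroup'

# Retrieve the retained OS disk and a NIC created in the new virtual network.
$disk = Get-AzDisk -ResourceGroupName 'myResourceGroup' -DiskName 'myVM_OsDisk'
$nic  = Get-AzNetworkInterface -Name 'myVMNicNew' -ResourceGroupName 'myResourceGroup'

# Rebuild the VM by attaching the existing OS disk.
$vm = New-AzVMConfig -VMName 'myVM' -VMSize 'Standard_D2s_v3'
$vm = Set-AzVMOSDisk -VM $vm -ManagedDiskId $disk.Id -CreateOption Attach -Windows
$vm = Add-AzVMNetworkInterface -VM $vm -Id $nic.Id
New-AzVM -ResourceGroupName 'myResourceGroup' -Location 'eastus' -VM $vm
```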
+ - **Virtual network service endpoint**: You can extend your virtual network's private address space and the identity of your virtual network to Azure service resources over a direct connection. Examples of resources include Azure Storage accounts and Azure SQL Database. Service endpoints allow you to secure your critical Azure service resources to only a virtual network. To learn more, see [Virtual network service endpoints](virtual-network-service-endpoints-overview.md). - **Virtual network peering**: You can connect virtual networks to each other by using virtual peering. The resources in either virtual network can then communicate with each other. The virtual networks that you connect can be in the same, or different, Azure regions. To learn more, see [Virtual network peering](virtual-network-peering-overview.md).
virtual-wan Create Bgp Peering Hub Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/create-bgp-peering-hub-portal.md
This article helps you configure an Azure Virtual WAN hub router to peer with a Network Virtual Appliance (NVA) in your virtual network by using BGP peering in the Azure portal. The virtual hub router learns routes from the NVA in a spoke VNet that is connected to a virtual WAN hub. The virtual hub router also advertises the virtual network routes to the NVA. For more information, see [Scenario: BGP peering with a virtual hub](scenario-bgp-peering-hub.md). You can also create this configuration using [Azure PowerShell](create-bgp-peering-hub-powershell.md). ## Prerequisites
virtual-wan Create Bgp Peering Hub Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/create-bgp-peering-hub-powershell.md
This article helps you configure an Azure Virtual WAN hub router to peer with a Network Virtual Appliance (NVA) in your virtual network by using BGP peering with Azure PowerShell. The virtual hub router learns routes from the NVA in a spoke VNet that is connected to a virtual WAN hub. The virtual hub router also advertises the virtual network routes to the NVA. For more information, see [Scenario: BGP peering with a virtual hub](scenario-bgp-peering-hub.md). You can also create this configuration using the [Azure portal](create-bgp-peering-hub-portal.md). ## Prerequisites
virtual-wan Cross Tenant Vnet Az Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/cross-tenant-vnet-az-cli.md
This article helps you use Azure Virtual WAN to connect a virtual network to a virtual hub in a different tenant. This architecture is useful if you have client workloads that must be connected to the same network but are on different tenants. For example, as shown in the following diagram, you can connect a non-Contoso virtual network (the remote tenant) to a Contoso virtual hub (the parent tenant). In this article, you learn how to:
virtual-wan Cross Tenant Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/cross-tenant-vnet.md
This article helps you use Azure Virtual WAN to connect a virtual network to a virtual hub in a different tenant. This architecture is useful if you have client workloads that must be connected to the same network but are on different tenants. For example, as shown in the following diagram, you can connect a non-Contoso virtual network (the remote tenant) to a Contoso virtual hub (the parent tenant). In this article, you learn how to:
virtual-wan Manage Secure Access Resources Spoke P2s https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/manage-secure-access-resources-spoke-p2s.md
This article shows you how to use Virtual WAN and Azure Firewall rules and filte
The steps in this article help you create the architecture in the following diagram to allow User VPN clients to access a specific resource (VM1) in a spoke VNet connected to the virtual hub, but not other resources (VM2). Use this architecture example as a basic guideline. ## Prerequisites
virtual-wan Quickstart Any To Any Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/quickstart-any-to-any-template.md
This quickstart describes how to use an Azure Resource Manager template (ARM template) to create an any-to-any scenario where any spoke can reach another spoke. If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
The template used in this quickstart is from [Azure Quickstart Templates](https:
In this quickstart, you'll create an Azure Virtual WAN multi-hub deployment, including all gateways and VNet connections. The list of input parameters has been purposely kept to a minimum. The IP addressing scheme can be changed by modifying the variables inside the template. The scenario is explained further in the [Any-to-any scenario](scenario-any-to-any.md) article. This template creates a fully functional Azure Virtual WAN environment with the following resources:
virtual-wan Quickstart Route Shared Services Vnet Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/quickstart-route-shared-services-vnet-template.md
This quickstart describes how to use an Azure Resource Manager template (ARM template) to set up routes to access a shared service VNet with workloads that you want every VNet and Branch (VPN/ER/P2S) to access. Examples of these shared workloads might include virtual machines with services like domain controllers or file shares, or Azure services exposed internally through [Azure Private Endpoint](../private-link/private-endpoint-overview.md). If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
virtual-wan Virtual Wan Point To Site Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-point-to-site-azure-ad.md
In this article, you learn how to:
* Download and apply the User VPN client configuration * View your virtual WAN ## Before you begin
virtual-wan Virtual Wan Point To Site Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-point-to-site-portal.md
In this tutorial, you learn how to:
> * View your virtual WAN > * Modify settings ## Prerequisites
virtual-wan Virtual Wan Point To Site Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-point-to-site-powershell.md
This article shows you how to use Virtual WAN to connect to your resources in Azure. In this article, you create a point-to-site User VPN connection over OpenVPN or IPsec/IKE (IKEv2) using PowerShell. This type of connection requires the native VPN client to be configured on each connecting client computer. Most of the steps in this article can be performed using Azure Cloud Shell, except for uploading certificates for certificate authentication. ## Prerequisites
virtual-wan Virtual Wan Route Table Nva Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-route-table-nva-portal.md
This article shows you how to steer traffic from a branch (on-premises site) connected to the Virtual WAN hub to a Spoke virtual network (VNet) via a Network Virtual Appliance (NVA).
-![Virtual WAN diagram](./media/virtual-wan-route-table-nva/vwanroute.png)
## Before you begin
virtual-wan Virtual Wan Route Table Nva https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-route-table-nva.md
This article shows you how to steer traffic from a Virtual Hub to a Network Virtual Appliance.
-![Virtual WAN diagram](./media/virtual-wan-route-table-nva/vwanroute.png)
In this article you learn how to:
In this article you learn how to:
## Before you begin Verify that you have met the following criteria:
virtual-wan Vpn Over Expressroute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/vpn-over-expressroute.md
This article shows you how to use Azure Virtual WAN to establish an IPsec/IKE VP
The following diagram shows an example of VPN connectivity over ExpressRoute private peering: The diagram shows a network within the on-premises network connected to the Azure hub VPN gateway over ExpressRoute private peering. The connectivity establishment is straightforward:
vpn-gateway Create Routebased Vpn Gateway Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/create-routebased-vpn-gateway-cli.md
A VPN gateway is just one part of a connection architecture to help you securely
* The left side of the diagram shows the virtual network and the VPN gateway that you create by using the steps in this article. * You can later add different types of connections, as shown on the right side of the diagram. For example, you can create [site-to-site](tutorial-site-to-site-portal.md) and [point-to-site](point-to-site-about.md) connections. To view different design architectures that you can build, see [VPN gateway design](design.md). [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
vpn-gateway Troubleshoot Vpn With Azure Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/troubleshoot-vpn-with-azure-diagnostics.md
This article helps you understand the different logs available for VPN Gateway diagnostics and how to use them to effectively troubleshoot VPN gateway issues. The following logs are available* in Azure:
vpn-gateway Vpn Gateway Troubleshoot Site To Site Cannot Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-troubleshoot-site-to-site-cannot-connect.md
After you configure a site-to-site VPN connection between an on-premises network and an Azure virtual network, the VPN connection suddenly stops working and can't be reconnected. This article provides troubleshooting steps to help you resolve this problem. ## Troubleshooting steps
To view the shared key for the Azure VPN connection, use one of the following me
**Azure PowerShell** For the Azure [Resource Manager deployment model](../azure-resource-manager/management/deployment-models.md):
vpn-gateway Vpn Gateway Troubleshoot Site To Site Disconnected Intermittently https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-troubleshoot-site-to-site-disconnected-intermittently.md
You might find that a new or existing Microsoft Azure Site-to-Site VPN connection is not stable or disconnects regularly. This article provides troubleshooting steps to help you identify and resolve the cause of the problem. ## Troubleshooting steps
web-application-firewall Application Gateway Web Application Firewall Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/application-gateway-web-application-firewall-portal.md
In this tutorial, you learn how to:
:::image type="content" source="../media/application-gateway-web-application-firewall-portal/scenario-waf.png" alt-text="Diagram of the Web application firewall example." lightbox="../media/application-gateway-web-application-firewall-portal/scenario-waf.png"::: <!-- If you prefer, you can complete this tutorial using [Azure PowerShell](tutorial-restrict-web-traffic-powershell.md) or [Azure CLI](tutorial-restrict-web-traffic-cli.md). -->
web-application-firewall Configure Waf Custom Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/configure-waf-custom-rules.md
If you choose to install and use Azure PowerShell locally, this script requires the Azure PowerShell module version 1.0.0 or later.
1. To find the version, run `Get-Module -ListAvailable Az`. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell).
2. To create a connection with Azure, run `Connect-AzAccount` (see the snippet below). ## Example script
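Together, those two setup steps look like this in a PowerShell session:

```azurepowershell
# Check which Az module versions are installed
Get-Module -ListAvailable Az

# Sign in to your Azure account
Connect-AzAccount
```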
web-application-firewall Per Site Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/per-site-policies.md
In this article, you learn how to:
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 1.0.0 or later. Run `Get-Module -ListAvailable Az` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Login-AzAccount` to create a connection with Azure.
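As a sketch of the starting point, a standalone WAF policy (which you later associate with an individual site or listener) can be created like this; the name, location, and mode below are placeholder assumptions:

```azurepowershell
# Policy settings: Prevention mode, enabled
$setting = New-AzApplicationGatewayFirewallPolicySetting -Mode Prevention -State Enabled

# Create the standalone WAF policy; attach it to a listener later via its resource ID
New-AzApplicationGatewayFirewallPolicy -Name "site1-waf-policy" `
    -ResourceGroupName "TestRG1" -Location "eastus" -PolicySetting $setting
```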
web-application-firewall Quick Create Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/quick-create-bicep.md
In this quickstart, you use Bicep to create an Azure Web Application Firewall v2 on Application Gateway. ## Prerequisites
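Once you have a Bicep file authored, deploying it from PowerShell typically looks like this (the resource group and file name are placeholder assumptions, and the Bicep CLI must be installed):

```azurepowershell
# Create a resource group and deploy the Bicep file into it
New-AzResourceGroup -Name "TestRG1" -Location "eastus"
New-AzResourceGroupDeployment -ResourceGroupName "TestRG1" -TemplateFile "./main.bicep"
```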
web-application-firewall Quick Create Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/quick-create-template.md
In this quickstart, you use an Azure Resource Manager template (ARM template) to create an Azure Web Application Firewall (WAF) v2 on Azure Application Gateway. If your environment meets the prerequisites and you're familiar with using ARM templates, you can select the **Deploy to Azure** button to open the template in the Azure portal.
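If you'd rather script the deployment than use the portal button, a hedged PowerShell equivalent is shown below; the template URI is a placeholder, not the article's actual template location:

```azurepowershell
# Deploy an ARM template directly from a URI into an existing resource group
New-AzResourceGroupDeployment -ResourceGroupName "TestRG1" `
    -TemplateUri "https://raw.githubusercontent.com/<org>/<repo>/main/azuredeploy.json"
```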
web-application-firewall Tutorial Restrict Web Traffic Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/tutorial-restrict-web-traffic-cli.md
In this article, you learn how to:
If you prefer, you can complete this procedure using [Azure PowerShell](tutorial-restrict-web-traffic-powershell.md). [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
web-application-firewall Tutorial Restrict Web Traffic Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/tutorial-restrict-web-traffic-powershell.md
If you prefer, you can complete this article using the [Azure portal](application-gateway-web-application-firewall-portal.md).
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 1.0.0 or later. Run `Get-Module -ListAvailable Az` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Login-AzAccount` to create a connection with Azure.
web-application-firewall Web Application Firewall Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/web-application-firewall-logs.md
You can monitor Web Application Firewall resources by using logs. You can save performance, access, and other data, or consume it from a resource, for monitoring purposes. ## Diagnostic logs
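For instance, routing the WAF's firewall and access logs to a Log Analytics workspace might look like the following sketch (resource IDs are placeholders, assuming Az.Monitor 3.x):

```azurepowershell
# Placeholder resource IDs for the Application Gateway and the workspace
$agwId = "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Network/applicationGateways/<app-gateway>"
$wsId  = "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace>"

# Enable the WAF firewall and access log categories
$logs = @(
    New-AzDiagnosticSettingLogSettingsObject -Enabled $true -Category "ApplicationGatewayFirewallLog"
    New-AzDiagnosticSettingLogSettingsObject -Enabled $true -Category "ApplicationGatewayAccessLog"
)

New-AzDiagnosticSetting -Name "waf-logs" -ResourceId $agwId -WorkspaceId $wsId -Log $logs
```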