Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
azure-boost | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-boost/overview.md | Title: Overview of Azure Boost description: Learn more about how Azure Boost can improve security and performance of your virtual machines. Last updated 11/07/2023 |
azure-cache-for-redis | Cache How To Premium Persistence | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-premium-persistence.md | After a rewrite, two sets of AOF files exist in storage. Rewrites occur in the background. ### Will having firewall exceptions on the storage account affect persistence? Yes. Using [firewall settings on the storage account](../storage/common/storage-network-security.md) can prevent the persistence feature from working. You can see if there are errors in persisting data by viewing the [Errors metric](monitor-cache-reference.md#azure-cache-for-redis-metrics). This metric indicates if the cache is unable to persist data due to firewall restrictions on the storage account or other problems. To use data persistence with a storage account that has a firewall set up, use [managed identity based authentication](cache-managed-identity.md) to connect to storage. Using managed identity adds the cache instance to the [trusted services list](../storage/common/storage-network-security.md?tabs=azure-portal), making firewall exceptions easier to carry out. If you aren't using managed identity and instead authorize to a storage account using a key, then having firewall exceptions on the storage account tends to break the persistence process. This only applies to persistence in the Premium tier. ### Can I have AOF persistence enabled if I have more than one replica? |
azure-functions | Dotnet Isolated Process Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/dotnet-isolated-process-guide.md | Azure Functions currently can be used with the following "Preview" or "Go-live" .NET versions. In the operating system table of .NET preview versions, the Linux entry changed from .NET 9 Preview 7<sup>1, 2</sup> to .NET 9 RC2<sup>1, 2</sup>. <sup>1</sup> To successfully target .NET 9, your project needs to reference the [2.x versions of the core packages](#version-2x-preview). If using Visual Studio, .NET 9 requires version 17.12 or later. |
azure-large-instances | Faq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-large-instances/faq.md | Title: Azure Large Instances FAQ description: Provides resolutions for common issues that arise in working with Azure Large Instances for the Epic workload. |
azure-large-instances | Work With Azure Large Instances In Azure Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-large-instances/work-with-azure-large-instances-in-azure-portal.md | Title: Work with Azure Large Instances in the Azure portal description: Shows what you can do with Azure Large Instances in the Azure portal. You can submit support requests specifically for Azure Large Instances. Support response depends on the support plan chosen by the customer. For more information, see [Support scope and responsiveness](https://azure.microsoft.com/support/plans/response/). |
azure-netapp-files | Manage Cool Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/manage-cool-access.md | You can enable Azure NetApp Files storage with cool access during the creation of a volume, setting these options: **Enable Cool Access** specifies whether the volume supports cool access; **Coolness Period** specifies the period (in days) after which infrequently accessed data blocks (cold data blocks) are moved to the Azure storage account (the default value is 31 days; the supported values are between 2 and 183 days); **Cool Access Retrieval Policy** specifies under which conditions data moves back to the hot tier and can be set to **Default**, **On-Read**, or **Never**. In a capacity pool enabled for cool access, you can also enable an existing volume for cool access by setting the same options on the **Edit** window. A note was added to the **Coolness Period** description: the coolness period is calculated from the time of volume creation. If any existing volumes are enabled with cool access, the coolness period is applied retroactively on those volumes. This means that if certain data blocks on the volumes have been infrequently accessed for the number of days specified in the coolness period, those blocks move to the cool tier once the feature is enabled. |
azure-resource-manager | Operator Safe Dereference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/operator-safe-dereference.md | A safe-dereference operator applies a member access, `.?<property>`, or element access, `[?<index>]`, operation to its operand. The behavior list gained a new bullet: - If `a` evaluates to `null`, the result of `a.?x` or `a[?x]` is `null`. - If `a` is an object that doesn't have an `x` property, then `a.?x` is `null`. - If `a` is an object that doesn't have an element at index `x`, then `a[?x]` is `null`. - If `a` is an array whose length is less than or equal to `x`, then `a[?x]` is `null`. - If `a` is non-null and has a property named `x`, the result of `a.?x` is the same as the result of `a.x`. - If `a` is non-null and has an element at index `x`, the result of `a[?x]` is the same as the result of `a[x]`. (An illustrative Bicep sketch of this behavior appears after the table.) |
baremetal-infrastructure | Faq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-on-azure/faq.md | Title: FAQ description: Questions frequently asked about NC2 on Azure Last updated 08/15/2024 |
baremetal-infrastructure | Get Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-on-azure/get-started.md | Title: Getting started description: Learn how to sign up, set up, and use Nutanix Cloud Clusters on Azure. Last updated 8/15/2024 |
communication-services | Whatsapp Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/advanced-messaging/whatsapp/whatsapp-overview.md | Added a commented-out survey request include (`<!-- [!INCLUDE [Survey Request](./includes/survey-request.md)] -->`) before ## Advanced Messaging for WhatsApp features |
communication-services | Email Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/email/email-overview.md | Azure Communication Services email enables rich collaboration in communication m… With Azure Communication Services, you can speed up your market entry with scalable and reliable email features by using your own SMTP domains. As with other communication channels, when you use Azure Communication Services to send email, you pay for only what you use. Added a commented-out survey request include (`<!-- [!INCLUDE [Survey Request](./includes/survey-request.md)] -->`) before ## Key principles |
communication-services | Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/reference.md | Added a commented-out survey request include (`<!-- [!INCLUDE [Survey Request](./includes/survey-request.md)] -->`) before ## External links and docs. For each area, we have external pages to track and review our SDKs. You can consult the table below to find the matching page for your SDK of interest. |
communication-services | Concepts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/sms/concepts.md | Added a commented-out survey request include (`<!-- [!INCLUDE [Survey Request](./includes/survey-request.md)] -->`) before ## SMS features. Key features of Azure Communication Services SMS SDKs include: ## Sender types supported Sending SMS to any recipient requires getting a phone number. Choosing the right number type is critical to the success of your messaging campaign. Some factors to consider when choosing a number type include destination(s) of the message, throughput requirement of your messaging campaign, and the timeline when you want to start sending messages. Azure Communication Services enables you to send SMS using various sender types: toll-free number (1-8XX), short codes (12345), and alphanumeric sender ID (CONTOSO). The following table walks you through the features of each number type: |Factors | Toll-Free| Short Code | Dynamic Alphanumeric Sender ID| Preregistered Alphanumeric Sender ID| |--|--|--|--|--| |**Description**|Toll-free numbers are telephone numbers with distinct three-digit codes that can be used for business to consumer communication without any charge to the consumer| Short codes are 5-6 digit numbers used for business to consumer messaging such as alerts, notifications, and marketing | Alphanumeric sender IDs are displayed as a custom alphanumeric phrase like the company's name (CONTOSO, MyCompany) on the recipient handset. Alphanumeric sender IDs can be used for various use cases like one-time passcodes, marketing alerts, and flight status notifications. Dynamic alphanumeric sender ID is supported in countries that don't require registration for use.| Alphanumeric sender IDs are displayed as a custom alphanumeric phrase like the company's name (CONTOSO, MyCompany) on the recipient handset. Alphanumeric sender IDs can be used for various use cases like one-time passcodes, marketing alerts, and flight status notifications. Preregistered alphanumeric sender ID is supported in countries that require registration for use. | |**Format**|+1 (8XX) XYZ PQRS| 12345 | CONTOSO* |CONTOSO* | |**SMS support**|Two-way SMS| Two-way SMS | One-way outbound SMS |One-way outbound SMS | |**Calling support**|Yes| No | No |No | |
communication-services | Teams Interop | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/teams-interop.md | Azure Communication Services can be used to build custom applications and experiences… > [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RWGTqQ] Added a commented-out survey request include (`<!-- [!INCLUDE [Survey Request](./includes/survey-request.md)] -->`) before ## User identity models |
communication-services | Telephony Concept | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/telephony-concept.md | Added a commented-out survey request include (`<!-- [!INCLUDE [Survey Request](./includes/survey-request.md)] -->`) before ## Telephony overview. Whenever your users interact with a traditional telephone number, the Public Switched Telephone Network (PSTN) voice calling handles the call. To make and receive PSTN calls, you need to add telephony capabilities to your Azure Communication Services resource. In this case, signaling and media use a combination of IP-based and PSTN-based technologies to connect your users. Communication Services provides two discrete ways to reach the PSTN network: Voice Calling (PSTN) and Azure direct routing. |
communication-services | Calling Sdk Features | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/calling-sdk-features.md | Added a commented-out survey request include (`<!-- [!INCLUDE [Survey Request](./includes/survey-request.md)] -->`). To build your own user experience with the Calling SDK, check out [Calling quickstarts](../../quickstarts/voice-video-calling/getting-started-with-calling.md) or the [Calling hero sample](../../samples/calling-hero-sample.md). |
communication-services | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/overview.md | Added a commented-out survey request include (`<!-- [!INCLUDE [Survey Request](./includes/survey-request.md)] -->`). Azure Communication Services offers multichannel communication APIs for adding voice, video, chat, text messaging/SMS, email, and more to all your applications. |
communication-services | Send Email | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/email/send-email.md | zone_pivot_groups: acs-azcli-js-csharp-java-python-portal-nocode # Quickstart: How to send an email using Azure Communication Services. Added a commented-out survey request include (`<!-- [!INCLUDE [Survey Request](./includes/survey-request.md)] -->`). This quickstart describes how to send email using our Email SDKs. |
communication-services | Send | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/sms/send.md | zone_pivot_groups: acs-azcli-js-csharp-java-python-logic-apps > [!IMPORTANT] > SMS capabilities depend on the phone number you use and the country/region that you're operating within as determined by your Azure billing address. For more information, see [Subscription eligibility](../../concepts/numbers/sub-eligibility-number-capability.md). Added a commented-out survey request include (`<!-- [!INCLUDE [Survey Request](./includes/survey-request.md)] -->`). If you want to clean up and remove a Communication Services subscription, you can… ## Toll-free verification To use a new toll-free number for sending SMS messages, you must complete a toll-free verification process. For guidance on how to complete the verification of your toll-free number, see the [Quickstart for submitting a toll-free verification](./apply-for-toll-free-verification.md). Only fully verified toll-free numbers are authorized to send out SMS traffic. Any SMS traffic from unverified toll-free numbers directed to US and CA phone numbers is blocked. ## Next steps |
communication-services | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/whats-new.md | Added a commented-out survey request include (`<!-- [!INCLUDE [Survey Request](./includes/survey-request.md)] -->`) before ## May 2024 |
confidential-computing | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/overview.md | Microsoft is one of the founding members of the CCC and provides Trusted Execution Environments (TEEs) in Azure. :::image type="content" source="media/overview/three-states-and-confidential-computing-consortium-definition.png" alt-text="Diagram of three states of data protection, with confidential computing's data in use highlighted."::: Azure already encrypts data at rest and in transit. Confidential computing helps protect data in use, including cryptographic keys. Azure confidential computing helps customers prevent unauthorized access to data in use, including from the cloud operator, by processing data in a hardware-based and attested Trusted Execution Environment (TEE). When Azure confidential computing is enabled and properly configured, Microsoft isn't able to access unencrypted customer data. The threat model aims to reduce trust in, or remove the ability of, a cloud provider operator or other actors in the tenant's domain to access code and data while they're being executed. This is achieved in Azure using a hardware root of trust not controlled by the cloud provider, which is designed to prevent unauthorized access or modification of the environment. |
confidential-computing | Virtual Machine Solutions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/virtual-machine-solutions.md | Title: For Deletion description: For Deletion Previously updated: 11/15/2023 # For Deletion |
cost-management-billing | Azure Openai | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/azure-openai.md | For example, assume that your total consumption of provisioned throughput units… ## Buy a Microsoft Azure OpenAI reservation When you buy a reservation, the current UTC date and time are used to record the transaction. To buy an Azure OpenAI reservation, follow these steps: 1. Sign in to the [Azure portal](https://portal.azure.com/). |
cost-management-billing | Prepare Buy Reservation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/prepare-buy-reservation.md | Resources that run in a subscription with other offer types don't receive the reservation… ## Purchase reservations When you buy a reservation, the current UTC date and time are used to record the transaction. You can purchase reservations from the Azure portal, APIs, PowerShell, and the CLI. Read the following articles that apply to you when you're ready to make a reservation purchase: |
data-factory | Concepts Pipelines Activities | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-pipelines-activities.md | Data transformation activity | Compute environment [Databricks Notebook](transform-data-databricks-notebook.md) | Azure Databricks [Databricks Jar Activity](transform-data-databricks-jar.md) | Azure Databricks [Databricks Python Activity](transform-data-databricks-python.md) | Azure Databricks. A new row was added: [Synapse Notebook Activity](../synapse-analytics/synapse-notebook-activity.md) | Azure Synapse Analytics ## Control flow activities |
data-factory | Connector Deprecation Plan | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-deprecation-plan.md | This article describes future deprecations for some connectors of Azure Data Factory. | Connector|Upgrade Guidance|Release stage |End of Support Date |Disabled Date | |:-- |:-- |:-- |:-- | :-- | | [Google BigQuery (legacy)](connector-google-bigquery-legacy.md) | [Link](connector-google-bigquery.md#upgrade-the-google-bigquery-linked-service) | End of support announced and new version available | October 31, 2024 | / | | [MariaDB (legacy driver version)](connector-mariadb.md) | [Link](connector-mariadb.md#upgrade-the-mariadb-driver-version) | End of support announced and new version available | October 31, 2024 | / | | [MySQL (legacy driver version)](connector-mysql.md) | [Link](connector-mysql.md#upgrade-the-mysql-driver-version) | End of support announced and new version available | October 31, 2024 | / | | [Salesforce (legacy)](connector-salesforce-legacy.md) | [Link](connector-salesforce.md#upgrade-the-salesforce-linked-service) | End of support announced and new version available | October 11, 2024 | / | | [Salesforce Service Cloud (legacy)](connector-salesforce-service-cloud-legacy.md) | [Link](connector-salesforce-service-cloud.md#upgrade-the-salesforce-service-cloud-linked-service) | End of support announced and new version available | October 11, 2024 | / | | [PostgreSQL (legacy)](connector-postgresql-legacy.md) | [Link](connector-postgresql.md#upgrade-the-postgresql-linked-service) | End of support announced and new version available | October 31, 2024 | / | | [ServiceNow (legacy)](connector-servicenow-legacy.md) | [Link](connector-servicenow.md#upgrade-your-servicenow-linked-service) | End of support announced and new version available | December 31, 2024 | / | | [Snowflake (legacy)](connector-snowflake-legacy.md) | [Link](connector-snowflake.md#upgrade-the-snowflake-linked-service) | End of support announced and new version available | October 31, 2024 | / | | [Azure Database for MariaDB](connector-azure-database-for-mariadb.md) | / | End of support announced | December 31, 2024 | December 31, 2024 | | [Concur (Preview)](connector-concur.md) | / | End of support announced | December 31, 2024 | December 31, 2024 | | [Couchbase (Preview)](connector-couchbase.md) | / | End of support announced | December 31, 2024 | December 31, 2024 | This section describes the different release stages and support for each stage. | Release stage |Notes | |:-- |:-- | | End of Support announcement | Before the end of the lifecycle at any stage, an end of support announcement is made.<br><br>Support Service Level Agreements (SLAs) are applicable for End of Support announced connectors, but all customers must upgrade to a new version of the connector no later than the End of Support date.<br><br>During this stage, the existing connectors function as expected, but objects such as linked services can be created only on the new version of the connector. | | End of Support | At this stage, the connector is considered deprecated and no longer supported. Your pipelines won't fail because of the deprecation, but note the following cautions:<br> • No plan to fix bugs. <br> • No plan to add any new features. <br><br> If necessary due to outstanding security issues, or other factors, **Microsoft might expedite moving into the final disabled stage at any time, at Microsoft's discretion**. | | Disabled | All pipelines that are running on legacy version connectors will no longer be able to execute. | ## Legacy connectors with updated connectors or drivers available now |
data-factory | Continuous Integration Delivery | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/continuous-integration-delivery.md | Continuous integration is the practice of testing each change made to your codebase automatically and as early as possible. Continuous delivery follows the testing that happens during continuous integration and pushes changes to a staging or production system. In Azure Data Factory, continuous integration and delivery (CI/CD) means moving Data Factory pipelines from one environment (development, test, production) to another. Azure Data Factory utilizes [Azure Resource Manager templates](../azure-resource-manager/templates/overview.md) to store the configuration of your various ADF entities (pipelines, datasets, data flows, and so on). There are two suggested methods to promote a data factory to another environment: automated deployment using Data Factory's integration with [Azure Pipelines](/azure/devops/pipelines/get-started/what-is-azure-pipelines), or manually uploading a Resource Manager template using Data Factory UX integration with Azure Resource Manager. [!INCLUDE [updated-for-az](~/reusable-content/ce-skilling/azure/includes/updated-for-az.md)] Below is a sample overview of the CI/CD lifecycle in an Azure data factory that's configured with Azure Repos Git. For more information on how to configure a Git repository, see [Source control in Azure Data Factory](source-control.md). 1. A development data factory is created and configured with Azure Repos Git. All developers should have permission to author Data Factory resources like pipelines and datasets. 2. A developer [creates a feature branch](source-control.md#creating-feature-branches) to make a change, and debugs their pipeline runs with their most recent changes. For more information on how to debug a pipeline run, see [Iterative development and debugging with Azure Data Factory](iterative-development-debugging.md). 3. After a developer is satisfied with their changes, they create a pull request from their feature branch to the main or collaboration branch to get their changes reviewed by peers. 4. After a pull request is approved and changes are merged in the main branch, the changes get published to the development factory. 5. When the team is ready to deploy the changes to a test or UAT (User Acceptance Testing) factory, the team goes to their Azure Pipelines release and deploys the desired version of the development factory to UAT. This deployment takes place as part of an Azure Pipelines task and uses Resource Manager template parameters to apply the appropriate configuration. 6. After the changes have been verified in the test factory, deploy to the production factory by using the next task of the pipelines release. > [!NOTE] > Only the development factory is associated with a git repository. The test and production factories shouldn't have a git repository associated with them and should only be updated via an Azure DevOps pipeline or via a Resource Management template. The below image highlights the different steps of this lifecycle. If you're using Git integration with your data factory and have a CI/CD pipeline that moves your changes from development into test and then to production, we recommend these best practices: - **Git integration**. Configure only your development data factory with Git integration. Changes to test and production are deployed via CI/CD and don't need Git integration. - **Pre- and post-deployment script**. Before the Resource Manager deployment step in CI/CD, you need to complete certain tasks, like stopping and restarting triggers and performing cleanup. We recommend that you use PowerShell scripts before and after the deployment task. For more information, see [Update active triggers](continuous-integration-delivery-automate-azure-pipelines.md#updating-active-triggers). The data factory team has [provided a script](continuous-integration-delivery-sample-script.md) to use, located at the bottom of the page. >[!WARNING] >If you don't use the latest versions of PowerShell and the Data Factory module, you may run into deserialization errors while running the commands. - **Integration runtimes and sharing**. Integration runtimes don't change often and are similar across all stages in your CI/CD, so Data Factory expects you to have the same name, type, and subtype of integration runtime across all stages of CI/CD. If you want to share integration runtimes across all stages, consider using a ternary factory just to contain the shared integration runtimes. You can use this shared factory in all of your environments as a linked integration runtime type. >[!Note] >Integration runtime sharing is only available for self-hosted integration runtimes. Azure-SSIS integration runtimes don't support sharing. - **Managed private endpoint deployment**. If a private endpoint already exists in a factory and you try to deploy an ARM template that contains a private endpoint with the same name but with modified properties, the deployment fails. In other words, you can successfully deploy a private endpoint as long as it has the same properties as the one that already exists in the factory. If any property is different between environments, you can override it by parameterizing that property and providing the respective value during deployment. - **Key Vault**. When you use linked services whose connection information is stored in Azure Key Vault, we recommend keeping separate key vaults for different environments. You can also configure separate permission levels for each key vault. For example, you might not want your team members to have permissions to production secrets. If you follow this approach, we recommend that you keep the same secret names across all stages. If you keep the same secret names, you don't need to parameterize each connection string across CI/CD environments because the only thing that changes is the key vault name, which is a separate parameter. - **Resource naming**. Due to ARM template constraints, issues in deployment may arise if your resources contain spaces in the name. The Azure Data Factory team recommends using '_' or '-' characters instead of spaces for resources. For example, 'Pipeline_1' would be a preferable name over 'Pipeline 1'. - **Altering repository**. ADF manages Git repository content automatically. Manually altering or adding unrelated files or folders anywhere in the ADF Git repository data folder can cause resource loading errors. For example, the presence of *.bak* files can cause ADF CI/CD errors, so they should be removed for ADF to load. - **Exposure control and feature flags**. When working in a team, there are instances where you may merge changes but don't want them to run in elevated environments such as PROD and QA. To handle this scenario, the ADF team recommends [the DevOps concept of using feature flags](/devops/operate/progressive-experimentation-feature-flags). In ADF, you can combine [global parameters](author-global-parameters.md) and the [if condition activity](control-flow-if-condition-activity.md) to hide sets of logic based upon these environment flags. To learn how to set up a feature flag, see this video tutorial: >[!VIDEO https://www.microsoft.com/videoplayer/embed/RE4IxdW] ## Unsupported features - By design, Data Factory doesn't allow cherry-picking of commits or selective publishing of resources. Publishes include all changes made in the data factory. Data factory entities depend on each other; for example, triggers depend on pipelines, and pipelines depend on datasets and other pipelines, so selective publishing of a subset of resources could lead to unexpected behaviors and errors. On rare occasions when you need selective publishing, consider using a hotfix. For more information, see [Hotfix production environment](continuous-integration-delivery-hotfix-environment.md). - The Azure Data Factory team doesn't recommend assigning Azure RBAC controls to individual entities (pipelines, datasets, and so on) in a data factory. For example, if a developer has access to a pipeline or a dataset, they should be able to access all pipelines or datasets in the data factory. If you feel that you need to implement many Azure roles within a data factory, look at deploying a second data factory. - You can't publish from private branches. - You can't currently host projects on Bitbucket. - You can't currently export and import alerts and matrices as parameters. - Partial ARM templates in your publish branch are no longer supported as of November 1, 2021. If your project used this feature, switch to a supported mechanism for deployments using the `ARMTemplateForFactory.json` or `linkedTemplates` files. :::image type="content" source="media/continuous-integration-delivery/partial-arm-templates-folder.png" alt-text="Diagram of 'PartialArmTemplates' folder."::: ## Related content - [Continuous deployment improvements](continuous-integration-delivery-improvements.md#continuous-deployment-improvements) |
data-factory | Source Control | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/source-control.md | For repositories owned by a GitHub organization account, the admin has to authorize… :::image type="content" source="media/author-visually/github-configure-repository-pane.png" alt-text="Screenshot showing GitHub Configure a repository pane."::: A note was added: If you encounter the error ***Failed to list GitHub repositories. Please make sure the account name is correct and you have permission to perform the action.***, ensure you're using the correct owner name, and not the GitHub repository URL. For example, if the repository URL is **https://github.com/contoso/contoso-ads**, the owner is **contoso**, not the full URL. :::image type="content" source="media/author-visually/use-github-enterprise-server-pane.png" alt-text="Screenshot showing GitHub Configure a repository using enterprise server pane."::: :::image type="content" source="media/author-visually/github-integration-image2.png" alt-text="GitHub repository settings"::: Below are some examples of situations that can cause a stale publish branch: - A user has multiple branches. In one feature branch, they deleted a linked service that isn't AKV associated (non-AKV linked services are published immediately regardless of whether they're in Git or not) and never merged the feature branch into the collaboration branch. - A user modified the data factory using the SDK or PowerShell. - A user moved all resources to a new branch and tried to publish for the first time. Linked services should be created manually when importing resources. - A user uploads a non-AKV linked service or an Integration Runtime JSON manually. They reference that resource from another resource such as a dataset, linked service, or pipeline. A non-AKV linked service created through the user interface is published immediately because the credentials need to be encrypted. If you upload a dataset referencing that linked service and try to publish, the user interface allows it because it exists in the git environment. It will be rejected at publish time since it doesn't exist in the data factory service. If the publish branch is out of sync with the main branch and contains out-of-date resources despite a recent publish, you can use either of the below solutions: |
data-factory | Tutorial Pipeline Return Value | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-pipeline-return-value.md | Value of object type is defined as follows: ## Retrieving Value in Calling Pipeline The pipeline return value of the child pipeline becomes the activity output of the Execute Pipeline Activity. You can retrieve the information with _@activity('Execute Pipeline1').output.pipelineReturnValue.keyName_. The use cases are limitless. For instance, you can use: * An _int_ value from the child pipeline to define the wait period for a [wait activity](control-flow-wait-activity.md). * A _string_ value to define the URL for the [Web activity](control-flow-web-activity.md). * An _expression_ value payload for a [script activity](transform-data-using-script.md) for logging purposes. :::image type="content" source="media/pipeline-return-value/pipeline-return-value-03-calling-pipeline.png" alt-text="Screenshot shows the calling pipeline."::: There are two noticeable callouts in referencing the pipeline return values: 1. With _Object_ type, you can further expand into the nested JSON object, such as _@activity('Execute Pipeline1').output.pipelineReturnValue.keyName.nextLevelKey_. 2. With _Array_ type, you can specify the index in the list, with _@activity('Execute Pipeline1').output.pipelineReturnValue.keyName[0]_. The index is zero-based, meaning that it starts with 0. > [!NOTE] > Make sure that the _keyName_ you're referencing exists in your child pipeline. The ADF expression builder can _not_ confirm the referential check for you. ## Special Considerations * While you can include multiple Set Pipeline Return Value activities in a pipeline, it's important to ensure that only one of them is executed in the pipeline. :::image type="content" source="media/pipeline-return-value/pipeline-return-value-04-multiple.png" alt-text="Screenshot with Pipeline Return Value and Branching."::: To avoid the previously described missing key problem in the calling pipeline, we encourage you to have the same list of keys for all branches in the child pipeline. Consider using _null_ types for keys that don't have values in a specific branch. * The Azure Data Factory expression language doesn't directly support inline JSON objects. Instead, it's necessary to concatenate strings and expressions properly. For example, for the JSON expression `{ "datetime": "@{utcnow()}", "date": "@{substring(utcnow(),0,10)}", "year": "@{substring(utcnow(),0,4)}", "month": "@{substring(utcnow(),5,2)}", "day": "@{substring(utcnow(),8,2)}" }`, an equivalent Azure Data Factory expression would be `@{ concat( '{', '"datetime": "', utcnow(), '", ', '"date": "', substring(utcnow(),0,10), '", ', '"year": "', substring(utcnow(),0,4), '", ', '"month": "', substring(utcnow(),5,2), '", ', '"day": "', substring(utcnow(),8,2), '"', '}' ) }`. ## Related content Learn about related control flow activities: * [Set Variable Activity](control-flow-set-variable-activity.md) * [Append Variable Activity](control-flow-append-variable-activity.md) |
databox-online | Azure Stack Edge Gpu Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-overview.md | |
databox-online | Azure Stack Edge Mini R Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-mini-r-overview.md | |
databox-online | Azure Stack Edge Pro 2 Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-2-overview.md | |
databox-online | Azure Stack Edge Pro R Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-r-overview.md | |
deployment-environments | Concept Azure Developer Cli With Deployment Environments | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/concept-azure-developer-cli-with-deployment-environments.md | Last updated 02/24/2024. In this article, you learn about Azure Developer CLI (`azd`) and how it works with Azure Deployment Environments (ADE) to provision application infrastructure and deploy application code to the new infrastructure. `azd` is an open-source command-line tool that provides developer-friendly commands that map to key stages in your workflow. You can install `azd` locally on your machine or use it in other environments. With ADE, you can create environments from an environment definition in a catalog attached to your dev center. By adding `azd`, you can deploy your application code to the new infrastructure. At scale, using ADE and `azd` together enables you to provide a way for developers… The Azure Developer CLI commands are designed to work with standardized templates. Each template is a code repository that adheres to specific file and folder conventions. The templates contain the assets `azd` needs to provision an Azure Deployment Environments environment. When you run a command like `azd up`, the tool uses the template assets to execute various workflow steps, such as provisioning or deploying resources to Azure. The following shows a typical template structure: ├── infra [ Contains infrastructure as code files ] All `azd` templates include the following assets: - *infra folder* - The infra folder is not used in `azd` with ADE. It contains all of the Bicep or Terraform infrastructure as code files for the `azd` template, but ADE provides the infrastructure as code files for the `azd` template, so you don't need to include these files in your `azd` template. - *azure.yaml file* - A configuration file that defines one or more services in your project and maps them to Azure resources for deployment. For example, you might define an API service and a web front-end service, each with attributes that map them to different Azure resources for deployment. Most `azd` templates also optionally include one or more of the following folders: - *.devcontainer folder* - Allows you to set up a Dev Container environment for your application. This is a common development environment approach that isn't specific to `azd`. - *.github folder* - Holds the CI/CD workflow files for GitHub Actions, the default CI/CD provider for `azd`. - *.azdo folder* - If you decide to use Azure Pipelines for CI/CD, define the workflow configuration files in this folder. To learn more about how to make your ADE environment definition compatible with `azd`… ## Enable `azd` support in ADE To enable `azd` support with ADE, you need to set `platform.type` to `devcenter`. This configuration allows `azd` to use new dev center components for remote environment state and provisioning, and means that the infra folder in your templates is ignored. Instead, `azd` uses one of the infrastructure templates defined in your dev center catalog for resource provisioning. To enable `azd` support, run the following command: … ### Explore `azd` commands When the dev center feature is enabled, the default behavior of some common `azd` commands changes to work with these remote environments. For more information, see [Work with Azure Deployment Environments](/azure/developer/azure-developer-cli/ade-integration?branch=main#work-with-azure-deployment-evironments). ## Related content |
deployment-environments | How To Configure Azure Developer Cli Deployment Environments | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-configure-azure-developer-cli-deployment-environments.md | Title: Configure Azure Developer CLI templates for use with ADE -description: Understand how ADE and AZD work together to provision application infrastructure and deploy application code to the new infrastructure. + Title: Create an environment with Azure Developer CLI +description: Use an `azd` template to provision application infrastructure and deploy application code to the new infrastructure. Previously updated : 03/26/2024 Last updated : 10/11/2024 -# Customer intent: As a platform engineer, I want to use ADE and AZD together to provision application infrastructure and deploy application code to the new infrastructure. +# Customer intent: As a developer, I want to use ADE and `azd` together to provision application infrastructure and deploy application code to the new infrastructure. # Configure Azure Developer CLI with Azure Deployment Environments -In this article, you create a new environment from an existing Azure Developer CLI (AZD) compatible template by using AZD. You learn how to configure Azure Deployment Environments (ADE) and AZD to work together to provision application infrastructure and deploy application code to the new infrastructure. +In this article, you create a new environment from an existing Azure Developer CLI (`azd`) compatible template by using `azd`. You learn how to configure Azure Deployment Environments (ADE) and `azd` to work together to provision application infrastructure and deploy application code to the new infrastructure. -To learn the key concepts of how AZD and ADE work together, see [Use Azure Developer CLI with Azure Deployment Environments](concept-azure-developer-cli-with-deployment-environments.md). +To learn the key concepts of how `azd` and ADE work together, see [Use Azure Developer CLI with Azure Deployment Environments](concept-azure-developer-cli-with-deployment-environments.md). ## Prerequisites To learn the key concepts of how AZD and ADE work together, see [Use Azure Devel ## Attach Microsoft quick start catalog -Microsoft provides a quick start catalog that contains a set of AZD compatible templates that you can use to create environments. You can attach the quick start catalog to your dev center at creation or add it later. The quick start catalog contains a set of templates that you can use to create environments. +Microsoft provides a quick start catalog that contains a set of `azd` compatible templates that you can use to create environments. You can attach the quick start catalog to your dev center at creation or add it later. The quick start catalog contains a set of templates that you can use to create environments. -## Examine an AZD compatible template +## Examine an `azd` compatible template -You can use an existing AZD compatible template to create a new environment, or you can add an azure.yaml file to your repository. In this section, you examine an existing AZD compatible template. +You can use an existing `azd` compatible template to create a new environment, or you can add an azure.yaml file to your repository. In this section, you examine an existing `azd` compatible template. -AZD provisioning for environments relies on curated templates from the catalog. 
Templates in the catalog might assign tags to provisioned Azure resources for you to associate your app services with in the azure.yaml file, or specify the resources explicitly. In this example, resources are specified explicitly. +`azd` provisioning for environments relies on curated templates from the catalog. Templates in the catalog might assign tags to provisioned Azure resources so that you can associate your app services with them in the azure.yaml file, or they might specify the resources explicitly. In this example, resources are specified explicitly. For more information on tagging resources, see [Tagging resources for Azure Deployment Environments](/azure/developer/azure-developer-cli/ade-integration#tagging-resources-for-azure-deployment-environments). For more information on tagging resources, see [Tagging resources for Azure Depl :::image type="content" source="media/how-to-configure-azure-developer-cli-deployment-environments/catalog-url.png" alt-text="Screenshot of Azure portal showing the catalogs attached to a dev center, with clone URL highlighted." lightbox="media/how-to-configure-azure-developer-cli-deployment-environments/catalog-url.png"::: -1. To view the quick start catalog in GitHub, paste the **Clone URL** into the address bar and press Enter. +1. To view the quick start catalog in GitHub, paste the **Clone URL** into the address bar and press Enter. Or, you can use the following URL: [Microsoft quick start catalog](https://aka.ms/deployment-environments/quickstart-catalog). 1. In the GitHub repository, navigate to the **Environment-Definitions/ARMTemplates/Function-App-with-Cosmos_AZD-template** folder. For more information on tagging resources, see [Tagging resources for Azure Depl 1. In the azure.yaml file, in the **services** section, you see the **web** and **API** services that are defined in the template. > [!NOTE]-> Not all AZD compatible catalogs use the linked templates structure shown in the example. You can use a single catalog for all your environments by including the azure.yaml file. Using multiple catalogs and code repositories allows you more flexibility in configuring secure access for platform engineers and developers. +> Not all `azd` compatible catalogs use the linked templates structure shown in the example. You can use a single catalog for all your environments by including the azure.yaml file. Using multiple catalogs and code repositories allows you more flexibility in configuring secure access for platform engineers and developers. If you're working with your own catalog and environment definition, you can create an azure.yaml file in the root of your repository. Use the azure.yaml file to define the services that you want to deploy to the environment. ## Create an environment from an existing template -Use an existing AZD compatible template to create a new environment. +Use an existing `azd` compatible template to create a new environment. -### Prepare to work with AZD +### Prepare to work with `azd` -When you work with AZD for the first time, there are some one-time setup tasks you need to complete. These tasks include installing the Azure Developer CLI, signing in to your Azure account, and enabling AZD support for Azure Deployment Environments. +When you work with `azd` for the first time, there are some one-time setup tasks you need to complete. These tasks include installing the Azure Developer CLI, signing in to your Azure account, and enabling `azd` support for Azure Deployment Environments. 
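As a quick reference for the one-time setup tasks just mentioned, here's a condensed sketch for Linux/macOS shells; the install script URL is the documented `aka.ms` shortlink, and Windows users can use the PowerShell installer shown later instead:

```bash
# Install the Azure Developer CLI
curl -fsSL https://aka.ms/install-azd.sh | bash

# Sign in; a browser window opens to complete authentication
azd auth login
```

The tabbed subsections that follow walk through the same steps in Visual Studio Code and at the CLI, plus enabling `azd` support for ADE.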
-#### Install the Azure Developer CLI extension for Visual Studio Code +#### Install the Azure Developer CLI extension -When you install AZD, the AZD tools are installed within an AZD scope rather than globally, and are removed if AZD is uninstalled. You can install AZD in Visual Studio Code or from the command line. +When you install `azd`, the `azd` tools are installed within an `azd` scope rather than globally, and are removed if `azd` is uninstalled. You can install `azd` in Visual Studio Code, from the command line, or in Visual Studio. # [Visual Studio Code](#tab/visual-studio-code) powershell -ex AllSigned -c "Invoke-RestMethod 'https://aka.ms/install-azd.ps1' Access your Azure resources by logging in. When you initiate a login, a browser window opens and prompts you to log in to Azure. After you sign in, the terminal displays a message that you're signed in to Azure. -Sign in to AZD using the command palette: +Sign in to `azd` using the command palette: # [Visual Studio Code](#tab/visual-studio-code) Sign in to Azure at the CLI using the following command: -#### Enable AZD support for ADE +#### Enable `azd` support for ADE -When `platform.type` is set to `devcenter`, all AZD remote environment state and provisioning uses dev center components. AZD uses one of the infrastructure templates defined in your dev center catalog for resource provisioning. In this configuration, the *infra* folder in your local templates isn't used. +When `platform.type` is set to `devcenter`, all `azd` remote environment state and provisioning use dev center components. `azd` uses one of the infrastructure templates defined in your dev center catalog for resource provisioning. In this configuration, the *infra* folder in your local templates isn't used. # [Visual Studio Code](#tab/visual-studio-code) When `platform.type` is set to `devcenter`, all AZD remote environment state and ### Create a new environment -Now you're ready to create an environment to work in. You begin with an existing template. ADE defines the infrastructure for your application, and the AZD template provides sample application code. +Now you're ready to create an environment to work in. You begin with an existing template. ADE defines the infrastructure for your application, and the `azd` template provides sample application code. # [Visual Studio Code](#tab/visual-studio-code) Now you're ready to create an environment to work in. 
You begin with an existing :::image type="content" source="media/how-to-configure-azure-developer-cli-deployment-environments/initialize-select-environment-definition.png" alt-text="Screenshot of the Azure Developer terminal, showing prompt to select an environment definition." lightbox="media/how-to-configure-azure-developer-cli-deployment-environments/initialize-select-environment-definition.png"::: - AZD creates the project resources, including an *azure.yaml* file in the root of your project. + `azd` creates the project resources, including an *azure.yaml* file in the root of your project. # [Azure Developer CLI](#tab/azure-developer-cli) 1. At the CLI, navigate to an empty folder. -1. To list the templates available, in the AZD terminal, run the following command: +1. To list the templates available, in the `azd` terminal, run the following command: ```bash azd template list Now you're ready to create an environment to work in. You begin with an existing ```bash azd init ```-1. In the AZD terminal, enter an environment name. +1. In the `azd` terminal, enter an environment name. :::image type="content" source="media/how-to-configure-azure-developer-cli-deployment-environments/enter-environment-name.png" alt-text="Screenshot of the Azure Developer terminal, showing prompt for a new environment name." lightbox="media/how-to-configure-azure-developer-cli-deployment-environments/enter-environment-name.png"::: Now you're ready to create an environment to work in. You begin with an existing :::image type="content" source="media/how-to-configure-azure-developer-cli-deployment-environments/initialize-select-environment-definition.png" alt-text="Screenshot of the Azure Developer terminal, showing prompt to select an environment definition." lightbox="media/how-to-configure-azure-developer-cli-deployment-environments/initialize-select-environment-definition.png"::: - AZD creates the project resources, including an *azure.yaml* file in the root of your project. + `azd` creates the project resources, including an *azure.yaml* file in the root of your project. ## Configure your devcenter -You can define AZD settings for your dev centers so that you don't need to specify them each time you update an environment. In this example, you define the names of the catalog, dev center, and project that you're using for your environment. +You can define `azd` settings for your dev centers so that you don't need to specify them each time you update an environment. In this example, you define the names of the catalog, dev center, and project that you're using for your environment. 1. In Visual Studio Code, navigate to the *azure.yaml* file in the root of your project. To learn more about the settings you can configure, see [Configure dev center se ## Provision your environment -You can use AZD to provision and deploy resources to your deployment environments using commands like `azd up` or `azd provision`. +You can use `azd` to provision and deploy resources to your deployment environments using commands like `azd up` or `azd provision`. To learn more about provisioning your environment, see [Create an environment by using the Azure Developer CLI](how-to-create-environment-with-azure-developer.md#provision-infrastructure-to-azure-deployment-environment). -To how common AZD commands work with ADE, see [Work with Azure Deployment Environments](/azure/developer/azure-developer-cli/ade-integration?branch=main#work-with-azure-deployment-evironments). 
+To learn how common `azd` commands work with ADE, see [Work with Azure Deployment Environments](/azure/developer/azure-developer-cli/ade-integration?branch=main#work-with-azure-deployment-evironments). ## Related content |
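To illustrate the dev center settings described in this article, here's a sketch of the `platform` block in azure.yaml. The field names follow `azd`'s dev center configuration as commonly documented, and every value is a placeholder for your own catalog, dev center, and project names:

```yaml
# azure.yaml (project root) - dev center settings used by azd (placeholder values)
platform:
  type: devcenter
  config:
    name: contoso-devcenter        # dev center name
    project: contoso-dev-project   # ADE project name
    catalog: quickstart-catalog    # catalog that holds the environment definitions
```

With these values set, commands such as `azd up` and `azd provision` no longer prompt for them on every run.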
dev-box | How To Add Project Pool Display Name | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-add-project-pool-display-name.md | Title: Add a display name to projects and pools description: Learn how to make resources clearer for developers by adding display names to projects and pools. -+ Last updated 05/28/2024 |
hdinsight | Hdinsight Component Versioning | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-component-versioning.md | Title: Open-source components and versions - Azure HDInsight description: Learn about the open-source components and versions in Azure HDInsight. Previously updated : 10/04/2024 Last updated : 11/04/2024 # Azure HDInsight versions Support is defined as a time period that a HDInsight version supported by Micros | Use existing cluster without support | Yes | Yes | Yes | | Create Cluster | Yes | Yes | No | | Scale up/down cluster | Yes | Yes | No |-| Troubleshoot runtime issues | No | No | No | -| RCA | No | No | No | -| Performance Tuning | No | No | No | -| Assistance in onboarding | No | No | No | -| Spark core issues/updates | No | No | No | -| Security/CVE updates | No | No | No | +| Troubleshoot runtime issues | Yes | No | No | +| RCA | Yes | No | No | +| Performance Tuning | Yes | No | No | +| Assistance in onboarding | Yes | No | No | +| Spark core issues/updates | Yes | No | No | +| Security/CVE updates | Yes | No | No | ### Standard support |
internet-peering | Howto Legacy Direct Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-peering/howto-legacy-direct-portal.md | -> [!div class="op_single_selector"] -> - [Azure portal](howto-legacy-direct-portal.md) -> - [PowerShell](howto-legacy-direct-powershell.md) +In this article, you learn how to convert an existing legacy Direct peering to an Azure resource by using the Azure portal. -This article describes how to convert an existing legacy Direct peering to an Azure resource by using the Azure portal. +If you prefer, you can complete this guide using [PowerShell](howto-legacy-direct-powershell.md). -If you prefer, you can complete this guide by using [PowerShell](howto-legacy-direct-powershell.md). +## Prerequisites -## Before you begin +- Complete the [Prerequisites to set up peering with Microsoft](prerequisites.md) before you begin configuration. -* Review the [prerequisites](prerequisites.md) and the [Direct peering walkthrough](walkthrough-direct-all.md) before you begin configuration. +- A legacy Direct peering in your subscription. -## Convert a legacy Direct peering to an Azure resource +## Sign in to Azure -### Sign in to the portal and select your subscription +Sign in to the [Azure portal](https://portal.azure.com). -### <a name=create></a>Convert a legacy Direct peering +## Convert a legacy Direct peering -As an Internet Service Provider, you can convert legacy direct peering connections by using [Creating a Peering]( https://go.microsoft.com/fwlink/?linkid=2129593). +As an Internet Service Provider, you can convert a legacy direct peering connection to an Azure resource using the Azure portal: -1. On the **Create a Peering** page, on the **Basics** tab, fill in the boxes as shown here: +1. In the search box at the top of the portal, enter ***peering***. Select **Peerings** from the search results. - > [!div class="mx-imgBorder"] - > ![Register Peering Service](./media/setup-basics-tab.png) +1. On the **Peerings** page, select **+ Create**. -* Select your Azure Subscription. +1. On the **Basics** tab of the **Create a Peering** page, enter or select the following values: -* For Resource group, you can either choose an existing resource group from the drop-down list or create a new group by selecting Create new. We'll create a new resource group for this example. + | Setting | Value | + | | | + | **Project Details** | | + | Subscription | Select your Azure subscription. | + | Resource Group | Select a resource group or create a new one. | + | **Instance details** | | + | Name | Enter a name for the peering you're creating. | + | Peer ASN | Select your ASN. | -* Name corresponds to the resource name and can be anything you choose. + :::image type="content" source="./media/howto-legacy-direct-portal/peering-basics.png" alt-text="Screenshot that shows the Basics tab of creating a peering in the Azure portal." lightbox="./media/howto-legacy-direct-portal/peering-basics.png"::: -* Region is auto-selected if you chose an existing resource group. If you chose to create a new resource group, you also need to choose the Azure region where you want the resource to reside. + >[!IMPORTANT] + >You can only choose an ASN with ValidationState as Approved before you submit a peering request. If you just submitted your Peer ASN request, wait for 12 hours or so for ASN association to be approved. If the ASN you select is pending validation, you'll see an error message. 
If you don't see the ASN you need to choose, check that you selected the correct subscription. If so, check if you have already created Peer ASN by using **[Associate Peer ASN to Azure subscription](https://go.microsoft.com/fwlink/?linkid=2129592)**. ->[!NOTE] ->The region where a resource group resides is independent of the location where you want to create peering with Microsoft. But it's a best practice to organize your peering resources within resource groups that reside in the closest Azure regions. For example, for peerings in Ashburn, you can create a resource group in East US or East US2. +1. Select **Next : Configuration >**. -* Select your ASN in the **Peer ASN** box. +1. On the **Configuration** tab, enter or select the following values: ->[!IMPORTANT] ->You can only choose an ASN with ValidationState as Approved before you submit a peering request. If you just submitted your Peer ASN request, wait for 12 hours or so for ASN association to be approved. If the ASN you select is pending validation, you'll see an error message. If you don't see the ASN you need to choose, check that you selected the correct subscription. If so, check if you have already created Peer ASN by using **[Associate Peer ASN to Azure subscription](https://go.microsoft.com/fwlink/?linkid=2129592)**. + | Setting | Value | + | | | + | Peering type | Select **Direct**. | + | Microsoft network | Select **AS8075**. | + | SKU | Select **Basic Free**. Don't select **Premium Free** as it's reserved for special applications. | + | Metro | Select the metro location where you want to convert a peering to an Azure resource. If you have peering connections with Microsoft in the selected location that aren't converted, you can see them listed in the **Peering connections**. | -#### Launch the resource and configure basic settings + :::image type="content" source="./media/howto-legacy-direct-portal/peering-configuration.png" alt-text="Screenshot that shows the Configuration tab of creating a peering in the Azure portal." lightbox="./media/howto-legacy-direct-portal/peering-configuration.png"::: -#### Configure connections and submit + > [!NOTE] + > If you want to add additional peering connections with Microsoft in the selected **Metro** location, select **Create new**. For more information, see [Create or modify a Direct peering by using the portal](howto-direct-portal.md). -### <a name=get></a>Verify Direct peering +1. Select **Review + create**. ++1. Review the settings, and then select **Create**. ++## Verify Direct peering ++1. Go to the **Peering** resource you created in the previous section. ++1. Under **Settings**, select **Connections** to see a summary of peering connections between your ASN and Microsoft. ++ :::image type="content" source="./media/howto-legacy-direct-portal/peering-connections.png" alt-text="Screenshot that shows the peering connections in the Azure portal." lightbox="./media/howto-legacy-direct-portal/peering-connections.png"::: ++ - **Connection State** corresponds to the state of the peering connection setup. The states displayed in this field follow the state diagram shown in the [Direct peering walkthrough](walkthrough-direct-all.md). + - **IPv4 Session State** and **IPv6 Session State** correspond to the IPv4 and IPv6 BGP session states, respectively. 
## Related content -- [Create or modify a Direct peering by using the portal](howto-direct-portal.md).-- [Internet peering frequently asked questions (FAQ)](faqs.md).+- [Create or modify a Direct peering by using the portal](howto-direct-portal.md) +- [Internet peering frequently asked questions (FAQ)](faqs.md) |
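Although this article walks through the portal, you can also confirm the converted peering resource from the command line. A sketch using the optional `peering` extension for Azure CLI; the extension and command names follow the published Azure CLI extension index, and the resource group name is a placeholder:

```azurecli
# One-time: add the peering management extension
az extension add --name peering

# List peerings in the resource group to confirm the converted resource
az peering list --resource-group MyResourceGroup --output table
```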
iot-dps | Concepts Device Oem Security Practices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/concepts-device-oem-security-practices.md | Title: Security practices for manufacturers - Azure IoT Device Provisioning Serv description: Provides an overview of common security practices for OEMs and device manufacturers who prepare devices to enroll in Azure IoT Device Provisioning Service (DPS). Previously updated : 3/02/2020 Last updated : 10/11/2024 As more manufacturers release IoT devices, it's helpful to identify guidance aro > * Integrating a Trusted Platform Module (TPM) into the manufacturing process ## Selecting device authentication options-The ultimate aim of any IoT device security measure is to create a secure IoT solution. But issues such as hardware limitations, cost, and level of security expertise all impact which options you choose. Further, your approach to security impacts how your IoT devices connect to the cloud. While there are [several elements of IoT security](https://www.microsoft.com/research/publication/seven-properties-highly-secure-devices/) to consider, a key element that every customer encounters is what authentication type to use. +The ultimate aim of any IoT device security measure is to create a secure IoT solution. But issues such as hardware limitations, cost, and level of security expertise all affect which options you choose. Further, your approach to security impacts how your IoT devices connect to the cloud. While there are [several elements of IoT security](https://www.microsoft.com/research/publication/seven-properties-highly-secure-devices/) to consider, a key element that every customer encounters is what authentication type to use. -Three widely used authentication types are X.509 certificates, Trusted Platform Modules (TPM), and symmetric keys. While other authentication types exist, most customers who build solutions on Azure IoT use one of these three types. The rest of this article surveys pros and cons of using each authentication type. +Three widely used authentication types are X.509 certificates, Trusted Platform Modules (TPM), and symmetric keys. While other authentication types exist, most customers who build solutions on Azure IoT use one of these three types. The rest of this article reviews the pros and cons of using each authentication type. ### X.509 certificate X.509 certificates are a type of digital identity you can use for authentication. The X.509 certificate standard is documented in [IETF RFC 5280](https://tools.ietf.org/html/rfc5280). In Azure IoT, there are two ways to authenticate certificates: Pros for X.509: - Many vendors are available to provide X.509 based authentication solutions. Cons for X.509:-- Many customers may need to rely on external vendors for their certificates.+- Many customers might need to rely on external vendors for their certificates. - Certificate management can be costly and adds to total solution cost.-- Certificate life-cycle management can be difficult if logistics are not well thought out. +- Certificate life-cycle management can be difficult if logistics aren't well thought out. ### Trusted Platform Module (TPM) TPM, also known as [ISO/IEC 11889](https://www.iso.org/standard/66510.html), is a standard for securely generating and storing cryptographic keys. TPM also refers to a virtual or physical I/O device that interacts with modules that implement the standard. A TPM device can exist as discrete hardware, integrated hardware, a firmware-based module, or a software-based module. 
Pros for TPM: Cons for TPM: - TPMs are complex and can be difficult to use. - Application development with TPMs is difficult unless you have a physical TPM or a quality emulator.-- You may have to redesign the board of your device to include a TPM in the hardware. +- You might have to redesign the board of your device to include a TPM in the hardware. - If you roll the EK on a TPM, it destroys the identity of the TPM and creates a new one. Although the physical chip stays the same, it has a new identity in your IoT solution. ### Symmetric key Pro for shared symmetric key: - Simple to implement and inexpensive to produce at scale. Cons for shared symmetric key: -- Highly vulnerable to attack. The benefit of easy implementation is far outweighed by the risk. +- Highly vulnerable to attack. The risk of attack outweighs the benefit of easy implementation. - Anyone can impersonate your devices if they obtain the shared key.-- If you rely on a shared symmetric key that becomes compromised, you will likely lose control of the devices. +- If you rely on a shared symmetric key that becomes compromised, you'll likely lose control of the devices. ### Making the right choice for your devices To choose an authentication method, make sure you consider the benefits and costs of each approach for your unique manufacturing process. For device authentication, usually there's an inverse relationship between how secure a given approach is, and how convenient it is. ## Installing certificates on IoT devices-If you use X.509 certificates to authenticate your IoT devices, this section offers guidance on how to integrate certificates into your manufacturing process. You'll need to make several decisions. These include decisions about common certificate variables, when to generate certificates, and when to install them. +If you use X.509 certificates to authenticate your IoT devices, this section offers guidance on how to integrate certificates into your manufacturing process. You need to make several decisions, including decisions about common certificate variables, when to generate certificates, and when to install them. -If you're used to using passwords, you might ask why you can't use the same certificate in all your devices, in the same way that you'd be able to use the same password in all your devices. First, using the same password everywhere is dangerous. The practice has exposed companies to major DDoS attacks, including the one that took down DNS on the US East Coast several years ago. Never use the same password everywhere, even with personal accounts. Second, a certificate isn't a password, it's a unique identity. A password is like a secret code that anyone can use to open a door at a secured building. It's something you know, and you could give the password to anyone to gain entrance. A certificate is like a driver's license with your photo and other details, which you can show to a guard to get into a secured building. It's tied to who you are. Provided that the guard accurately matches people with driver's licenses, only you can use your license (identity) to gain entrance. +If you're accustomed to using passwords, you might wonder why you can't use the same certificate across all your devices, similar to how you might use the same password. First, using the same password everywhere is risky and has led to significant DDoS attacks, such as the one that disrupted DNS on the US East Coast several years ago. Never reuse passwords, even for personal accounts. 
Second, a certificate isn't a password; it's a unique identity. A password is like a secret code that anyone can use to unlock a door in a secure building: it's something you know and can share. In contrast, a certificate is like a driver's license with your photo and details, which you show to a guard to gain access. It's tied to your identity. As long as the guard correctly matches people with their licenses, only you can use your license (identity) to gain entry. ### Variables involved in certificate decisions Consider the following variables, and how each one impacts the overall manufacturing process. It can be costly and complex to manage a public key infrastructure (PKI). Espec - Use the [Azure Sphere](https://azure.microsoft.com/services/azure-sphere/) security service. This option applies only to Azure Sphere devices. #### Where certificates are stored-There are a few factors that impact the decision on where certificates are stored. These factors include the type of device, expected profit margins (whether you can afford secure storage), device capabilities, and existing security technology on the device that you may be able to use. Consider the following options: +There are a few factors that affect the decision on where certificates are stored. These factors include the type of device, expected profit margins (whether you can afford secure storage), device capabilities, and existing security technology on the device that you might be able to use. Consider the following options: - In a hardware security module (HSM). Using an HSM is highly recommended. Check whether your device's control board already has an HSM installed. If you know you don't have an HSM, work with your hardware manufacturer to identify an HSM that meets your needs. - In a secure place on disk such as a trusted execution environment (TEE). - In the local file system or a certificate store. For example, the Windows certificate store. #### Connectivity at the factory-Connectivity at the factory determines how and when you'll get the certificates to install on the devices. Connectivity options are as follows: +Connectivity at the factory determines how and when you get the certificates to install on the devices. Connectivity options are as follows: - Connectivity. Having connectivity is optimal; it streamlines the process of generating certificates locally. - No connectivity. In this case, you use a signed certificate from a CA to generate device certificates locally and offline. - No connectivity. In this case, you can obtain certificates that were generated ahead of time. Or you can use an offline PKI to generate certificates locally. Connectivity at the factory determines how and when you'll get the certificates #### Audit requirement Depending on the type of devices you produce, you might have a regulatory requirement to create an audit trail of how device identities are installed on your devices. Auditing adds significant production cost. So in most cases, only do it if necessary. If you're unsure whether an audit is required, check with your company's legal department. Auditing options are: - Not a sensitive industry. No auditing is required.-- Sensitive industry. 
Certificates should be installed in a secure room according to compliance certification requirements. If you need a secure room to install certificates, you're likely already aware of how certificates get installed in your devices. And you probably already have an audit system in place. #### Length of certificate validity-Like a driver's license, certificates have an expiration date that is set when they are created. Here are the options for length of certificate validity: -- Renewal not required. This approach uses a long renewal period, so you'll never need to renew the certificate during the device's lifetime. While such an approach is convenient, it's also risky. You can reduce the risk by using secure storage like an HSM on your devices. However, the recommended practice is to avoid using long-lived certificates.-- Renewal required. You'll need to renew the certificate during the lifetime of the device. The length of the certificate validity depends on context, and you'll need a strategy for renewal. The strategy should include where you're getting certificates, and what type of over-the-air functionality your devices have to use in the renewal process. +Like a driver's license, certificates have an expiration date that is set when they're created. Here are the options for length of certificate validity: +- Renewal not required. This approach uses a long renewal period, so you'll never need to renew the certificate during the device's lifetime. While such an approach is convenient, it's also risky. You can reduce the risk by using secure storage like an HSM on your devices. However, the recommended practice is to avoid using long-lived certificates. +- Renewal required. You need to renew the certificate during the lifetime of the device. The length of the certificate validity depends on context, and you need a strategy for renewal. The strategy should include where you're getting certificates, and what type of over-the-air functionality your devices have to use in the renewal process. ### When to generate certificates-The internet connectivity capabilities at your factory will impact your process for generating certificates. You have several options for when to generate certificates: +The internet connectivity capabilities at your factory impact your process for generating certificates. You have several options for when to generate certificates: -- Pre-loaded certificates. Some HSM vendors offer a premium service in which the HSM vendor installs certificates for the customer. First, customers give the HSM vendor access to a signing certificate. Then the HSM vendor installs certificates signed by that signing certificate onto each HSM the customer buys. All the customer has to do is install the HSM on the device. While this service adds cost, it helps to streamline your manufacturing process. And it resolves the question of when to install certificates.+- Preloaded certificates. Some HSM vendors offer a premium service in which the HSM vendor installs certificates for the customer. First, customers give the HSM vendor access to a signing certificate. Then the HSM vendor installs certificates signed by that signing certificate onto each HSM the customer buys. All the customer has to do is install the HSM on the device. While this service adds cost, it helps to streamline your manufacturing process. And it resolves the question of when to install certificates. - Device-generated certificates. 
If your devices generate certificates internally, then you must extract the public X.509 certificate from the device to enroll it in DPS. - Connected factory. If your factory has connectivity, you can generate device certificates whenever you need them.-- Offline factory with your own PKI. If your factory does not have connectivity, and you are using your own PKI with offline support, you can generate the certificates when you need them.-- Offline factory with third-party PKI. If your factory does not have connectivity, and you are using a third-party PKI, you must generate the certificates ahead of time. And it will be necessary to generate the certificates from a location that has connectivity. +- Offline factory with your own PKI. If your factory doesn't have connectivity, and you're using your own PKI with offline support, you can generate the certificates when you need them. +- Offline factory with third-party PKI. If your factory doesn't have connectivity, and you're using a third-party PKI, you must generate the certificates ahead of time. And it is necessary to generate the certificates from a location that has connectivity. ### When to install certificates After you generate certificates for your IoT devices, you can install them in the devices. -If you use pre-loaded certificates with an HSM, the process is simplified. After the HSM is installed in the device, the device code can access it. Then you'll call the HSM APIs to access the certificate that's stored in the HSM. This approach is the most convenient for your manufacturing process. +If you use preloaded certificates with an HSM, the process is simplified. After the HSM is installed in the device, the device code can access it. Then you call the HSM APIs to access the certificate stored in the HSM. This approach is the most convenient for your manufacturing process. -If you don't use a pre-loaded certificate, you must install the certificate as part of your production process. The simplest approach is to install the certificate in the HSM at the same time that you flash the initial firmware image. Your process must add a step to install the image on each device. After this step, you can run final quality checks and any other steps, before you package and ship the device. +If you don't use a preloaded certificate, you must install the certificate as part of your production process. The simplest approach is to install the certificate in the HSM at the same time that you flash the initial firmware image. Your process must add a step to install the image on each device. After this step, you can run final quality checks and any other steps, before you package and ship the device. -There are software tools available that let you run the installation process and final quality check in a single step. You can modify these tools to generate a certificate, or to pull a certificate from a pre-generated certificate store. Then the software can install the certificate where you need to install it. Software tools of this type enable you to run production quality manufacturing at scale. +There are software tools available that let you run the installation process and final quality check in a single step. You can modify these tools to generate a certificate, or to pull a certificate from a pregenerated certificate store. Then the software can install the certificate where you need to install it. Software tools of this type enable you to run production quality manufacturing at scale. 
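To make the certificate-generation options above concrete, here's a minimal openssl sketch of the "offline factory with your own PKI" flow: a signing certificate created once, then a short-lived device certificate issued per unit. The file names, subjects, and validity periods are hypothetical; a production line would keep the signing key in an HSM:

```bash
# One-time: create a factory signing CA (keep the key in an HSM in production)
openssl req -x509 -new -nodes -newkey ec -pkeyopt ec_paramgen_curve:P-256 \
  -keyout signing-ca.key -out signing-ca.crt -days 365 -subj "/CN=Factory Signing CA"

# Per device: key pair and CSR, with CN set to the device's registration ID
openssl req -new -nodes -newkey ec -pkeyopt ec_paramgen_curve:P-256 \
  -keyout device-001.key -out device-001.csr -subj "/CN=device-001"

# Issue a device certificate signed by the factory CA
openssl x509 -req -in device-001.csr -CA signing-ca.crt -CAkey signing-ca.key \
  -CAcreateserial -out device-001.crt -days 30
```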
After you have certificates installed on your devices, the next step is to learn how to enroll the devices with [DPS](about-iot-dps.md). ## Integrating a TPM into the manufacturing process-If you use a TPM to authenticate your IoT devices, this section offers guidance. The guidance covers the widely used TPM 2.0 devices that have hash-based message authentication code (HMAC) key support. The TPM specification for TPM chips is an ISO standard that's maintained by the Trusted Computing Group. For more on TPM, see the specifications for [TPM 2.0](https://trustedcomputinggroup.org/tpm-library-specification/) and [ISO/IEC 11889](https://www.iso.org/standard/66510.html). +If you use a TPM to authenticate your IoT devices, this section offers guidance. The guidance covers the widely used TPM 2.0 devices that have hash-based message authentication code (HMAC) key support. The TPM specification for TPM chips is an ISO standard maintained by the Trusted Computing Group. For more on TPM, see the specifications for [TPM 2.0](https://trustedcomputinggroup.org/tpm-library-specification/) and [ISO/IEC 11889](https://www.iso.org/standard/66510.html). ### Taking ownership of the TPM-A critical step in manufacturing a device with a TPM chip is to take ownership of the TPM. This step is required so that you can provide a key to the device owner. The first step is to extract the endorsement key (EK) from the device. The next step is to actually claim ownership. How you accomplish this depends on which TPM and operating system you use. If needed, contact the TPM manufacturer or the developer of the device operating system to determine how to claim ownership. +A critical step in manufacturing a device with a TPM chip is to take ownership of the TPM. This step is required so that you can provide a key to the device owner. The first step is to extract the endorsement key (EK) from the device. The next step is to actually claim ownership. How you extract the endorsement key depends on which TPM and operating system you use. If needed, contact the TPM manufacturer or the developer of the device operating system to determine how to claim ownership. In your manufacturing process, you can extract the EK and claim ownership at different times, which adds flexibility. Many manufacturers take advantage of this flexibility by adding a hardware security module (HSM) to enhance the security of their devices. This section provides guidance on when to extract the EK, when to claim ownership of the TPM, and considerations for integrating these steps into a manufacturing timeline. In your manufacturing process, you can extract the EK and claim ownership at dif The following timeline shows how a TPM goes through a production process and ends up in a device. Each manufacturing process is unique, and this timeline shows the most common patterns. The timeline offers guidance on when to take certain actions with the keys. #### Step 1: TPM is manufactured-- If you buy TPMs from a manufacturer for use in your devices, see if they'll extract public endorsement keys (EK_pubs) for you. It's helpful if the manufacturer provides the list of EK_pubs with the shipped devices. +- If you buy TPMs from a manufacturer for use in your devices, see if they extract public endorsement keys (EK_pubs) for you. It's helpful if the manufacturer provides the list of EK_pubs with the shipped devices. > [!NOTE] > You could give the TPM manufacturer write access to your enrollment list by using shared access policies in your provisioning service. 
This approach lets them add the TPMs to your enrollment list for you. But that is early in the manufacturing process, and it requires trust in the TPM manufacturer. Do so at your own risk. The following timeline shows how a TPM goes through a production process and end - If you manufacture TPMs to use with your own devices, identify which point in your process is the most convenient to extract the EK_pub. You can extract the EK_pub at any of the remaining points in the timeline. #### Step 2: TPM is installed into a device-At this point in the production process, you should know which DPS instance the device will be used with. As a result, you can add devices to the enrollment list for automated provisioning. For more information about automatic device provisioning, see the [DPS documentation](about-iot-dps.md). +At this point in the production process, you should know which DPS instance the device is used with. As a result, you can add devices to the enrollment list for automated provisioning. For more information about automatic device provisioning, see the [DPS documentation](about-iot-dps.md). - If you haven't extracted the EK_pub, now is a good time to do so. - Depending on the installation process of the TPM, this step is also a good time to take ownership of the TPM. #### Step 3: Device has firmware and software installed At this point in the process, install the DPS client along with the ID scope and global URL for provisioning.-- Now is the last chance to extract the EK_pub. If a third party will install the software on your device, it's a good idea to extract the EK_pub first.+- Now is the last chance to extract the EK_pub. If a third party installs the software on your device, it's a good idea to extract the EK_pub first. - This point in the manufacturing process is ideal to take ownership of the TPM. > [!NOTE] > If you're using a software TPM, you can install it now. Extract the EK_pub at the same time. |
iot-hub | Quickstart Send Telemetry Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/quickstart-send-telemetry-cli.md | -IoT Hub is an Azure service that enables you to ingest high volumes of telemetry from your IoT devices into the cloud for storage or processing. In this codeless quickstart, you use the Azure CLI to create an IoT hub and a simulated device. You'll send device telemetry to the hub, and send messages, call methods, and update properties on the device. You'll also use the Azure portal to visualize device metrics. This article shows a basic workflow for developers who use the CLI to interact with an IoT Hub application. +IoT Hub is an Azure service designed to collect large volumes of telemetry data from IoT devices for storage or processing in the cloud. In this codeless quickstart, you use the Azure CLI to create an IoT hub and a simulated device. You send device telemetry to the hub, and also send messages, call methods, and update properties on the device. Additionally, you use the Azure portal to visualize device metrics. This article provides a basic workflow for developers using the CLI to interact with an IoT Hub application. ## Prerequisites To launch the Cloud Shell: ## Prepare two CLI sessions -Next, you prepare two Azure CLI sessions. If you're using the Cloud Shell, you'll run these sessions in separate Cloud Shell tabs. If using a local CLI client, you'll run separate CLI instances. Use the separate CLI sessions for the following tasks: +Next, you prepare two Azure CLI sessions. If you're using the Cloud Shell, you run these sessions in separate Cloud Shell tabs. If using a local CLI client, you run separate CLI instances. Use the separate CLI sessions for the following tasks: - The first session simulates an IoT device that communicates with your IoT hub. - The second session either monitors the device in the first session, or sends messages, commands, and property updates. Azure CLI requires you to be logged into your Azure account. All communication b ## Create an IoT hub -In this section, you use the Azure CLI to create a resource group and an IoT hub. An Azure resource group is a logical container into which Azure resources are deployed and managed. An IoT hub acts as a central message hub for bi-directional communication between your IoT application and the devices. +In this section, you use the Azure CLI to create a resource group and an IoT hub. An Azure resource group is a logical container into which Azure resources are deployed and managed. An IoT hub acts as a central message hub for bi-directional communication between your IoT application and the devices. 1. In the first CLI session, run the [az group create](/cli/azure/group#az-group-create) command to create a resource group. The following command creates a resource group named *MyResourceGroup* in the *eastus* location. To create and start a simulated device: 1. In the first CLI session, run the [az iot hub device-identity create](/cli/azure/iot/hub/device-identity#az-iot-hub-device-identity-create) command. This command creates the simulated device identity. - *YourIotHubName*. Replace this placeholder below with the name you chose for your IoT hub. + *YourIotHubName*. Replace this placeholder in the following code with the name you chose for your IoT hub. *simDevice*. You can use this name directly for the simulated device in the rest of this quickstart. Optionally, use a different name. 
To create and start a simulated device: az iot hub device-identity create -d simDevice -n {YourIoTHubName} ``` -1. In the first CLI session, run the [az iot device simulate](/cli/azure/iot/device#az-iot-device-simulate) command. This command starts the simulated device. The device sends telemetry to your IoT hub and receives messages from it. +1. In the first CLI session, run the [az iot device simulate](/cli/azure/iot/device#az-iot-device-simulate) command. This command starts the simulated device. The device sends telemetry to your IoT hub and receives messages from it. - *YourIotHubName*. Replace this placeholder below with the name you chose for your IoT hub. + *YourIotHubName*. Replace this placeholder in the following code with the name you chose for your IoT hub. ```azurecli az iot device simulate -d simDevice -n {YourIoTHubName} To monitor a device: 1. In the second CLI session, run the [az iot hub monitor-events](/cli/azure/iot/hub#az-iot-hub-monitor-events) command. This command continuously monitors the simulated device. The output shows telemetry such as events and property state changes that the simulated device sends to the IoT hub. - *YourIotHubName*. Replace this placeholder below with the name you chose for your IoT hub. + *YourIotHubName*. Replace this placeholder in the following code with the name you chose for your IoT hub. ```azurecli az iot hub monitor-events --output table -p all -n {YourIoTHubName} In this section, you send a message to the simulated device. 1. In the first CLI session, confirm that the simulated device is still running. If the device stopped, run the following command to restart it: - *YourIotHubName*. Replace this placeholder below with the name you chose for your IoT hub. + *YourIotHubName*. Replace this placeholder in the following code with the name you chose for your IoT hub. ```azurecli az iot device simulate -d simDevice -n {YourIoTHubName} In this section, you send a message to the simulated device. 1. In the second CLI session, run the [az iot device c2d-message send](/cli/azure/iot/device/c2d-message#az-iot-device-c2d-message-send) command. This command sends a cloud-to-device message from your IoT hub to the simulated device. The message includes a string and two key-value pairs. - *YourIotHubName*. Replace this placeholder below with the name you chose for your IoT hub. + *YourIotHubName*. Replace this placeholder in the following code with the name you chose for your IoT hub. ```azurecli az iot device c2d-message send -d simDevice --data "Hello World" --props "key0=value0;key1=value1" -n {YourIoTHubName} ``` - Optionally, you can send cloud-to-device messages by using the Azure portal. To do this, browse to the overview page for your IoT Hub, select **IoT Devices**, select the simulated device, and select **Message to Device**. + Optionally, you can send cloud-to-device messages by using the Azure portal. To send messages through the Azure portal, browse to the overview page for your IoT Hub, select **IoT Devices**, select the simulated device, and select **Message to Device**. 1. In the first CLI session, confirm that the simulated device received the message. In this section, you call a direct method on the simulated device. 1. In the second CLI session, run the [az iot hub invoke-device-method](/cli/azure/iot/hub#az-iot-hub-invoke-device-method) command. In this example, there's no preexisting method for the device. The command calls an example method name on the simulated device and returns a payload. - *YourIotHubName*. 
Replace this placeholder below with the name you chose for your IoT hub. + *YourIotHubName*. Replace this placeholder in the following code with the name you chose for your IoT hub. ```azurecli az iot hub invoke-device-method --mn MySampleMethod -d simDevice -n {YourIoTHubName} In this section, you update the state of the simulated device by setting propert 1. In the second CLI session, run the [az iot hub device-twin update](/cli/azure/iot/hub/device-twin#az-iot-hub-device-twin-update) command. This command updates the properties to the desired state on the IoT hub device twin that corresponds to your simulated device. In this case, the command sets example temperature condition properties. > [!IMPORTANT]- > If you're using PowerShell in the CLI shell, use the PowerShell version of the command below. PowerShell requires you to escape the characters in the JSON payload. + > If you're using PowerShell in the CLI shell, use the PowerShell version of the command in the following code. PowerShell requires you to escape the characters in the JSON payload. - *YourIotHubName*. Replace this placeholder below with the name you chose for your IoT hub. + *YourIotHubName*. Replace this placeholder in the following code with the name you chose for your IoT hub. ```azurecli az iot hub device-twin update -d simDevice --desired '{"conditions":{"temperature":{"warning":98, "critical":107}}}' -n {YourIoTHubName} In this section, you update the state of the simulated device by setting propert 1. In the second CLI session, run the [az iot hub device-twin show](/cli/azure/iot/hub/device-twin#az-iot-hub-device-twin-show) command. This command reports changes to the device properties. - *YourIotHubName*. Replace this placeholder below with the name you chose for your IoT hub. + *YourIotHubName*. Replace this placeholder in the following code with the name you chose for your IoT hub. ```azurecli az iot hub device-twin show -d simDevice --query properties.reported -n {YourIoTHubName} To visualize messaging metrics in the Azure portal: If you no longer need the Azure resources created in this quickstart, you can use the Azure CLI to delete them. -If you continue to the next recommended article, you can keep the resources you've already created and reuse them. +If you continue to the next recommended article, you can keep the resources you already created and reuse them. > [!IMPORTANT] > Deleting a resource group is irreversible. The resource group and all the resources contained in it are permanently deleted. Make sure that you do not accidentally delete the wrong resource group or resources. |
iot-operations | Howto Configure Adlsv2 Endpoint | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/connect-to-cloud/howto-configure-adlsv2-endpoint.md | To send data to Azure Data Lake Storage Gen2 in Azure IoT Operations Preview, yo - An instance of [Azure IoT Operations Preview](../deploy-iot-ops/howto-deploy-iot-operations.md) - A [configured dataflow profile](howto-configure-dataflow-profile.md) - An [Azure Data Lake Storage Gen2 account](../../storage/blobs/create-data-lake-storage-account.md)+- A pre-created storage container in the storage account ## Create an Azure Data Lake Storage Gen2 dataflow endpoint To configure a dataflow endpoint for Azure Data Lake Storage Gen2, we suggest us name: adls spec: endpointType: DataLakeStorage- datalakeStorageSettings: - host: <account>.blob.core.windows.net + dataLakeStorageSettings: + host: https://<account>.blob.core.windows.net authentication: method: SystemAssignedManagedIdentity systemAssignedManagedIdentitySettings: {} If you need to override the system-assigned managed identity audience, see the [ name: adls spec: endpointType: DataLakeStorage- datalakeStorageSettings: - host: <account>.blob.core.windows.net + dataLakeStorageSettings: + host: https://<account>.blob.core.windows.net authentication: method: AccessToken accessTokenSettings: spec: - operationType: Destination destinationSettings: endpointRef: adls+ # dataDestination should be the storage container name dataDestination: telemetryTable ``` Before creating the dataflow endpoint, assign a role to the managed identity tha In the *DataflowEndpoint* resource, specify the managed identity authentication method. In most cases, you don't need to specify other settings. Not specifying an audience creates a managed identity with the default audience scoped to your storage account. ```yaml-datalakeStorageSettings: +dataLakeStorageSettings: authentication: method: SystemAssignedManagedIdentity systemAssignedManagedIdentitySettings: {} datalakeStorageSettings: If you need to override the system-assigned managed identity audience, you can specify the `audience` setting. ```yaml-datalakeStorageSettings: +dataLakeStorageSettings: authentication: method: SystemAssignedManagedIdentity systemAssignedManagedIdentitySettings: kubectl create secret generic my-sas \ Create the *DataflowEndpoint* resource with the secret reference. ```yaml-datalakeStorageSettings: +dataLakeStorageSettings: authentication: method: AccessToken accessTokenSettings: datalakeStorageSettings: To use a user-assigned managed identity, specify the `UserAssignedManagedIdentity` authentication method and provide the `clientId` and `tenantId` of the managed identity. ```yaml-datalakeStorageSettings: +dataLakeStorageSettings: authentication: method: UserAssignedManagedIdentity userAssignedManagedIdentitySettings: For example, to configure the maximum number of messages to 1000 and the maximum Set the values in the dataflow endpoint custom resource. ```yaml-datalakeStorageSettings: +dataLakeStorageSettings: batching: latencySeconds: 100 maxMessages: 1000 |
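Pulling the fragments above together, here's a sketch of a complete *DataflowEndpoint* resource with system-assigned managed identity and batching. The `spec` fields come from the fragments shown, while the `apiVersion`/`kind` pair and namespace are assumptions based on the Azure IoT Operations preview CRDs:

```yaml
apiVersion: connectivity.iotoperations.azure.com/v1beta1  # assumed CRD group/version
kind: DataflowEndpoint
metadata:
  name: adls
  namespace: azure-iot-operations  # assumed AIO namespace
spec:
  endpointType: DataLakeStorage
  dataLakeStorageSettings:
    host: https://<account>.blob.core.windows.net
    authentication:
      method: SystemAssignedManagedIdentity
      systemAssignedManagedIdentitySettings: {}
    batching:
      latencySeconds: 100
      maxMessages: 1000
```

Remember that the storage container named in a dataflow's `dataDestination` must already exist; the endpoint doesn't create it.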
iot-operations | Howto Enable Secure Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/deploy-iot-ops/howto-enable-secure-settings.md | This article provides instructions for enabling secure settings if you didn't do * Azure CLI installed on your development machine. This scenario requires Azure CLI version 2.64.0 or higher. Use `az --version` to check your version and `az upgrade` to update if necessary. For more information, see [How to install the Azure CLI](/cli/azure/install-azure-cli). -* The Azure IoT Operations extension for Azure CLI. Use the following command to add the extension or update it to the latest version: +* The latest versions of the following extensions for Azure CLI: ```azurecli az extension add --upgrade --name azure-iot-ops+ az extension add --upgrade --name connectedk8s ``` ## Configure cluster for workload identity az connectedk8s show --name <CLUSTER_NAME> --resource-group <RESOURCE_GROUP> --q Use the following steps to enable workload identity on an existing connected K3s cluster: -1. Remove the existing connected k8s cli if any - ```azurecli - az extension remove --name connectedk8s - ``` --1. Download and install a preview version of the `connectedk8s` extension for Azure CLI. -- ```azurecli - curl -L -o connectedk8s-1.10.0-py2.py3-none-any.whl https://github.com/AzureArcForKubernetes/azure-cli-extensions/raw/refs/heads/connectedk8s/public/cli-extensions/connectedk8s-1.10.0-py2.py3-none-any.whl - az extension add --upgrade --source connectedk8s-1.10.0-py2.py3-none-any.whl - ``` - 1. Use the [az connectedk8s update](/cli/azure/connectedk8s#az-connectedk8s-update) command to enable the workload identity feature on the cluster. ```azurecli |
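The update command at the end of this snippet is truncated. Based on the documented `connectedk8s` flags for this feature, it likely takes the following shape; the cluster and resource group names are placeholders:

```azurecli
az connectedk8s update --name <CLUSTER_NAME> --resource-group <RESOURCE_GROUP> \
    --enable-oidc-issuer --enable-workload-identity
```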
iot-operations | Howto Prepare Cluster | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/deploy-iot-ops/howto-prepare-cluster.md | To prepare your Azure Arc-enabled Kubernetes cluster, you need: * Azure CLI version 2.64.0 or newer installed on your development machine. Use `az --version` to check your version and `az upgrade` to update if necessary. For more information, see [How to install the Azure CLI](/cli/azure/install-azure-cli). -* The latest version of the Azure IoT Operations extension for Azure CLI. Use the following command to add the extension or update it to the latest version: +* The latest version of the following extensions for Azure CLI: ```bash az extension add --upgrade --name azure-iot-ops+ az extension add --upgrade --name connectedk8s ``` * Hardware that meets the system requirements: To connect your cluster to Azure Arc: ```azurecli az group create --location $LOCATION --resource-group $RESOURCE_GROUP --subscription $SUBSCRIPTION_ID ```-1. Remove the existing connected k8s cli if any - ```azurecli - az extension remove --name connectedk8s - ``` --1. Download and install a preview version of the `connectedk8s` extension for Azure CLI. -- ```azurecli - curl -L -o connectedk8s-1.10.0-py2.py3-none-any.whl https://github.com/AzureArcForKubernetes/azure-cli-extensions/raw/refs/heads/connectedk8s/public/cli-extensions/connectedk8s-1.10.0-py2.py3-none-any.whl - az extension add --upgrade --source connectedk8s-1.10.0-py2.py3-none-any.whl - ``` 1. Use the [az connectedk8s connect](/cli/azure/connectedk8s#az-connectedk8s-connect) command to Arc-enable your Kubernetes cluster and manage it as part of your Azure resource group: |
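Similarly, the Arc-connect step described above typically combines the connect command with the workload identity flags. A sketch using the environment variables defined earlier in the article; the exact flag set is an assumption based on the `connectedk8s` extension:

```azurecli
az connectedk8s connect --name <CLUSTER_NAME> --location $LOCATION \
    --resource-group $RESOURCE_GROUP --subscription $SUBSCRIPTION_ID \
    --enable-oidc-issuer --enable-workload-identity
```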
iot-operations | Howto Configure Opcua Certificates Infrastructure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/discover-manage-assets/howto-configure-opcua-certificates-infrastructure.md | A deployed instance of Azure IoT Operations Preview. To deploy Azure IoT Operati ## Configure a self-signed application instance certificate -The default deployment of the connector for OPC UA installs all the resources needed by [cert-manager](https://cert-manager.io/) to create an OPC UA compliant self-signed certificate. This certificate is stored in the `aio-opc-opcuabroker-default-application-cert` secret. This secret is mapped into all the connector for OPC UA pods and acts as the OPC UA client application instance certificate. `cert-manager` handles the automatic renewal of this application instance certificate. +The default deployment of the connector for OPC UA installs all the resources needed by [cert-manager](https://cert-manager.io/) to create an OPC UA compliant certificate. A self-signed CA is used to sign this certificate. The application instance certificate is stored in the `aio-opc-opcuabroker-default-application-cert` secret while the CA certificate is stored in the `aio-opc-opcuabroker-default-root-ca-cert` secret. The `aio-opc-opcuabroker-default-application-cert` secret is mapped into all the connector for OPC UA pods and acts as the OPC UA client application instance certificate. `cert-manager` handles the automatic renewal of both the application instance certificate and the self-signed CA. This configuration is typically sufficient for compliant and secure communication between your OPC UA servers and the connector for OPC UA in a demonstration or exploration environment. For a production environment, use enterprise grade application instance certificates in your deployment. If your OPC UA server uses a certificate issued by a CA, but you don't want to t ## Configure your OPC UA server -To complete the configuration of the application authentication mutual trust, you need to configure your OPC UA server to trust the connector for OPC UA application instance certificate: +To complete the configuration of the application authentication mutual trust, you need to configure your OPC UA server to trust the connector for OPC UA application instance certificate together with its issuer trust chain: -1. To extract the connector for OPC UA certificate into a `opcuabroker.crt` file, run the following command: +1. To extract the public key certificate for the OPC UA connector into an `opcuabroker.crt` file, run the following command: # [Bash](#tab/bash) To complete the configuration of the application authentication mutual trust, yo -1. Many OPC UA servers only support certificates in the DER format. If necessary, use the following command to convert the _opcuabroker.crt_ certificate to _opcuabroker.der_: +1. To extract the CA public key certificate for the OPC UA connector into an `opcuabroker-ca.crt` file, run the following command: ++ # [Bash](#tab/bash) ++ ```bash + kubectl -n azure-iot-operations get secret aio-opc-opcuabroker-default-application-cert -o jsonpath='{.data.ca\.crt}' | base64 -d > opcuabroker-ca.crt + ``` ++ # [PowerShell](#tab/powershell) ++ ```powershell + kubectl -n azure-iot-operations get secret aio-opc-opcuabroker-default-application-cert -o jsonpath='{.data.ca\.crt}' | %{ [Text.Encoding]::UTF8.GetString([Convert]::FromBase64String($_)) } > opcuabroker-ca.crt + ``` ++ ++1. 
Many OPC UA servers only support certificates in the DER format. If necessary, use the following command to convert the _opcuabroker.crt_ and _opcuabroker-ca.crt_ certificates to _opcuabroker.der_ and _opcuabroker-ca.der_: ```bash openssl x509 -outform der -in opcuabroker.crt -out opcuabroker.der+ openssl x509 -outform der -in opcuabroker-ca.crt -out opcuabroker-ca.der ``` -1. Consult the documentation of your OPC UA server to learn how to add the `opcuabroker.crt` or `opcuabroker.der` certificate file to the server's trusted certificates list. +1. Consult the documentation of your OPC UA server to learn how to add the `opcuabroker.crt` or `opcuabroker.der` certificate file to the server's trusted certificates list, and the `opcuabroker-ca.crt` or `opcuabroker-ca.der` CA certificate file to the server's trusted issuers list. ## Configure an enterprise grade application instance certificate |
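Before importing the converted files, it can save a round trip to confirm that the DER conversion succeeded and that you're handing the server the certificates you expect. A quick check with standard `openssl` flags:

```bash
# Parse each DER file and print its subject and validity window; a parse
# error here means the conversion step above didn't produce valid DER.
openssl x509 -inform der -in opcuabroker.der -noout -subject -dates
openssl x509 -inform der -in opcuabroker-ca.der -noout -subject -dates
```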
iot-operations | Quickstart Deploy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/get-started-end-to-end-sample/quickstart-deploy.md | The rest of the quickstarts in this end-to-end series build on this one to defin If you want to deploy Azure IoT Operations to a local cluster such as Azure Kubernetes Service Edge Essentials or K3s on Ubuntu, see [Deployment details](../deploy-iot-ops/overview-deploy.md). +> [!IMPORTANT] +> If you're upgrading your public preview deployment from version 0.6.0 to version 0.7.0, you must uninstall the previous version before deploying the new version. For more information, see [Update Azure IoT Operations](../deploy-iot-ops/howto-manage-update-uninstall.md#update). + ## Before you begin This series of quickstarts is intended to help you get started with Azure IoT Operations as quickly as possible so that you can evaluate an end-to-end scenario. In a true development or production environment, multiple teams working together perform these tasks and some tasks might require elevated permissions. |
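As a rough sketch of the uninstall-then-redeploy flow the note above describes, assuming the `azure-iot-ops` extension's `delete` command and these parameter names (check `az iot ops delete --help` for the exact syntax in your version):

```bash
# Remove the 0.6.0 deployment before installing 0.7.0; the instance and
# resource group names are placeholders.
az iot ops delete --name <INSTANCE_NAME> --resource-group <RESOURCE_GROUP>
```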
iot-operations | Overview Iot Operations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/overview-iot-operations.md | Title: What is Azure IoT Operations? -description: Azure IoT Operations is a unified data plane for the edge. It's composed of various data services that run on Azure Arc-enabled edge Kubernetes clusters. +description: Azure IoT Operations is a unified data plane for the edge. It's a collection of various data services that run on Azure Arc-enabled edge Kubernetes clusters. Last updated 07/31/2024 [!INCLUDE [public-preview-note](includes/public-preview-note.md)] -_Azure IoT Operations Preview_ is a unified data plane for the edge. It's composed of a set of modular, scalable, and highly available data services that run on Azure Arc-enabled edge Kubernetes clusters such as [AKS Edge Essentials](#validated-environments). It enables data capture from various different systems and integrates with data modeling applications such as Microsoft Fabric to help organizations deploy the industrial metaverse. +_Azure IoT Operations Preview_ is a unified data plane for the edge. It's a collection of modular, scalable, and highly available data services that run on Azure Arc-enabled edge Kubernetes clusters such as [AKS Edge Essentials](#validated-environments). It enables data capture from various systems and integrates with data modeling applications such as Microsoft Fabric to help organizations deploy the industrial metaverse. Azure IoT Operations: * Is built from the ground up by using Kubernetes-native applications. * Includes an industrial-grade, edge-native MQTT broker that powers event-driven architectures. * Is highly extensible, scalable, resilient, and secure.-* Lets you manage all edge services from the cloud by using Azure Arc. +* Lets you manage edge services and resources from the cloud by using Azure Arc. * Can integrate customer workloads into the platform to create a unified solution. * Supports GitOps configuration as code for deployment and updates. * Natively integrates with [Azure Event Hubs](../event-hubs/azure-event-hubs-kafka-overview.md), [Azure Event Grid's MQTT broker](../event-grid/mqtt-overview.md), and [Microsoft Fabric](/fabric/) in the cloud. There are two core elements in the Azure IoT Operations Preview architecture: * **Azure IoT Operations Preview**. The set of data services that run on Azure Arc-enabled edge Kubernetes clusters. It includes the following: * The _MQTT broker_ is an edge-native MQTT broker that powers event-driven architectures. * The _connector for OPC UA_ handles the complexities of OPC UA communication with OPC UA servers and other leaf devices.-* The _operations experience_ is a web UI that provides a unified experience for operational technologists to manage assets and dataflows in an Azure IoT Operations deployment. An IT administrator can use [Azure Arc site manager (preview)](/azure/azure-arc/site-manager/overview) to group Azure IoT Operations instances by physical location and make it easier for OT users to find instances. + * _Dataflows_ provide data transformation and data contextualization capabilities and enable you to route messages to various locations, including cloud endpoints. +* The _operations experience_ is a web UI that provides a unified experience for operational technologists (OT) to manage assets and dataflows in an Azure IoT Operations deployment. 
An IT administrator can use [Azure Arc site manager (preview)](/azure/azure-arc/site-manager/overview) to group Azure IoT Operations instances by physical location and make it easier for OT users to find instances. ## Deploy Azure IoT Operations can connect to various industrial devices and assets. You c The [connector for OPC UA](discover-manage-assets/overview-opcua-broker.md) manages the connection to OPC UA servers and other leaf devices. The connector for OPC UA publishes data from the OPC UA servers to MQTT broker topics. +Azure IoT Operations uses the Azure Device Registry to store information about local assets in the cloud. The service enables you to [manage assets on the edge from the Azure portal or the Azure CLI](discover-manage-assets/howto-secure-assets.md). The Azure Device Registry also includes a schema registry for the assets. Dataflows use these schemas to deserialize and serialize messages. + ## Automatic asset discovery Automatic asset discovery using Akri services is not available in the current version of Azure IoT Operations. To learn more, see the [Release notes](https://github.com/Azure/azure-iot-operations/releases) for the current version. The [MQTT broker](manage-mqtt-broker/overview-iot-mq.md) runs on the edge. It le Examples of how components in Azure IoT Operations use the MQTT broker include: * The connector for OPC UA publishes data from OPC UA servers and other leaf devices to MQTT topics.-* Dataflows subscribe to MQTT topics to retrieve messages for processing. -* Northbound cloud connectors subscribe to MQTT topics to fetch messages for forwarding to cloud services. +* Dataflows subscribe to MQTT topics to retrieve messages for processing before sending them to cloud endpoints. ## Connect to the cloud -To connect to the cloud from Azure IoT Operations, you have the following options: --The northbound cloud connectors let you connect the MQTT broker directly to cloud services such as: +To connect to the cloud from Azure IoT Operations, you can use the following dataflow destination endpoints: -* [MQTT brokers](connect-to-cloud/howto-configure-mqtt-bridge.md) -* [Azure Event Hubs or Kafka](connect-to-cloud/howto-configure-kafka.md) -* [Azure Data Lake Storage](connect-to-cloud/howto-configure-data-lake.md) +* [Azure Event Grid and other cloud-based MQTT brokers](connect-to-cloud/howto-configure-mqtt-endpoint.md) +* [Azure Event Hubs or Kafka](connect-to-cloud/howto-configure-kafka-endpoint.md) +* [Azure Data Lake Storage](connect-to-cloud/howto-configure-adlsv2-endpoint.md) +* [Microsoft Fabric OneLake](connect-to-cloud/howto-configure-fabric-endpoint.md) +* [Azure Data Explorer](connect-to-cloud/howto-configure-adx-endpoint.md) ## Process data -In Azure IoT operations v0.6.0, the data processor was replaced by [dataflows](./connect-to-cloud/overview-dataflow.md). Dataflows provide enhanced data transformation and data contextualization capabilities within Azure IoT Operations. Dataflows can use schemas stored in the schema registry to deserialize and serialize messages. +[Dataflows](connect-to-cloud/overview-dataflow.md) provide enhanced data transformation and data contextualization capabilities within Azure IoT Operations. Dataflows can use schemas stored in the schema registry to deserialize and serialize messages. > [!NOTE] > If you want to continue using the data processor, you must deploy Azure IoT Operations v0.5.1 with the additional flag to include data processor component. 
It's not possible to deploy the data processor with Azure IoT Operations v0.6.0 or newer. The Azure IoT Operations CLI extension that includes the flag for deploying the data processor is version 0.5.1b1. This version requires Azure CLI v2.46.0 or greater. The data processor documentation is currently available on the previous versions site: [Azure IoT Operations data processor](/previous-versions/azure/iot-operations/process-data/overview-data-processor). |
operator-nexus | Concepts Cross Subscription Deployments Required Rbac For Network Fabric | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/concepts-cross-subscription-deployments-required-rbac-for-network-fabric.md | + + Title: Azure Operator Nexus - Cross-subscription deployments and required permissions for Network Fabric +description: Cross-subscription deployments and required permissions for Network Fabric ++++ Last updated : 09/17/2024++++# Cross-subscription deployments and required permissions for Network Fabric ++This article outlines the requirements and behaviors associated with managing Nexus Network Fabric (NNF) resources in Azure across multiple subscriptions, along with the implementation of the linked access check. This check ensures that the necessary permissions and access controls are in place when managing resources across different subscriptions. ++## Subscription context and user permissions ++Consider two Azure subscriptions, **Subscription A** and **Subscription B**, where users interact with NNF resources. The permissions assigned to users in each subscription determine their ability to manage these resources effectively. ++**Subscription A:** This subscription hosts the primary NNF resources. Depending on the user's permissions, access levels can vary from read-only to full control. ++**Subscription B:** This subscription is used for creating and managing NNF resources that may reference resources from **Subscription A**. ++## Scenarios ++### Limited access in subscription ++In this scenario, the user has access to two subscriptions: **Subscription A** and **Subscription B**. In **Subscription A**, the user has only `Read` access to the Network Fabric (NNF) resources. ++**Outcome:** When the user tries to create or manage any NNF resource in **Subscription B** by referencing the NNF resource from **Subscription A**, the operation fails with a `LinkedAuthorizationFailed` error. This failure occurs because the user does not have the necessary `Join` access to the NNF resource. ++### Sufficient access in both subscriptions ++In this scenario, the user has access to both **Subscription A** and **Subscription B**, with either `Contributor` or `Owner` permissions in both subscriptions. ++**Outcome:** When the user tries to create or manage Network Fabric (NNF) resources in **Subscription B** by referencing NNF resources in **Subscription A**, the operation succeeds. This confirms that sufficient permissions enable successful resource management across subscriptions. ++### No access to subscription ++In this scenario, the user has no access to **Subscription A**, where the Network Fabric (NNF) resources are deployed, but has `Contributor` or `Owner` rights in **Subscription B**. ++**Outcome:** When the user tries to create or manage NNF resources in **Subscription B** by referencing NNF resources in **Subscription A**, the operation fails with an `AuthorizationFailed` error. This occurs because the user lacks either the required `Read` access to **Subscription A** along with `Join` access to the referenced resource, or `Write` access to **Subscription A** along with `Join` access to the referenced resource. ++>[!NOTE] +>Network Fabric cannot be created in a different subscription than the referenced Network Fabric Controller (NFC). ++## Key considerations ++To effectively manage NNF resources across Azure subscriptions, users must have the appropriate permissions. 
The following permissions are essential: ++### Permission management ++#### Subscription-level permissions ++- **Read access:** Users must have read access to view NNF resources within the subscription. ++- **Contributor access:** Users can create and manage resources, including configuring settings and deleting resources. ++- **Owner access:** Users have full control over the subscription, including the ability to manage permissions for other users. ++#### Resource-level permissions ++- **Join access:** Users must have Join access to the specific NNF resources they wish to reference. For example, when a user tries to create an L2 or L3 isolation domain in **Subscription B** while referencing an NNF resource in **Subscription A**, the user must have Join access on the NNF resource. ++### Resource management ++#### Resource creation ++- Ensure that users have the necessary subscription-level permissions before attempting to create NNF resources. ++- When referencing resources from another subscription, confirm that the user has both read access to that subscription and Join access to the specific NNF resource. ++#### Resource configuration ++- Users with `Contributor` or `Owner` access can configure NNF resources. However, they must have the appropriate permissions for each specific configuration action. ++#### Resource deletion ++- Deleting NNF resources typically requires `Contributor`, `Owner`, or `Delete` access on the resource. Users should be aware of any dependencies that may prevent deletion. ++### Cross-subscription management ++- When managing NNF resources across multiple subscriptions, it's crucial to maintain a clear understanding of the permissions structure to avoid `AuthorizationFailed` and `LinkedAuthorizationFailed` errors. |
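As an illustration of the permission pairing described above, that is, access at the subscription scope plus Join access on the referenced resource, here is a hedged sketch using `az role assignment create`. The role name "NNF Join Role" is a placeholder for whatever role definition in your tenant carries the join permission, and the resource ID shape is an assumption based on the `Microsoft.ManagedNetworkFabric` provider:

```bash
# Grant subscription-level access in Subscription A ...
az role assignment create \
  --assignee user@contoso.com \
  --role "Contributor" \
  --scope "/subscriptions/<SUBSCRIPTION_A_ID>"

# ... and Join access on the specific Network Fabric resource being referenced.
az role assignment create \
  --assignee user@contoso.com \
  --role "NNF Join Role" \
  --scope "/subscriptions/<SUBSCRIPTION_A_ID>/resourceGroups/<RESOURCE_GROUP>/providers/Microsoft.ManagedNetworkFabric/networkFabrics/<FABRIC_NAME>"
```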
oracle | Oracle Database Regions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/oracle/oracle-db/oracle-database-regions.md | Learn what Azure regions offer Oracle Database@Azure. | Azure region | OCI region | Oracle Exadata Database@Azure | Oracle Autonomous Database@Azure | |-|--|-|-|-| Australia East | Australia East (Sydney) | ✓ | | +| Australia East | Australia East (Sydney) | ✓ | ✓ | ## Europe, Middle East, Africa (EMEA) Learn what Azure regions offer Oracle Database@Azure. | France Central | France Central (Paris) | ✓ | ✓ | | Germany West Central | Germany Central (Frankfurt) | ✓ | ✓ | | UK South | UK South (London) | ✓ | ✓ |+| Italy North | Italy North (Milan) | ✓ | | ## North America (NA) |
storage | Archive Cost Estimation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/archive-cost-estimation.md | |
storage | Data Protection Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-protection-overview.md | |
storage | Immutable Storage Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/immutable-storage-overview.md | |
storage | Secure File Transfer Protocol Support How To | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-support-how-to.md | description: Learn how to enable SSH File Transfer Protocol (SFTP) support for A -+ Last updated 04/30/2024 |
storage | Container Storage Faq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/container-storage/container-storage-faq.md | |
storage | Install Container Storage Aks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/container-storage/install-container-storage-aks.md | By default, the system node pool is named `nodepool1`. If you want to enable Azu \*If there are any existing node pools with the `acstor.azure.com/io-engine:acstor` label, Azure Container Storage will install the data plane components by default. Otherwise, users have the option to pass the preferred node pool to `acstor` through Azure CLI. If the cluster only has the system node pool, it will be labeled and used for Azure Container Storage by default. It's important to note that only data plane components will be restricted to the labeled node pool. The control plane components of Azure Container Storage aren't limited to the labeled nodes and may be installed on the system node pool as well. ```azurecli-interactive-az aks create -n <cluster-name> -g <resource-group> --node-vm-size Standard_D4s_v3 --node-count 3 --enable-azure-container-storage <storage-pool-type> +az aks create -n <cluster-name> -g <resource-group> --node-vm-size Standard_D4s_v3 --node-count 3 --enable-azure-container-storage <storage-pool-type> --generate-ssh-keys ``` The deployment will take 10-15 minutes. When it completes, you'll have an AKS cluster with Azure Container Storage installed, the components for your chosen storage pool type enabled, and a default storage pool. If you want to enable additional storage pool types to create additional storage pools, see [Enable additional storage pool types](container-storage-aks-quickstart.md#enable-additional-storage-pool-types). |
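For a cluster that already exists, the `acstor.azure.com/io-engine:acstor` label mentioned above can be applied to a chosen node pool so the data plane components land there. A minimal sketch with `az aks nodepool update`; the cluster, resource group, and pool names are placeholders:

```bash
# Label an existing node pool for the Azure Container Storage data plane.
az aks nodepool update \
  --cluster-name <cluster-name> \
  --resource-group <resource-group> \
  --name <nodepool-name> \
  --labels acstor.azure.com/io-engine=acstor
```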
storage | Files Data Protection Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-data-protection-overview.md | Title: Data protection overview for Azure Files description: Learn how to protect your data in Azure Files. Understand the concepts and processes involved with backup and recovery of Azure file shares. -+ Last updated 08/04/2024 |
storage | Files Nfs Protocol | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-nfs-protocol.md | The status of items that appear in this table might change over time as support ## Regional availability +NFS Azure file shares are supported in all the same regions that support premium file storage. See [Azure products available by region](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=storage&regions=all). ## Performance -NFS Azure file shares are only offered on premium file shares, which store data on solid-state drives (SSD). The IOPS and throughput of NFS shares scale with the provisioned capacity. See the [provisioned model](understanding-billing.md#provisioned-v1-model) section of the **Understanding billing** article to understand the formulas for IOPS, IO bursting, and throughput. The average IO latencies are low-single-digit-millisecond for small IO size, while average metadata latencies are high-single-digit-millisecond. Metadata heavy operations such as untar and workloads like WordPress might face additional latencies due to the high number of open and close operations. +NFS Azure file shares are only offered on premium file shares, which store data on solid-state drives (SSD). The IOPS and throughput of NFS shares scale with the provisioned capacity. See the [provisioned v1 model](understanding-billing.md#provisioned-v1-model) section of the **Understanding billing** article to understand the formulas for IOPS, IO bursting, and throughput. The average IO latencies are low-single-digit-millisecond for small IO size, while average metadata latencies are high-single-digit-millisecond. Metadata heavy operations such as untar and workloads like WordPress might face additional latencies due to the high number of open and close operations. > [!NOTE] > You can use the `nconnect` Linux mount option to improve performance for NFS Azure file shares at scale. For more information, see [Improve NFS Azure file share performance](nfs-performance.md). |
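To make the `nconnect` recommendation concrete, here's a minimal mount sketch for an NFS Azure file share on Linux; the account, share, and mount point are placeholders, and the NFSv4.1 options follow the standard Azure Files mount syntax:

```bash
# Mount an NFS Azure file share with four TCP connections (nconnect=4).
sudo mkdir -p /mount/<storage-account>/<share>
sudo mount -t nfs \
  -o vers=4,minorversion=1,sec=sys,nconnect=4 \
  <storage-account>.file.core.windows.net:/<storage-account>/<share> \
  /mount/<storage-account>/<share>
```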
storage | Storage Files Active Directory Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-active-directory-overview.md | Title: Overview - Azure Files identity-based authentication description: Azure Files supports identity-based authentication over SMB (Server Message Block) with Active Directory Domain Services (AD DS), Microsoft Entra Domain Services, and Microsoft Entra Kerberos for hybrid identities. -+ Last updated 10/03/2024 |
storage | Storage Files Faq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-faq.md | |
storage | Storage Snapshots Files | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-snapshots-files.md | Title: Overview of Azure Files share snapshots description: A share snapshot is a read-only version of an Azure file share that's taken as a point in time copy, as a way to back up the share. -+ Last updated 06/24/2024 |
storage | Storage Blobs Container Calculate Billing Size Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/scripts/storage-blobs-container-calculate-billing-size-powershell.md | - Title: Azure PowerShell script sample - Calculate the total billing size of a blob container -description: Calculate the total size of a container in Azure Blob storage for billing purposes. ------ Previously updated : 01/19/2023----# Calculate the total billing size of a blob container --This script calculates the size of a container in Azure Blob storage for the purpose of estimating billing costs. The script totals the size of the blobs in the container. --> [!IMPORTANT] -> The sample script provided in this article may not accurately calculate the billing size for blob snapshots. ----> [!NOTE] -> This PowerShell script calculates the size of a container for billing purposes. If you are calculating container size for other purposes, see [Calculate the total size of a Blob storage container](../scripts/storage-blobs-container-calculate-size-powershell.md) for a simpler script that provides an estimate. --## Determine the size of the blob container --The total size of the blob container includes the size of the container itself and the size of all blobs under the container. --The following sections describes how the storage capacity is calculated for blob containers and blobs. In the following section, Len(X) means the number of characters in the string. --### Blob containers --The following calculation describes how to estimate the amount of storage that's consumed per blob container: --``` -48 bytes + Len(ContainerName) * 2 bytes + -For-Each Metadata[3 bytes + Len(MetadataName) + Len(Value)] + -For-Each Signed Identifier[512 bytes] -``` --Following is the breakdown: --* 48 bytes of overhead for each container includes the Last Modified Time, Permissions, Public Settings, and some system metadata. --* The container name is stored as Unicode, so take the number of characters and multiply by two. --* For each block of blob container metadata that's stored, we store the length of the name (ASCII), plus the length of the string value. --* The 512 bytes per Signed Identifier includes signed identifier name, start time, expiry time, and permissions. --### Blobs --The following calculations show how to estimate the amount of storage consumed per blob. --* Block blob (base blob or snapshot): -- ``` - 124 bytes + Len(BlobName) * 2 bytes + - For-Each Metadata[3 bytes + Len(MetadataName) + Len(Value)] + - 8 bytes + number of committed and uncommitted blocks * Block ID Size in bytes + - SizeInBytes(data in unique committed data blocks stored) + - SizeInBytes(data in uncommitted data blocks) - ``` --* Page blob (base blob or snapshot): -- ``` - 124 bytes + Len(BlobName) * 2 bytes + - For-Each Metadata[3 bytes + Len(MetadataName) + Len(Value)] + - number of nonconsecutive page ranges with data * 12 bytes + - SizeInBytes(data in unique pages stored) - ``` --Following is the breakdown: --* 124 bytes of overhead for blob, which includes: - - Last Modified Time - - Size - - Cache-Control - - Content-Type - - Content-Language - - Content-Encoding - - Content-MD5 - - Permissions - - Snapshot information - - Lease - - Some system metadata --* The blob name is stored as Unicode, so take the number of characters and multiply by two. --* For each block of metadata that's stored, add the length of the name (stored as ASCII), plus the length of the string value. 
--* For the block blobs: - * 8 bytes for the block list. - * Number of blocks times the block ID size in bytes. - * The size of the data in all of the committed and uncommitted blocks. -- >[!NOTE] - >When snapshots are used, this size includes only the unique data for this base or snapshot blob. If the uncommitted blocks are not used after a week, they are garbage-collected. After that, they don't count toward billing. --* For page blobs: - * The number of nonconsecutive page ranges with data times 12 bytes. This is the number of unique page ranges you see when calling the **GetPageRanges** API. -- * The size of the data in bytes of all of the stored pages. -- >[!NOTE] - >When snapshots are used, this size includes only the unique pages for the base blob or the snapshot blob that's being counted. --## Sample script --[!code-powershell[main](../../../powershell_scripts/storage/calculate-container-size/calculate-container-size-ex.ps1 "Calculate container size")] --## Next steps --- See [Calculate the total size of a Blob storage container](../scripts/storage-blobs-container-calculate-size-powershell.md) for a simple script that provides an estimate of container size.--- For more information about Azure Storage billing, see [Understanding Windows Azure Storage Billing](https://blogs.msdn.microsoft.com/windowsazurestorage/2010/07/08/understanding-windows-azure-storage-billing-bandwidth-transactions-and-capacity/).--- For more information about the Azure PowerShell module, see [Azure PowerShell documentation](/powershell/azure/).--- You can find additional Storage PowerShell script samples in [PowerShell samples for Azure Storage](../blobs/storage-samples-blobs-powershell.md). |
storage | Storage Blobs Container Calculate Size Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/scripts/storage-blobs-container-calculate-size-powershell.md | - Title: Calculate the size of blob containers with PowerShell- -description: Calculate the size of all Azure Blob Storage containers in a storage account. ----- Previously updated : 12/04/2023-----# Calculate the size of blob containers with PowerShell --This script calculates the size of all Azure Blob Storage containers in a storage account. ----> [!IMPORTANT] -> This PowerShell script provides an estimated size for the containers in an account and should not be used for billing calculations. For a script that calculates container size for billing purposes, see [Calculate the size of a Blob storage container for billing purposes](../scripts/storage-blobs-container-calculate-billing-size-powershell.md). --## Sample script ---## Clean up deployment --Run the following command to remove the resource group, container, and all related resources. --```powershell -Remove-AzResourceGroup -Name bloblisttestrg -``` --## Script explanation --This script uses the following commands to calculate the size of the Blob storage container. Each item in the table links to command-specific documentation. --| Command | Notes | -||| -| [Get-AzStorageAccount](/powershell/module/az.storage/get-azstorageaccount) | Gets a specified Storage account or all of the Storage accounts in a resource group or the subscription. | -| [Get-AzStorageContainer](/powershell/module/az.storage/get-azstoragecontainer) | Lists the storage containers. | -| [Get-AzStorageBlob](/powershell/module/az.storage/Get-AzStorageBlob) | Lists blobs in a container. | --## Next steps --For a script that calculates container size for billing purposes, see [Calculate the size of a Blob storage container for billing purposes](../scripts/storage-blobs-container-calculate-billing-size-powershell.md). --For more information on the Azure PowerShell module, see [Azure PowerShell documentation](/powershell/azure/). --Find more PowerShell script samples in [PowerShell samples for Azure Storage](../blobs/storage-samples-blobs-powershell.md). |
storage | Storage Blobs Container Delete By Prefix Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/scripts/storage-blobs-container-delete-by-prefix-powershell.md | - Title: Azure PowerShell Script Sample - Delete containers by prefix -description: Read an example that shows how to delete Azure Blob storage based on a prefix in the container name, using Azure PowerShell. ----- Previously updated : 06/13/2017-----# Delete containers based on container name prefix --This script deletes containers in Azure Blob storage based on a prefix in the container name. ----## Sample script --[!code-powershell[main](../../../powershell_scripts/storage/delete-containers-by-prefix/delete-containers-by-prefix.ps1 "Delete containers by prefix")] --## Clean up deployment --Run the following command to remove the resource group, remaining containers, and all related resources. --```powershell -Remove-AzResourceGroup -Name containerdeletetestrg -``` --## Script explanation --This script uses the following commands to delete containers based on container name prefix. Each item in the table links to command-specific documentation. --| Command | Notes | -||| -| [Get-AzStorageAccount](/powershell/module/az.storage/get-azstorageaccount) | Gets a specified Storage account or all of the Storage accounts in a resource group or the subscription. | -| [Get-AzStorageContainer](/powershell/module/az.storage/Get-AzStorageContainer) | Lists the storage containers associated with a storage account. | -| [Remove-AzStorageContainer](/powershell/module/az.storage/Remove-AzStorageContainer) | Removes the specified storage container. | --## Next steps --For more information on the Azure PowerShell module, see [Azure PowerShell documentation](/powershell/azure/). --Additional storage PowerShell script samples can be found in [PowerShell samples for Azure Blob storage](../blobs/storage-samples-blobs-powershell.md). |
storage | Storage Common Rotate Account Keys Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/scripts/storage-common-rotate-account-keys-powershell.md | - Title: Rotate storage account access keys with PowerShell- -description: Create an Azure Storage account, then retrieve and rotate one of its account access keys. ----- Previously updated : 12/04/2019-----# Rotate storage account access keys with PowerShell --This script creates an Azure Storage account, displays the new storage account's primary access key, then renews (rotates) the key. ----## Sample script --[!code-powershell[main](../../../powershell_scripts/storage/rotate-storage-account-keys/rotate-storage-account-keys.ps1 "Rotate storage account keys")] --## Clean up deployment --Run the following command to remove the resource group, storage account, and all related resources. --```powershell -Remove-AzResourceGroup -Name rotatekeystestrg -``` --## Script explanation --This script uses the following commands to create the storage account and retrieve and rotate one of its access keys. Each item in the table links to command-specific documentation. --| Command | Notes | -||| -| [Get-AzLocation](/powershell/module/az.resources/get-azlocation) | Gets all locations and the supported resource providers for each location. | -| [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup) | Creates an Azure resource group. | -| [New-AzStorageAccount](/powershell/module/az.storage/new-azstorageaccount) | Creates a Storage account. | -| [Get-AzStorageAccountKey](/powershell/module/az.storage/get-azstorageaccountkey) | Gets the access keys for an Azure Storage account. | -| [New-AzStorageAccountKey](/powershell/module/az.storage/new-azstorageaccountkey) | Regenerates an access key for an Azure Storage account. | --## Next steps --For more information on the Azure PowerShell module, see [Azure PowerShell documentation](/powershell/azure/). --Additional storage PowerShell script samples can be found in [PowerShell samples for Azure Blob storage](../blobs/storage-samples-blobs-powershell.md). |
update-manager | Guidance Migration Azure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/guidance-migration-azure.md | As a first step in an MCM user's journey towards Azure Update Manager, you need to ### Prerequisites for Azure Update Manager and MCM co-existence -- Ensure that the Auto updates are disabled on the machine. For more information, see [Manage additional Windows Update- Windows Deployment](/windows/deployment/update/waas-wu-settings#configuring-automatic-updates-by-editing-the-registry).+- Ensure that automatic updates are disabled on the machine. For more information, see [Manage additional Windows Update settings - Configuring Automatic Updates by editing the registry](/windows/deployment/update/waas-wu-settings#configuring-automatic-updates-by-editing-the-registry). - Ensure that the registry path *HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU, NoAutoUpdate* is set to 1. + Ensure that the **NoAutoUpdate** registry key is set to 1 in the following registry path: `HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU` - Azure Update Manager can get updates from a WSUS server. For this to work, ensure that the WSUS server is configured as part of SCCM. |
virtual-desktop | Configure Rdp Shortpath | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/configure-rdp-shortpath.md | description: Learn how to configure RDP Shortpath for Azure Virtual Desktop, whi Previously updated : 06/18/2024 Last updated : 10/03/2024 # Configure RDP Shortpath for Azure Virtual Desktop There are four options for RDP Shortpath that provide flexibility for how you wa - **RDP Shortpath for public networks with ICE/STUN**: A *direct* UDP connection between a client device and session host using a public connection. ICE/STUN is used to discover available IP addresses and a dynamic port that can be used for a connection. The RDP Shortpath listener and an inbound port aren't required. The port range is configurable. -- **RDP Shortpath for public networks via TURN**: An *indirect* UDP connection between a client device and session host using a public connection where TURN relays traffic through an intermediate server between a client and session host. An example of when you use this option is if a connection uses Symmetric NAT. A dynamic port is used for a connection; the port range is configurable. For a list of Azure regions that TURN is available, see [supported Azure regions with TURN availability](rdp-shortpath.md?tabs=public-networks#turn-availability). The connection from the client device must also be within a supported location. The RDP Shortpath listener and an inbound port aren't required.+- **RDP Shortpath for public networks via TURN**: A *relayed* UDP connection between a client device and session host using a public connection where TURN relays traffic through an intermediate server between a client and session host. An example of when you use this option is if a connection uses Symmetric NAT. A dynamic port is used for a connection; the port range is configurable. For a list of Azure regions that TURN is available, see [supported Azure regions with TURN availability](rdp-shortpath.md?tabs=public-networks#turn-relay-availability). The connection from the client device must also be within a supported location. The RDP Shortpath listener and an inbound port aren't required. Which of the four options your client devices can use is also dependent on their network configuration. To learn more about how RDP Shortpath works, together with some example scenarios, see [RDP Shortpath](rdp-shortpath.md). Before you enable RDP Shortpath, you need: - [Windows App](/windows-app/get-started-connect-devices-desktops-apps?pivots=azure-virtual-desktop) on the following platforms: - Windows - macOS- - iOS and iPadOS + - iOS/iPadOS + - Android/Chrome OS (preview) - [Remote Desktop app](users/remote-desktop-clients-overview.md) on the following platforms: - Windows, version 1.2.3488 or later - macOS- - iOS and iPadOS - - Android (preview only) + - iOS/iPadOS + - Android/Chrome OS - For **RDP Shortpath for managed networks**, you need direct connectivity between the client and the session host. This means that the client can connect directly to the session host on port 3390 (default) and isn't blocked by firewalls (including the Windows Firewall) or a Network Security Group. Examples of a managed network are [ExpressRoute private peering](../expressroute/expressroute-circuit-peerings.md) or a site-to-site or point-to-site VPN (IPsec), such as [Azure VPN Gateway](../vpn-gateway/vpn-gateway-about-vpngateways.md). -- Internet access for both clients and session hosts. 
Session hosts require outbound UDP connectivity from your session hosts to the internet or connections to STUN and TURN servers. To reduce the number of ports required, you can [limit the port range used with STUN and TURN](configure-rdp-shortpath.md#limit-the-port-range-used-with-stun-and-turn).+- For **RDP Shortpath for public networks**, you need: ++ - Internet access for both clients and session hosts. Session hosts require outbound UDP connectivity to the internet or connections to STUN and TURN servers. To reduce the number of ports required, you can [limit the port range used with STUN and TURN](configure-rdp-shortpath.md#limit-the-port-range-used-with-stun-and-turn). ++ - Make sure session hosts and clients can connect to the STUN and TURN servers. You can find details of the IP subnets, ports, and protocols used by the STUN and TURN servers at [Network configuration](rdp-shortpath.md#network-configuration). - If you want to use Azure PowerShell locally, see [Use Azure CLI and Azure PowerShell with Azure Virtual Desktop](cli-powershell.md) to make sure you have the [Az.DesktopVirtualization](/powershell/module/az.desktopvirtualization) PowerShell module installed. Alternatively, use the [Azure Cloud Shell](../cloud-shell/overview.md). Here's how to configure RDP Shortpath in the host pool networking settings using | PowerShell Parameter | RDP Shortpath option | 'Default' meaning | |--|--|--|- | ManagedPrivateUdp | RDP Shortpath for managed networks | Enabled | - | DirectUdp | RDP Shortpath for managed networks with ICE/STUN | Enabled | - | PublicUdp | RDP Shortpath for public networks with ICE/STUN | Enabled | - | RelayUdp | RDP Shortpath for public networks via TURN | Enabled | + | `ManagedPrivateUdp` | RDP Shortpath for managed networks | Enabled | + | `DirectUdp` | RDP Shortpath for managed networks with ICE/STUN | Enabled | + | `PublicUdp` | RDP Shortpath for public networks with ICE/STUN | Enabled | + | `RelayUdp` | RDP Shortpath for public networks via TURN | Enabled | 3. Use the `Update-AzWvdHostPool` cmdlet with the following examples to configure RDP Shortpath. Here's how to configure RDP Shortpath in the host pool networking settings using -- ## Check that UDP is enabled on Windows client devices For Windows client devices, UDP is enabled by default. To check in the Windows registry to verify that UDP is enabled: You have access to TURN servers and your NAT type appears to be 'cone shaped'. Shortpath for public networks is very likely to work on this host. ``` -If your environment uses Symmetric NAT, then you can use an indirect connection with TURN. For more information you can use to configure firewalls and Network Security Groups, see [Network configurations for RDP Shortpath](rdp-shortpath.md?tabs=public-networks#network-configuration). +If your environment uses Symmetric NAT, then you can use a relayed connection with TURN. For more information about configuring firewalls and Network Security Groups, see [Network configurations for RDP Shortpath](rdp-shortpath.md?tabs=public-networks#network-configuration). ## Optional: Enable Teredo support The possible values are: - **2** - The connection is using RDP Shortpath for public networks directly using STUN. -- **4** - The connection is using RDP Shortpath for public networks indirectly using TURN.+- **4** - The connection is using RDP Shortpath for public networks, relayed using TURN. For any other value, the connection isn't using UDP and is connected using TCP instead. |
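Where a Network Security Group sits in front of the session hosts, the outbound STUN/TURN traffic described above has to be allowed explicitly. An illustrative sketch using `az network nsg rule create`, with the destination ranges and port taken from the network configuration tables in the companion RDP Shortpath article; the NSG name and priority are placeholders:

```bash
# Allow session hosts to reach the STUN/TURN infrastructure over UDP 3478.
az network nsg rule create \
  --resource-group <resource-group> \
  --nsg-name <session-host-nsg> \
  --name AllowStunTurnOutbound \
  --priority 200 \
  --direction Outbound \
  --access Allow \
  --protocol Udp \
  --source-address-prefixes VirtualNetwork \
  --destination-address-prefixes 20.202.0.0/16 51.5.0.0/16 \
  --destination-port-ranges 3478
```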
virtual-desktop | Rdp Shortpath | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/rdp-shortpath.md | -RDP Shortpath establishes a direct UDP-based transport between a local device Windows App or the Remote Desktop app on supported platforms and session host in Azure Virtual Desktop. +RDP Shortpath establishes a UDP-based transport between Windows App or the Remote Desktop app on a supported local device and the session host in Azure Virtual Desktop. By default, the Remote Desktop Protocol (RDP) begins a TCP-based reverse connect transport, then tries to establish a remote session using UDP. If the UDP connection succeeds, the TCP connection drops; otherwise, the TCP connection is used as a fallback connection mechanism. -By default, the Remote Desktop Protocol (RDP) tries to establish a remote session using UDP and uses a TCP-based reverse connect transport as a fallback connection mechanism. UDP-based transport offers better connection reliability and more consistent latency. TCP-based reverse connect transport provides the best compatibility with various networking configurations and has a high success rate for establishing RDP connections. +UDP-based transport offers better connection reliability and more consistent latency. TCP-based reverse connect transport provides the best compatibility with various networking configurations and has a high success rate for establishing RDP connections. RDP Shortpath can be used in two ways: -1. **Managed networks**, where direct connectivity is established between the client and the session host when using a private connection, such as a virtual private network (VPN). A connection using a managed network is established in one of the following ways: +1. **Managed networks**, where direct connectivity is established between the client and the session host when using a private connection, such as [Azure ExpressRoute](../expressroute/expressroute-introduction.md) or a site-to-site virtual private network (VPN). A connection using a managed network is established in one of the following ways: 1. A *direct* UDP connection between the client device and session host, where you need to enable the RDP Shortpath listener and allow an inbound port on each session host to accept connections. RDP Shortpath can be used in two ways: 1. A *direct* UDP connection using the Simple Traversal Underneath NAT (STUN) protocol between a client and session host. - 1. An *indirect* UDP connection using the Traversal Using Relay NAT (TURN) protocol with a relay between a client and session host. + 1. A *relayed* UDP connection using the Traversal Using Relay NAT (TURN) protocol between a client and session host. The transport used for RDP Shortpath is based on the [Universal Rate Control Protocol (URCP)](https://www.microsoft.com/research/publication/urcp-universal-rate-control-protocol-for-real-time-communication-applications/). URCP enhances UDP with active monitoring of the network conditions and provides fair and full link utilization. URCP operates at low delay and loss levels as needed. Using RDP Shortpath has the following key benefits: - Using URCP to enhance UDP achieves the best performance by dynamically learning network parameters and providing the protocol with a rate control mechanism. -- The removal of extra relay points reduces round-trip time, which improves connection reliability and user experience with latency-sensitive applications and input methods.+- Higher throughput. 
++- When using STUN, the removal of extra relay points reduces round-trip time, which improves connection reliability and the user experience with latency-sensitive applications and input methods. - In addition, for managed networks: If your users have both RDP Shortpath for managed network and public networks av # [Public networks](#tab/public-networks) -To provide the best chance of a UDP connection being successful when using a public connection, there are the *direct* and *indirect* connection types: +To provide the best chance of a UDP connection being successful when using a public connection, there are the *direct* and *relayed* connection types: - **Direct connection**: STUN is used to establish a direct UDP connection between a client and session host. To establish this connection, the client and session host must be able to connect to each other through a public IP address and negotiated port. However, most clients don't know their own public IP address as they sit behind a [Network Address Translation (NAT)](#network-address-translation-and-firewalls) gateway device. STUN is a protocol for the self-discovery of a public IP address from behind a NAT gateway device, enabling the client to determine its own public-facing IP address. - For a client to use STUN, its network must allow UDP traffic. Assuming both the client and session host can route to the other's discovered IP address and port directly, communication is established with direct UDP over the WebSocket protocol. If firewalls or other network devices block direct connections, an indirect UDP connection will be tried. + For a client to use STUN, its network must allow UDP traffic. Assuming both the client and session host can route to the other's discovered IP address and port directly, communication is established with direct UDP over the WebSocket protocol. If firewalls or other network devices block direct connections, a relayed UDP connection is tried. -- **Indirect connection**: TURN is used to establish an indirect connection, relaying traffic through an intermediate server between a client and session host when a direct connection isn't possible. TURN is an extension of STUN. Using TURN means the public IP address and port is known in advance, which can be allowed through firewalls and other network devices.+- **Relayed connection**: TURN is used to establish a connection, relaying traffic through an intermediate server between a client and session host when a direct connection isn't possible. TURN is an extension of STUN. Using TURN means the public IP address and port are known in advance, which can be allowed through firewalls and other network devices. - TURN typically authorizes access to the server via username/password and its preferred mode of operation is to use UDP sockets. If firewalls or other network devices block UDP traffic, the connection will fall back to a TCP-based reverse connect transport. + If firewalls or other network devices block UDP traffic, the connection will fall back to a TCP-based reverse connect transport. When a connection is being established, Interactive Connectivity Establishment (ICE) coordinates the management of STUN and TURN to optimize the likelihood of a connection being established, and ensure that precedence is given to preferred network communication protocols. 
Each RDP session uses a dynamically assigned UDP port from an ephemeral port ran The following diagram gives a high-level overview of the network connections when using RDP Shortpath for public networks where session hosts are joined to Microsoft Entra ID. ++#### TURN relay availability ++TURN relay is available in the following Azure regions: ++ :::column::: + - Australia Southeast + - Central India + - East US + - East US 2 + - France Central + - Japan West + - North Europe + :::column-end::: + :::column::: + - South Central US + - Southeast Asia + - UK South + - UK West + - West Europe + - West US + - West US 2 + :::column-end::: ++A TURN relay is selected based on the physical location of the client device. For example, if a client device is in the UK, the TURN relay in the UK South or UK West region is selected. If a client device is far from a TURN relay, the UDP connection might fall back to TCP. ### Network Address Translation and firewalls Because of IP packet modification, the recipient of the traffic will see the pub NAT is applicable to the Azure Virtual Networks where all session hosts reside. When a session host tries to reach the network address on the Internet, the NAT Gateway (either your own or default provided by Azure), or Azure Load Balancer performs the address translation. For more information about various types of Source Network Address Translation, see [Use Source Network Address Translation (SNAT) for outbound connections](../load-balancer/load-balancer-outbound-connections.md). -Most networks typically include firewalls that inspect traffic and block it based on rules. Most customers configure their firewalls to prevent incoming connections (that is, unsolicited packets from the Internet sent without a request). Firewalls employ different techniques to track data flow to distinguish between solicited and unsolicited traffic. In the context of TCP, the firewall tracks SYN and ACK packets, and the process is straightforward. UDP firewalls usually use heuristics based on packet addresses to associate traffic with UDP flows and allow or block it. --There are many different NAT implementations available. In most cases, NAT gateway and firewall are the functions of the same physical or virtual device. +Most networks typically include firewalls that inspect traffic and block it based on rules. Most customers configure their firewalls to prevent incoming connections (that is, unsolicited packets from the Internet sent without a request). Firewalls employ different techniques to track data flow to distinguish between solicited and unsolicited traffic. In the context of TCP, the firewall tracks SYN and ACK packets, and the process is straightforward. UDP firewalls usually use heuristics based on packet addresses to associate traffic with UDP flows and allow or block it. There are many different NAT implementations available. ### Connection sequence All connections begin by establishing a TCP-based [reverse connect transport](ne 1. When the client receives the list of candidates from the session host, the client also performs candidate gathering on its side. Then the client sends its candidate list to the session host. -1. After the session host and client exchange their candidate lists, both parties attempt to connect with each other using all the gathered candidates. This connection attempt is simultaneous on both sides. Many NAT gateways are configured to allow the incoming traffic to the socket as soon as the outbound data transfer initializes it. 
This behavior of NAT gateways is the reason the simultaneous connection is essential. If STUN fails because it's blocked, an indirect connection attempt is made using TURN. +1. After the session host and client exchange their candidate lists, both parties attempt to connect with each other using all the gathered candidates. This connection attempt is simultaneous on both sides. Many NAT gateways are configured to allow the incoming traffic to the socket as soon as the outbound data transfer initializes it. This behavior of NAT gateways is the reason the simultaneous connection is essential. If STUN fails because it's blocked, a relayed connection attempt is made using TURN. 1. After the initial packet exchange, the client and session host may establish one or more data flows. From these data flows, RDP chooses the fastest network path. The client then establishes a secure connection using TLS over reliable UDP with the session host and initiates RDP Shortpath transport. All connections begin by establishing a TCP-based [reverse connect transport](ne If your users have both RDP Shortpath for managed networks and public networks available to them, then the first-found algorithm will be used, meaning that the user will use whichever connection gets established first for that session. For more information, see [example scenario 4](#scenario-4). -> [!IMPORTANT] -> When using a TCP-based transport, outbound traffic from session host to client is through the Azure Virtual Desktop Gateway. With RDP Shortpath for public networks using STUN, outbound traffic is established directly between session host and client over the internet. This removes a hop which improves latency and end user experience. However, due to the changes in data flow between session host and client where the Gateway is no longer used, there will be standard [Azure egress network charges](https://azure.microsoft.com/pricing/details/bandwidth/) billed in addition per subscription for the internet bandwidth consumed. To learn more about estimating the bandwidth used by RDP, see [RDP bandwidth requirements](rdp-bandwidth.md). - ### Network configuration To support RDP Shortpath for public networks, you typically don't need any particular configuration. The session host and client will automatically discover the direct data flow if it's possible in your network configuration. However, every environment is unique, and some network configurations may negatively affect the rate of success of the direct connection. Follow the [recommendations](#general-recommendations) to increase the probability of a direct data flow. As RDP Shortpath uses UDP to establish a data flow, if a firewall on your network blocks UDP traffic, RDP Shortpath will fail and the connection will fall back to TCP-based reverse connect transport. Azure Virtual Desktop uses STUN servers provided by Azure Communication Services and Microsoft Teams. By the nature of the feature, outbound connectivity from the session hosts to the client is required. Unfortunately, you can't predict where your users are located in most cases. Therefore, we recommend allowing outbound UDP connectivity from your session hosts to the internet. To reduce the number of ports required, you can [limit the port range used by clients](configure-rdp-shortpath-limit-ports-public-networks.md) for the UDP flow. Use the following tables for reference when configuring firewalls for RDP Shortpath. 
-If your environment uses Symmetric NAT, which is the mapping of a single private source *IP:Port* to a unique public destination *IP:Port*, then you can use an indirect connection with TURN. This will be the case if you use Azure Firewall and Azure NAT Gateway. For more information about NAT with Azure virtual networks, see [Source Network Address Translation with virtual networks](../virtual-network/nat-gateway/nat-gateway-resource.md#source-network-address-translation). +If your environment uses Symmetric NAT, which is the mapping of a single private source *IP:Port* to a unique public destination *IP:Port*, then you can use a relayed connection with TURN. This will be the case if you use Azure Firewall and Azure NAT Gateway. For more information about NAT with Azure virtual networks, see [Source Network Address Translation with virtual networks](../virtual-network/nat-gateway/nat-gateway-resource.md#source-network-address-translation). -Where users have RDP Shortpath for both managed network and public networks is available to them, then the first algorithm found will be used. The user will use whichever connection gets established first for that session. For more information, see [Example scenarios](#example-scenarios). +We have some general recommendations for successful connections using RDP Shortpath for public networks. For more information, see [General recommendations](#general-recommendations). -#### TURN availability +Where users have RDP Shortpath for both managed networks and public networks available to them, the first-found algorithm is used. The user will use whichever connection gets established first for that session. For more information, see [Example scenarios](#example-scenarios). -TURN is available in the following Azure regions: +The following sections contain the source, destination, and protocol requirements for your session hosts and client devices that must be allowed for RDP Shortpath to work. - :::column::: - - Australia Southeast - - Central India - - East US - - East US 2 - - France Central - - Japan West - - North Europe - :::column-end::: - :::column::: - - South Central US - - Southeast Asia - - UK South - - UK West - - West Europe - - West US - - West US 2 - :::column-end::: ++> [!NOTE] +> For a relayed connection with TURN, the IP subnet `20.202.0.0/16` is shared with Azure Communication Services. However, Azure Virtual Desktop and Windows 365 will transition to `51.5.0.0/16`, which is dedicated exclusively to these services. We recommend you configure both ranges in your network environment now to ensure a seamless transition. +> +> If you want to wait to use the dedicated subnet, please follow the steps in [Configure host pool networking settings](configure-rdp-shortpath.md#configure-host-pool-networking-settings) and set **RDP Shortpath for public network (via TURN/relay)** to **Disabled**. Alternatively, you can disable UDP on the local device, but that will disable UDP for all connections. To disable UDP on the local device, follow the steps in [Check that UDP is enabled on Windows client devices](configure-rdp-shortpath.md#check-that-udp-is-enabled-on-windows-client-devices), but set **Turn Off UDP On Client** to **Enabled**. If you block the IP range `20.202.0.0/16` on your network and are using VPN applications, it might cause disconnection issues. #### Session host virtual network +The following table details the source, destination, and protocol requirements for RDP Shortpath on your session host virtual network. 
+ | Name | Source | Source Port | Destination | Destination Port | Protocol | Action | |||::||::|::|::|-| RDP Shortpath Server Endpoint | VM subnet | Any | Any | 1024-65535<br />(*default 49152-65535*) | UDP | Allow | -| STUN/TURN UDP | VM subnet | Any | 20.202.0.0/16 | 3478 | UDP | Allow | -| STUN/TURN TCP | VM subnet | Any | 20.202.0.0/16 | 443 | TCP | Allow | +| STUN direct connection | VM subnet | Any | Any | 1024-65535<br />(*default 49152-65535*) | UDP | Allow | +| STUN infrastructure/TURN | VM subnet | Any | `20.202.0.0/16` | 3478 | UDP | Allow | +| TURN relay | VM subnet | Any | `51.5.0.0/16` | 3478 | UDP | Allow | #### Client network +The following table details the source, destination, and protocol requirements for your client devices. + | Name | Source | Source Port | Destination | Destination Port | Protocol | Action | |||::||::|::|::|-| RDP Shortpath Server Endpoint | Client network | Any | Public IP addresses assigned to NAT Gateway or Azure Firewall (provided by the STUN endpoint) | 1024-65535<br />(*default 49152-65535*) | UDP | Allow | -| STUN/TURN UDP | Client network | Any | 20.202.0.0/16 | 3478 | UDP | Allow | -| STUN/TURN TCP | Client network | Any | 20.202.0.0/16 | 443 | TCP | Allow | +| STUN direct connection | Client network | Any | Public IP addresses assigned to NAT Gateway or Azure Firewall (provided by the STUN endpoint) | 1024-65535<br />(*default 49152-65535*) | UDP | Allow | +| STUN infrastructure/TURN relay | Client network | Any | `20.202.0.0/16` | 3478 | UDP | Allow | +| TURN relay | Client network | Any | `51.5.0.0/16` | 3478 | UDP | Allow | ### Teredo support A UDP connection can only be established between the client device and the session host. ### Scenario 2 -A firewall or NAT device is blocking a direct UDP connection, but an indirect UDP connection can be relayed using TURN between the client device and the session host over a public network (internet). Another direct connection, such as a VPN, isn't available. +A firewall or NAT device is blocking a direct UDP connection, but a UDP connection can be relayed using TURN between the client device and the session host over a public network (internet). Another direct connection, such as a VPN, isn't available. :::image type="content" source="media/rdp-shortpath/rdp-shortpath-scenario-2.png" alt-text="Diagram that shows RDP Shortpath for public networks uses TURN." border="false"::: In this example, UDP is blocked on the direct VPN connection and the ICE/STUN process establishes a connection over the public network. ### Scenario 6 -Both RDP Shortpath for public networks and managed networks are configured, however a UDP connection couldn't be established using direct VPN connection. A firewall or NAT device is also blocking a direct UDP connection using the public network (internet), but an indirect UDP connection can be relayed using TURN between the client device and the session host over a public network (internet). +Both RDP Shortpath for public networks and managed networks are configured; however, a UDP connection couldn't be established using a direct VPN connection. A firewall or NAT device is also blocking a direct UDP connection using the public network (internet), but a UDP connection can be relayed using TURN between the client device and the session host over a public network (internet). :::image type="content" source="media/rdp-shortpath/rdp-shortpath-scenario-6.png" alt-text="Diagram that shows UDP is blocked on the direct VPN connection and a direct connection using a public network also fails. 
TURN relays the connection over the public network." border="false"::: |
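As a quick aid for the note above about configuring both TURN relay IP ranges, here's a small Python sketch (an illustration, not part of any Azure tooling) that classifies destination endpoints, for example ones pulled from firewall logs, against the documented `20.202.0.0/16` and `51.5.0.0/16` subnets and UDP port 3478. The sample addresses are made up.

```python
from ipaddress import ip_address, ip_network

# The two TURN relay ranges named in the note above: the range shared with
# Azure Communication Services and the dedicated range being transitioned to.
TURN_RANGES = [ip_network("20.202.0.0/16"), ip_network("51.5.0.0/16")]
TURN_PORT = 3478  # UDP, per the firewall tables above

def is_turn_relay(ip: str, port: int) -> bool:
    """Return True if ip:port matches a documented TURN relay endpoint."""
    return port == TURN_PORT and any(ip_address(ip) in net for net in TURN_RANGES)

# Sample (made-up) destinations, e.g. pulled from firewall logs:
for dest in [("20.202.15.7", 3478), ("51.5.200.1", 3478), ("203.0.113.9", 3478)]:
    print(dest, "->", "TURN relay" if is_turn_relay(*dest) else "not a TURN relay")
```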
virtual-network | Public Ip Basic Upgrade Guidance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/public-ip-basic-upgrade-guidance.md | We recommend the following approach to upgrade to Standard SKU public IP address | Virtual Machine | Use scripts or manually detach and upgrade public IPs. For standalone virtual machines, you can use the [upgrade script](public-ip-upgrade-vm.md) or for virtual machines in an availability set use [this script](public-ip-upgrade-availability-set.md). | | Virtual Machine Scale Sets | [Replace basic SKU instance public IP addresses](/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-networking#public-ipv4-per-virtual-machine) with new standard SKU | | Load Balancer (Basic SKU) | New Load Balancer SKU required. Use the upgrade script [Upgrade Basic Load Balancer to Standard SKU](../../load-balancer/upgrade-basic-standard-with-powershell.md) to upgrade to Standard Load Balancer |- | VPN Gateway (using Basic IPs) |At this time, it's not necessary to upgrade. When an upgrade is necessary, we'll update this decision path with migration information and send out a service health alert. | + | VPN Gateway (using Basic IPs) | A migration path will be provided in the future. When this migration path is available, we'll update this decision path with migration information and send out a service health alert. | | ExpressRoute Gateway (using Basic IPs) | New ExpressRoute Gateway is required. Follow the [ExpressRoute Gateway migration guidance](../../expressroute/gateway-migration.md) for upgrading from Basic to Standard SKU. | | Application Gateway (v1 SKU) | New AppGW SKU required. Use this [migration script to migrate from v1 to v2](../../application-gateway/migrate-v1-v2.md). | |
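To find the resources that the upgrade decision path above applies to, a short inventory script can list the Basic SKU public IPs in a subscription. This sketch assumes the `azure-identity` and `azure-mgmt-network` Python packages and uses a placeholder subscription ID; it's a starting point for scoping the work, not the migration tooling referenced in the table.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

# Inventory Basic SKU public IPs that the upgrade decision path applies to.
subscription_id = "00000000-0000-0000-0000-000000000000"  # placeholder
client = NetworkManagementClient(DefaultAzureCredential(), subscription_id)

for pip in client.public_ip_addresses.list_all():
    sku = pip.sku.name if pip.sku else "Basic"  # older resources may omit the sku block
    if sku == "Basic":
        attached_to = pip.ip_configuration.id if pip.ip_configuration else "unattached"
        print(f"{pip.name}: {pip.ip_address} ({attached_to})")
```

The attached resource ID tells you which row of the decision table applies, for example a virtual machine NIC versus a load balancer frontend.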
virtual-wan | Virtual Wan Faq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-faq.md | For any reason, if the VPN connection becomes the primary medium for the virtual ### Does ExpressRoute support Equal-Cost Multi-Path (ECMP) routing in Virtual WAN? -When multiple ExpressRoute circuits are connected to a Virtual WAN hub, ECMP enables traffic from spoke virtual networks to on-premises over ExpressRoute to be distributed across all ExpressRoute circuits advertising the same on-premises routes. To enable ECMP for your Virtual WAN hub, please reach out to virtual-wan-ecmp@microsoft.com with your Virtual WAN hub resource ID. +When multiple ExpressRoute circuits are connected to a Virtual WAN hub, ECMP enables traffic from spoke virtual networks to on-premises over ExpressRoute to be distributed across all ExpressRoute circuits advertising the same on-premises routes. ECMP is currently not enabled by default for Virtual WAN hubs. ### <a name="expressroute-bow-tie"></a>When two hubs (hub 1 and 2) are connected and there's an ExpressRoute circuit connected as a bow-tie to both the hubs, what is the path for a VNet connected to hub 1 to reach a VNet connected in hub 2? |
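As background for the ECMP answer above: ECMP implementations typically hash a flow's 5-tuple onto one of the equal-cost paths, so packets of a single flow stay on one circuit while different flows spread across all circuits. The toy Python sketch below illustrates that placement idea with hypothetical circuit names; it is not Virtual WAN's actual algorithm.

```python
import hashlib

# Toy model of ECMP flow placement (not Virtual WAN's actual implementation):
# hashing a flow's 5-tuple onto one of the equal-cost circuits keeps each flow
# on a single circuit while spreading distinct flows across all of them.
circuits = ["er-circuit-1", "er-circuit-2"]  # hypothetical ExpressRoute circuits

def pick_circuit(src_ip: str, dst_ip: str, src_port: int, dst_port: int,
                 proto: str = "TCP") -> str:
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    bucket = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    return circuits[bucket % len(circuits)]

print(pick_circuit("10.1.0.4", "192.168.10.5", 50123, 443))  # always the same circuit
print(pick_circuit("10.1.0.5", "192.168.10.5", 50124, 443))  # may land on the other one
```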
vpn-gateway | Gateway Sku Consolidation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/gateway-sku-consolidation.md | Yes, you can deploy AZ SKUs in all regions. If a region doesn't currently suppor ### Can I migrate my Gen 1 gateway to Gen 2 gateway? -* As part of the Basic IP to Standard IP migration, the gateways will be upgraded to Gen2. This upgrade will occur automatically when you initiate the migration. +* For gateways using Basic IP, you will need to migrate your gateway to use Standard IP when the migration tool becomes available. As part of the Basic IP to Standard IP migration, the gateways will be upgraded to Gen2 with no further action needed. * For gateways already using Standard IP, we will migrate them to Gen2 separately before Sep 30, 2026. This will be done seamlessly during regular updates, with no downtime involved. ### Will there be downtime during migrating my Non-AZ gateways? |