Updates from: 10/04/2024 01:07:50
Service Microsoft Docs article Related commit history on GitHub Change details
api-management Api Management Gateways Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-gateways-overview.md
For details about monitoring options, see [Observability in Azure API Management
| [OpenTelemetry Collector](how-to-deploy-self-hosted-gateway-kubernetes-opentelemetry.md) | ❌ | ❌ | ❌ | ✔️ | ❌ | | [Request logs in Azure Monitor and Log Analytics](api-management-howto-use-azure-monitor.md#resource-logs) | ✔️ | ✔️ | ❌ | ❌<sup>3</sup> | ❌ | | [Local metrics and logs](how-to-configure-local-metrics-logs.md) | ❌ | ❌ | ❌ | ✔️ | ❌ |
-| [Request tracing](api-management-howto-api-inspector.md) | ✔️ | ❌<sup>4</sup> | ✔️ | ✔️ | ❌ |
+| [Request tracing](api-management-howto-api-inspector.md) | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
<sup>1</sup> The v2 tiers support Azure Monitor-based analytics.<br/> <sup>2</sup> Gateway uses [Azure Application Insight's built-in memory buffer](/azure/azure-monitor/app/telemetry-channels#built-in-telemetry-channels) and does not provide delivery guarantees.<br/> <sup>3</sup> The self-hosted gateway currently doesn't send resource logs (diagnostic logs) to Azure Monitor. Optionally [send metrics](how-to-configure-cloud-metrics-logs.md) to Azure Monitor, or [configure and persist logs locally](how-to-configure-local-metrics-logs.md) where the self-hosted gateway is deployed.<br/>
-<sup>4</sup> Tracing is currently unavailable in the v2 tiers.
### Authentication and authorization
api-management Workspaces Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/workspaces-overview.md
The following constraints currently apply to workspace gateways:
* Workspace gateways don't support the API Management service's credential manager * Workspace gateways support only internal cache; external cache isn't supported * Workspace gateways don't support synthetic GraphQL APIs and WebSocket APIs
-* Workspace gateways don't support APIs created from Azure resources such as Azure OpenAI Service, App Service, Function Apps, and so on
+* Workspace gateways don't support creating APIs directly from Azure resources such as Azure OpenAI Service, App Service, Function Apps, and so on
* Request metrics can't be split by workspace in Azure Monitor; all workspace metrics are aggregated at the service level * Azure Monitor logs are aggregated at the service level; workspace-level logs aren't available * Workspace gateways don't support CA certificates
app-service Configure Common https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-common.md
description: Learn to configure common settings for an App Service app. App sett
keywords: azure app service, web app, app settings, environment variables ms.assetid: 9af8a367-7d39-4399-9941-b80cbc5f39a0 Previously updated : 06/04/2024 Last updated : 10/03/2024 ms.devlang: azurecli
Here, you can configure some common settings for the app. Some settings require
- **Always On**: Keeps the app loaded even when there's no traffic. When **Always On** isn't turned on (default), the app is unloaded after 20 minutes without any incoming requests. The unloaded app can cause high latency for new requests because of its warm-up time. When **Always On** is turned on, the front-end load balancer sends a GET request to the application root every five minutes. The continuous ping prevents the app from being unloaded. Always On is required for continuous WebJobs or for WebJobs that are triggered using a CRON expression.
- - **ARR affinity**: In a multi-instance deployment, ensure that the client is routed to the same instance for the life of the session. You can set this option to **Off** for stateless applications.
+ - **Session affinity**: In a multi-instance deployment, ensure that the client is routed to the same instance for the life of the session. You can set this option to **Off** for stateless applications.
+ - **Session affinity proxy**: Turn on this setting if your app is behind a reverse proxy (like Azure Application Gateway or Azure Front Door) and you're using the default host name. The domain for the session affinity cookie aligns with the forwarded host name from the reverse proxy.
- **HTTPS Only**: When enabled, all HTTP traffic is redirected to HTTPS. - **Minimum TLS version**: Select the minimum TLS encryption version required by your app. - **Debugging**: Enable remote debugging for [ASP.NET](troubleshoot-dotnet-visual-studio.md#remotedebug), [ASP.NET Core](/visualstudio/debugger/remote-debugging-azure), or [Node.js](configure-language-nodejs.md#debug-remotely) apps. This option turns off automatically after 48 hours.
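Most of these general settings can also be scripted. The following is a rough Azure CLI sketch, not an exact portal equivalent; the resource group and app names are placeholders, and flag availability can vary by CLI version:

```azurecli
# Keep the app loaded at all times (Always On) and require a minimum TLS version.
az webapp config set --resource-group <resource-group> --name <app-name> --always-on true --min-tls-version 1.2

# Redirect all HTTP traffic to HTTPS and control session (ARR) affinity.
az webapp update --resource-group <resource-group> --name <app-name> --https-only true --client-affinity-enabled false
```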
app-service Deploy Staging Slots https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-staging-slots.md
Here are some common swap errors:
- Local cache initialization might fail when the app content exceeds the local disk quota specified for the local cache. For more information, see [Local cache overview](overview-local-cache.md). -- During a site update operation, the following error may occur "_The slot cannot be changed because its configuration settings have been prepared for swap_". This can occur if either [swap with preview (multi-phase swap)](#swap-with-preview-multi-phase-swap) phase 1 has been completed but phase 2 has not yet been performed, or a swap has failed. There are two ways resolve the issue:
+- During a site update operation, the following error may occur: "_The slot cannot be changed because its configuration settings have been prepared for swap_". This can occur if phase 1 of a [swap with preview (multi-phase swap)](#swap-with-preview-multi-phase-swap) has been completed but phase 2 hasn't been performed yet, or if a swap has failed. There are two ways to resolve this issue:
 1. Cancel the swap operation, which resets the site back to the old state.
 1. Complete the swap operation, which updates the site to the desired new state.
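If you prefer to script the fix, a minimal Azure CLI sketch of both resolutions uses `az webapp deployment slot swap`; the resource group, app, and slot names are placeholders:

```azurecli
# Cancel the pending swap and reset the slot to its old state.
az webapp deployment slot swap --resource-group <resource-group> --name <app-name> --slot staging --action reset

# Or complete phase 2 of the swap-with-preview to move to the new state.
az webapp deployment slot swap --resource-group <resource-group> --name <app-name> --slot staging --action swap
```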
app-service Overview Tls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-tls.md
Transport Layer Security (TLS) is a widely adopted security protocol designed to
For incoming requests to your web app, App Service supports TLS versions 1.0, 1.1, 1.2, and 1.3.
+### Set Minimum TLS Version
+Follow these steps to change the Minimum TLS version of your App Service resource:
+1. Browse to your app in the [Azure portal](https://portal.azure.com/).
+1. In the left menu, select **Configuration**, and then select the **General settings** tab.
+1. For **Minimum Inbound TLS Version**, select your desired version from the dropdown.
+1. Select **Save** to save the changes.
+
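You can also set the minimum TLS version outside the portal. As a hedged sketch (the resource group and app names are placeholders), the Azure CLI equivalent uses `az webapp config set`:

```azurecli
# Require TLS 1.2 or later for incoming requests to the app.
az webapp config set --resource-group <resource-group> --name <app-name> --min-tls-version 1.2
```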
+### Minimum TLS Version with Azure Policy
+
+You can use Azure Policy to audit your resources for compliance with a minimum TLS version. Refer to the [App Service apps should use the latest TLS version policy definition](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff0e6e85b-9b9f-4a4b-b67b-f730d42f1b0b) and change the values to your desired minimum TLS version. For similar policy definitions for other App Service resources, see [List of built-in policy definitions - Azure Policy for App Service](../governance/policy/samples/built-in-policies.md#app-service).
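To assign such a policy, a minimal Azure CLI sketch might look like the following; the assignment name and scope are placeholders, and the definition ID is the built-in "latest TLS version" definition referenced above:

```azurecli
# Assign the built-in "App Service apps should use the latest TLS version" policy definition.
az policy assignment create \
    --name require-latest-tls \
    --scope /subscriptions/<subscription-id>/resourceGroups/<resource-group> \
    --policy f0e6e85b-9b9f-4a4b-b67b-f730d42f1b0b
```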
 ### Minimum TLS Version and SCM Minimum TLS Version App Service also allows you to set the minimum TLS version for incoming requests to your web app and to the SCM site. By default, the minimum TLS version for incoming requests to your web app and to the SCM site is set to 1.2 in both the portal and the API.
application-gateway Configuration Http Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configuration-http-settings.md
The [Chromium browser](https://www.chromium.org/Home) [v80 update](https://chrom
To support this change, starting February 17 2020, Application Gateway (all the SKU types) will inject another cookie called *ApplicationGatewayAffinityCORS* in addition to the existing *ApplicationGatewayAffinity* cookie. The *ApplicationGatewayAffinityCORS* cookie has two more attributes added to it (*"SameSite=None; Secure"*) so that sticky sessions are maintained even for cross-origin requests.
-Note that the default affinity cookie name is *ApplicationGatewayAffinity* and you can change it. If you deploy multiple application gateway instances in the same network topology, you must set unique cookie names for each instance. If you're using a custom affinity cookie name, an additional cookie is added with `CORS` as suffix. For example: *CustomCookieNameCORS*.
+Note that the default affinity cookie name is *ApplicationGatewayAffinity*, and you can change it. If your network topology deploys multiple application gateways in line, you must set a unique cookie name for each resource. If you're using a custom affinity cookie name, an additional cookie is added with `CORS` as suffix. For example: *CustomCookieNameCORS*.
> [!NOTE] > If the attribute *SameSite=None* is set, it is mandatory that the cookie also contains the *Secure* flag, and must be sent over HTTPS. If session affinity is required over CORS, you must migrate your workload to HTTPS.
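To set a custom affinity cookie name per gateway (so each resource gets a unique name, as described above), a hedged Azure CLI sketch uses `az network application-gateway http-settings update`; the resource, gateway, and HTTP-settings names are placeholders:

```azurecli
# Give this gateway's backend HTTP settings a unique affinity cookie name.
az network application-gateway http-settings update \
    --resource-group <resource-group> \
    --gateway-name <app-gateway-name> \
    --name <http-settings-name> \
    --cookie-based-affinity Enabled \
    --affinity-cookie-name CustomCookieName1
```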
azure-cache-for-redis Cache High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-high-availability.md
Also, Azure Cache for Redis provides more replica nodes in the Premium tier. A [
## Zone redundancy
-Applicable tiers: **Standard (preview)**, **Premium (preview)**, **Enterprise**, **Enterprise Flash**
+Applicable tiers: **Standard (preview)**, **Premium**, **Enterprise**, **Enterprise Flash**
Recommended for: **High availability**, **Disaster recovery - intra region**
-Azure Cache for Redis supports zone redundant configurations in the Standard (preview), Premium (preview), and Enterprise tiers. A [zone redundant cache](cache-how-to-zone-redundancy.md) can place its nodes across different [Azure Availability Zones](../reliability/availability-zones-overview.md) in the same region. It eliminates data center or Availability Zone outage as a single point of failure and increases the overall availability of your cache.
+Azure Cache for Redis supports zone redundant configurations in the Standard (preview), Premium, and Enterprise tiers. A [zone redundant cache](cache-how-to-zone-redundancy.md) can place its nodes across different [Azure Availability Zones](../reliability/availability-zones-overview.md) in the same region. It eliminates data center or Availability Zone outage as a single point of failure and increases the overall availability of your cache.
+
+> [!NOTE]
+> On Premium caches, only _automatic zone allocation_ is in public preview. Manual selection of availability zones is unchanged and remains generally available (GA).
If a cache is configured to use two or more zones as described earlier in the article, the cache nodes are created in different zones. When a zone goes down, cache nodes in other zones are available to keep the cache functioning as usual.
azure-cache-for-redis Cache How To Zone Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-zone-redundancy.md
Last updated 08/05/2024
# Enable zone redundancy for Azure Cache for Redis
-In this article, you'll learn how to configure a zone-redundant Azure Cache instance using the Azure portal.
+In this article, you learn how to configure a zone-redundant Azure Cache instance using the Azure portal.
-Azure Cache for Redis Standard (Preview), Premium (Premium), and Enterprise tiers provide built-in redundancy by hosting each cache on two dedicated virtual machines (VMs). Even though these VMs are located in separate [Azure fault and update domains](/azure/virtual-machines/availability) and highly available, they're susceptible to data center-level failures. Azure Cache for Redis also supports zone redundancy in its Standard (preview), Premium (preview) and Enterprise tiers. A zone-redundant cache runs on VMs spread across multiple [Availability Zones](../reliability/availability-zones-overview.md). It provides higher resilience and availability.
+Azure Cache for Redis Standard (preview), Premium, and Enterprise tiers provide built-in redundancy by hosting each cache on two dedicated virtual machines (VMs). Even though these VMs are located in separate [Azure fault and update domains](/azure/virtual-machines/availability) and highly available, they're susceptible to data center-level failures. Azure Cache for Redis also supports zone redundancy in its Standard (preview), Premium, and Enterprise tiers. A zone-redundant cache runs on VMs spread across multiple [Availability Zones](../reliability/availability-zones-overview.md). It provides higher resilience and availability.
## Prerequisites
To create a cache, follow these steps:
| **Subscription** | Select your subscription. | The subscription under which to create this new Azure Cache for Redis instance. | | **Resource group** | Select a resource group, or select **Create new** and enter a new resource group name. | Name for the resource group in which to create your cache and other resources. By putting all your app resources in one resource group, you can easily manage or delete them together. | | **DNS name** | Enter a globally unique name. | The cache name must be a string between 1 and 63 characters that contains only numbers, letters, or hyphens. The name must start and end with a number or letter, and can't contain consecutive hyphens. Your cache instance's *host name* will be *\<DNS name>.redis.cache.windows.net*. |
- | **Location** | Select a location. | Select a [region](https://azure.microsoft.com/regions/) near other services that will use your cache. |
+ | **Location** | Select a location. | Select a [region](https://azure.microsoft.com/regions/) near other services that use your cache. |
| **Cache type** | Select a [Premium or Enterprise tier](https://azure.microsoft.com/pricing/details/cache/) cache. | The pricing tier determines the size, performance, and features that are available for the cache. For more information, see [Azure Cache for Redis Overview](cache-overview.md). | 1. For Standard or Premium tier cache, select **Advanced** in the Resource menu. To enable zone resiliency with automatic zone allocation, select **(Preview) Select zones automatically**.
- :::image type="content" source="media/cache-how-to-zone-redundancy/cache-availability-zone.png" alt-text="Screenshot showing the Advanced tab with a red box around Availability zones.:":::
+ > [!NOTE]
+ > On Premium caches, only _automatic zone selection_ is in public preview. Manual selection of availability zones is unchanged and remains generally available (GA).
+
+ :::image type="content" source="media/cache-how-to-zone-redundancy/cache-availability-zone.png" alt-text="Screenshot showing the Advanced tab with a red box around Availability zones.":::
For an Enterprise tier cache, select **Advanced** in the Resource menu. For **Zone redundancy**, select **Zone redundant (recommended)**.
To create a cache, follow these steps:
> Automatic Zone Allocation cannot be modified once enabled for a cache. > [!IMPORTANT]
- > Enabling Automatic Zone Allocation is currently NOT supported for Geo Replicated caches or caches with VNET injection.
+ > Enabling Automatic Zone Allocation (preview) is currently NOT supported for Geo-replicated caches or caches with VNET injection.
-1. Availability zones can be selected manually for Premium tier caches. The count of availability zones must always be less than or equal to the Replica count for the cache.
+1. Availability zones can be selected manually for Premium tier caches. The number of availability zones must always be less than or equal to the total number of nodes for the cache.
:::image type="content" source="media/cache-how-to-zone-redundancy/cache-premium-replica-count.png" alt-text="Screenshot showing Availability zones set to one and Replica count set to three.":::
Zone redundancy is available only in Azure regions that have Availability Zones.
### Why can't I select all three zones during cache create?
-A Premium cache has one primary and one replica node by default. To configure zone redundancy for more than two Availability Zones, you need to add [more replicas](cache-how-to-multi-replicas.md) to the cache you're creating.
+A Premium cache has one primary and one replica node by default. To configure zone redundancy for more than two Availability Zones, you need to add [more replicas](cache-how-to-multi-replicas.md) to the cache you're creating. The total number of availability zones must not exceed the combined count of nodes within the cache, including both the primary and replica nodes.
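As an illustration, the following is a hedged Azure CLI sketch of creating a Premium cache that spans three zones, which requires two replicas so that the node count covers all three zones; the resource group, cache name, and region are placeholders:

```azurecli
# Create a Premium P1 cache with two replicas (three nodes total) spread across zones 1, 2, and 3.
az redis create \
    --resource-group <resource-group> \
    --name <cache-name> \
    --location eastus2 \
    --sku Premium \
    --vm-size p1 \
    --replicas-per-master 2 \
    --zones 1 2 3
```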
### Can I update my existing Standard or Premium cache to use zone redundancy?
-Yes, updating an existing Standard or Premium cache to use zone redundancy is supported. You can enable it by selecting **Allocate Zones automatically** from the **Advanced settings** on the Resource menu. You cannot disable zone redundancy once you have enabled it.
+Yes, updating an existing Standard or Premium cache to use zone redundancy is supported. You can enable it by selecting **Allocate Zones automatically** from the **Advanced settings** on the Resource menu. You can't disable zone redundancy once you enable it.
> [!IMPORTANT] > Automatic Zone Allocation cannot be modified once enabled for a cache.
Yes, updating an existing Standard or Premium cache to use zone redundancy is su
### How much does it cost to replicate my data across Azure Availability Zones?
-When your cache uses zone redundancy configured with multiple Availability Zones, data is replicated from the primary cache node in one zone to the other node(s) in another zone(s). The data transfer charge is the network egress cost of data moving across the selected Availability Zones. For more information, see [Bandwidth Pricing Details](https://azure.microsoft.com/pricing/details/bandwidth/).
+When your cache uses zone redundancy configured with multiple Availability Zones, data is replicated from the primary cache node in one zone to the replica nodes in the other zones. The data transfer charge is the network egress cost of data moving across the selected Availability Zones. For more information, see [Bandwidth Pricing Details](https://azure.microsoft.com/pricing/details/bandwidth/).
## Next Steps
azure-cache-for-redis Cache Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-whats-new.md
Last updated 08/05/2024
# What's New in Azure Cache for Redis
+## September 2024
+
+### Enterprise tier E1 SKU GA
+The E1 SKU, which is intended primarily for dev/test scenarios, is now generally available. It runs on smaller burstable virtual machines. As a result, E1 offers variable performance depending on how much CPU is consumed. Unlike other Enterprise offerings, it isn't possible to scale E1 out. However, it is still possible to scale up to a larger SKU. The E1 SKU also does not support active geo-replication.
+ ## August 2024 ### Availability zones
You are able to manually trigger an upgrade to the latest version of Redis softw
The E1 SKU is intended primarily for dev/test scenarios. It runs on smaller [burstable virtual machines](/azure/virtual-machines/b-series-cpu-credit-model/b-series-cpu-credit-model). As a result, E1 offers variable performance depending on how much CPU is consumed. Unlike other Enterprise offerings, it isn't possible to scale E1 out. However, it is still possible to scale up to a larger SKU. The E1 SKU also does not support [active geo-replication](cache-how-to-active-geo-replication.md).
-For more information, see
### .NET Output cache and HybridCache
azure-functions Azfd0011 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/errors-diagnostics/diagnostic-events/azfd0011.md
Last updated 01/24/2024
# AZFD0011: The FUNCTIONS_WORKER_RUNTIME setting is required
-This event occurs when a function app doesn't have the `FUNCTIONS_WORKER_RUNTIME` application setting, which is required.
+This event occurs when a function app doesn't have a value for the `FUNCTIONS_WORKER_RUNTIME` application setting, which is required.
| | Value | |-|-|
This event occurs when a function app doesn't have the `FUNCTIONS_WORKER_RUNTIME
## Event description
-The `FUNCTIONS_WORKER_RUNTIME` application setting indicates the language or language stack on which the function app runs, such as `python`. For more information on valid values, see the [`FUNCTIONS_WORKER_RUNTIME`](../../functions-app-settings.md#functions_worker_runtime) reference.
+The `FUNCTIONS_WORKER_RUNTIME` application setting indicates the language or language stack on which the function app runs, such as `python`. For more information on valid values, see the [`FUNCTIONS_WORKER_RUNTIME`][fwr] reference.
-While not currently required, you should always specify `FUNCTIONS_WORKER_RUNTIME` for your function apps. When you don't have this setting and the Functions host can't determine the correct language or language stack, you might see exceptions or unexpected behaviors.
+You should always specify a valid `FUNCTIONS_WORKER_RUNTIME` for your function apps. When you don't have this setting and the Functions host can't determine the correct language or language stack, you might see performance degradations, exceptions, or unexpected behaviors. To ensure that your application operates as intended, you should explicitly set it in all of your existing function apps and deployment scripts.
-Because `FUNCTIONS_WORKER_RUNTIME` is likely to become a required setting, you should explicitly set it in all of your existing function apps and deployment scripts to prevent any downtime in the future.
+The value of `FUNCTIONS_WORKER_RUNTIME` should align with the language stack used to create the deployed application payload. If these do not align, you may see the [`AZFD0013`](./azfd0013.md) event.
## How to resolve the event
-In a production application, add `FUNCTIONS_WORKER_RUNTIME` to the [application settings](../../functions-how-to-use-azure-function-app-settings.md#settings).
+In a production application, set [`FUNCTIONS_WORKER_RUNTIME` to a valid value][fwr] in the [application settings](../../functions-how-to-use-azure-function-app-settings.md#settings). The value should align with the language stack used to create the application payload.
-When running locally in Azure Functions Core Tools, also add `FUNCTIONS_WORKER_RUNTIME` to the [local.settings.json file](../../functions-develop-local.md#local-settings-file).
+When running locally in Azure Functions Core Tools, also set [`FUNCTIONS_WORKER_RUNTIME` to a valid value][fwr] in the [local.settings.json file](../../functions-develop-local.md#local-settings-file). The value should align with the language stack used in the local project.
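For instance, here's a minimal sketch of setting the value for a deployed Python function app with the Azure CLI; the resource group and app names are placeholders, and the value should match your app's language stack:

```azurecli
# Explicitly set the worker runtime so the Functions host loads the correct language worker.
az functionapp config appsettings set \
    --resource-group <resource-group> \
    --name <function-app-name> \
    --settings FUNCTIONS_WORKER_RUNTIME=python
```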
## When to suppress the event This event shouldn't be suppressed.+
+[fwr]: ../../functions-app-settings.md#functions_worker_runtime
azure-government Azure Services In Fedramp Auditscope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compliance/azure-services-in-fedramp-auditscope.md
Title: Azure and other Microsoft cloud services compliance scope
-description: This article tracks FedRAMP and DoD compliance scope for Azure, Dynamics 365, Microsoft 365, and Power Platform cloud services across Azure, Azure Government, and Azure Government Secret cloud environments.
+description: FedRAMP & DoD compliance scope for Azure, Dynamics 365, Microsoft 365, and Power Platform for Azure, Azure Government, & Azure Government Secret.
-+
azure-government Compliance Tic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compliance/compliance-tic.md
Title: Trusted Internet Connections guidance
-description: Learn about Trusted Internet Connections (TIC) guidance for Azure IaaS and PaaS services
+description: Learn how you can use Trusted Internet Connections (TIC) guidance for your Azure IaaS and PaaS services
-+ recommendations: false
azure-government Documentation Accelerate Compliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compliance/documentation-accelerate-compliance.md
Title: How to accelerate your journey to FedRAMP compliance with Azure
+ Title: Accelerate your journey to FedRAMP compliance with Azure
description: Provides an overview of resources for Development, Automation, and Advisory partners to help them accelerate their path to ATO with Azure. cloud: gov -+ Last updated 05/30/2023- # FedRAMP compliance program overview
azure-government Documentation Government Csp List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-csp-list.md
Title: Azure Government authorized reseller list
-description: Comprehensive list of Azure Government cloud solution providers, resellers, and distributors.
+description: Comprehensive list of all the authorized Azure Government cloud solution providers, resellers, and distributors.
-+ Last updated 10/31/2023 # Azure Government authorized reseller list
-Since the launch of [Azure Government services in the Cloud Solution Provider (CSP) program)](https://azure.microsoft.com/blog/announcing-microsoft-azure-government-services-in-the-cloud-solution-provider-program/), we've worked with the partner community to bring them the benefits of this channel. Our goal is to enable the partner community to resell Azure Government and help them grow their business while providing customers with cloud services they need.
+From the launch of [Azure Government services in the Cloud Solution Provider (CSP) program](https://azure.microsoft.com/blog/announcing-microsoft-azure-government-services-in-the-cloud-solution-provider-program/), Microsoft has worked with the partner community to bring them the benefits of this channel. The goal is to enable the partner community to resell Azure Government and help them grow their business while providing customers with the cloud services they need.
-Below you can find a list of all the authorized Cloud Solution Providers (CSPs), Agreement for Online Services for Government (AOS-G), and Licensing Solution Providers (LSP) that can transact Azure Government. Updates to this list will be made as new partners are onboarded.
+The following tables contain lists of all the authorized Cloud Solution Providers (CSPs), Agreement for Online Services for Government (AOS-G), and Licensing Solution Providers (LSP) that can transact Azure Government. We update this list as new partners are onboarded.
## Approved direct CSPs
communication-services Calling Sdk Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/calling-sdk-features.md
The following table represents the set of supported browsers, which are currentl
| Platform | Chrome | Safari | Edge | Firefox | Webview | Electron | | | | | | - | - | - | | Android | ✔️ | ❌ | ✔️ | ❌ | ✔️ | ❌ |
-| iOS | ✔️ | ✔️ | ❌ | ❌ | ✔️ | ❌ |
+| iOS | ✔️ | ✔️ | ✔️ | ❌ | ✔️ | ❌ |
| macOS | ✔️ | ✔️ | ✔️ | ✔️ | ❌ | ✔️ | | Windows | ✔️ | ❌ | ✔️ | ✔️ | ❌ | ✔️ | | Ubuntu/Linux | ✔️ | ❌ | ❌ | ❌ | ❌ | ❌ |
The following table represents the set of supported browsers, which are currentl
- Firefox support is in public preview. - Currently, the calling SDK only supports Android System WebView on Android, iOS WebView(WKWebView) in public preview. Other types of embedded browsers or WebView on other OS platforms aren't officially supported, for example, GeckoView, Chromium Embedded Framework (CEF), Microsoft Edge WebView2. Running JavaScript Calling SDK on these platforms isn't actively tested, it might or might not work. - [An iOS app on Safari can't enumerate/select mic and speaker devices](../known-issues.md#enumerating-devices-isnt-possible-in-safari-when-the-application-runs-on-ios-or-ipados) (for example, Bluetooth). This issue is a limitation of iOS, and the operating system controls default device selection.
+- iOS Edge browser support is available in public preview in WebJS SDK version [1.30.1-beta.1](https://github.com/Azure/Communication/blob/master/releasenotes/acs-javascript-calling-library-release-notes.md#1301-beta1-2024-10-01) and higher.
## Calling client - browser security model
communication-services Spotlight https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/calling-sdk/spotlight.md
zone_pivot_groups: acs-plat-web-ios-android-windows
# Spotlight states
-In this article, you learn how to implement Microsoft Teams spotlight capability with Azure Communication Services Calling SDKs. This capability allows users in the call or meeting to pin and unpin videos for everyone. The maximum limit of pinned videos is seven.
+
+This article describes how to implement Microsoft Teams spotlight capability with Azure Communication Services Calling SDKs. This capability enables users in the call or meeting to pin and unpin videos for everyone. The maximum limit of pinned videos is seven.
Since the video stream resolution of a participant is increased when spotlighted, it should be noted that the settings done on [Video Constraints](../../concepts/voice-video-calling/video-constraints.md) also apply to spotlight.
+## Overview
+
+Spotlighting a video is like pinning it for everyone in the meeting. The organizer, co-organizer, or presenter can choose up to seven people's video feeds (including their own) to highlight for everyone else.
+
+You can spotlight in two ways: spotlight your own video, or spotlight someone else's video (up to seven people in total).
+
+When a user spotlights or pins someone in the meeting, this increases the height or width of the participant grid depending on the orientation.
+
+Remind participants that they can't spotlight a video if their view is set to Large gallery or Together mode.
+ ## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
Since the video stream resolution of a participant is increased when spotlighted
## Support
-The following tables define support for Spotlight in Azure Communication Services.
-### Identities & call types
+The following tables define support for spotlight in Azure Communication Services.
+
+## Identities and call types
+ The following table shows support for call and identity types.
-|Identities | Teams meeting | Room | 1:1 call | Group call | 1:1 Teams interop call | Group Teams interop call |
-|--|||-|||--|
-|Communication Services user | ✔️ | ✔️ | | ✔️ | | ✔️ |
-|Microsoft 365 user | ✔️ | ✔️ | | ✔️ | | ✔️ |
+| Identities | Teams meeting | Room | 1:1 call | Group call | 1:1 Teams interop call | Group Teams interop call |
+| | | | | | | |
+|Communication Services user | ✔️ | ✔️ | | ✔️ | | ✔️ |
+|Microsoft 365 user | ✔️ | ✔️ | | ✔️ | | ✔️ |
-### Operations
-The following table shows support for individual APIs in Calling SDK to individual identity types.
+## Operations
-|Operations | Communication Services user | Microsoft 365 user |
-|--||-|
-| startSpotlight | ✔️ [1] | ✔️ [1] |
-| stopSpotlight | ✔️ | ✔️ |
-| stopAllSpotlight | ✔️ [1] | ✔️ [1] |
-| getSpotlightedParticipants | ✔️ | ✔️ |
+Azure Communication Services or Microsoft 365 users can call spotlight operations based on role type and conversation type.
-[1] In Teams meeting scenarios, these APIs are only available to users with role organizer, co-organizer, or presenter.
+The following table shows support for individual operations in Calling SDK to individual identity types.
-### SDKs
-The following table shows support for Spotlight feature in individual Azure Communication Services SDKs.
+**In one-to-one calls, group calls, and meeting scenarios, the following operations are supported for both Communication Services and Microsoft 365 users**
-| Platforms | Web | Web UI | iOS | iOS UI | Android | Android UI | Windows |
-||--|--|--|--|-|--||
-|Is Supported | ✔️ | ✔️ | ✔️ | | ✔️ | | ✔️ |
+| Operations | Communication Services user | Microsoft 365 user |
+| | | |
+| `startSpotlight` | ✔️ [1] | ✔️ [1] |
+| `stopSpotlight` | ✔️ | ✔️ |
+| `stopAllSpotlight` | ✔️ [1] | ✔️ [1] |
+| `getSpotlightedParticipants` | ✔️ | ✔️ |
+| `StartSpotlightAsync` | ✔️ [1] | ✔️ [1] |
+| `StopSpotlightAsync` | ✔️ [1] | ✔️ [1] |
+| `StopAllSpotlightAsync` | ✔️ [1] | ✔️ [1] |
+| `SpotlightedParticipants` | ✔️ [1] | ✔️ [1] |
+| `MaxSupported` | ✔️ [1] | ✔️ [1] |
+
+[1] In Teams meeting scenarios, these operations are only available to users with role organizer, co-organizer, or presenter.
+
+## SDKs
+
+The following table shows support for spotlight feature in individual Azure Communication Services SDKs.
+
+| Platforms | Web | Web UI | iOS | iOS UI | Android | Android UI | Windows |
+| | | | | | | | |
+| Is Supported | ✔️ | ✔️ | ✔️ | | ✔️ | | ✔️ |
::: zone pivot="platform-web" [!INCLUDE [Spotlight Client-side JavaScript](./includes/spotlight/spotlight-web.md)]
The following table shows support for Spotlight feature in individual Azure Comm
[!INCLUDE [Spotlight Client-side Windows](./includes/spotlight/spotlight-windows.md)] ::: zone-end
+## Troubleshooting
+
+| Code | Subcode | Result Category | Reason | Resolution |
+| | | | | |
+| 400 | 45900 | ExpectedError | All provided participant IDs are already spotlighted. | Only participants who aren't currently spotlighted can be spotlighted. |
+| 400 | 45902 | ExpectedError | The maximum number of participants is already spotlighted. | Only seven participants can be in the spotlight state at any given time. |
+| 403 | 45903 | ExpectedError | Only participants with the roles of organizer, co-organizer, or presenter can initiate a spotlight. | Ensure the participant calling the `startSpotlight` operation has the role of organizer, co-organizer, or presenter. |
+ ## Next steps+ - [Learn how to manage calls](./manage-calls.md) - [Learn how to manage video](./manage-video.md) - [Learn how to record calls](./record-calls.md)
communication-services Together Mode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/calling-sdk/together-mode.md
Title: Together Mode
-description: Make your Microsoft Teams virtual meetings feel more personal with Teams together mode.
+description: Make your Microsoft Teams virtual meetings feel more personal with Teams Together Mode.
Last updated 07/17/2024
- # Together Mode
-In this article, you learn how to implement Microsoft Teams Together Mode with Azure Communication Services Calling SDKs. This feature enhances virtual meetings and calls, making them feel more personal. By creating a unified view that places everyone in a shared background, participants can connect seamlessly and collaborate effectively.
+
+This article describes how to implement Microsoft Teams Together Mode with Azure Communication Services Calling SDKs. Together Mode enhances virtual meetings and calls, making them feel more personal. By creating a unified view that places everyone in a shared background, participants can connect seamlessly and collaborate effectively.
[!INCLUDE [Public Preview Disclaimer](../../includes/public-preview-include-document.md)]
The following tables define support for Together Mode in Azure Communication Ser
### Identities and call types The following table shows support for call and identity types.
-|Identities | Teams meeting | Room | 1:1 call | Group call | 1:1 Teams interop call | Group Teams interop call |
-|--|||-|||--|
+|Identities | Teams meeting | Room | 1:1 call | Group call | 1:1 Teams interop call | Group Teams interop call |
+| | | | | | | |
|Communication Services user | ✔️ | | | ✔️ | | ✔️ | |Microsoft 365 user | ✔️ | | | ✔️ | | ✔️ | ### Operations+ The following table shows support for individual APIs in Calling SDK to individual identity types.
-|Operations | Communication Services user | Microsoft 365 user |
-|--||-|
-| Start together mode stream | | ✔️ [1] |
-| Get together mode stream | ✔️ | ✔️ |
+|Operations | Communication Services user | Microsoft 365 user |
+| | | |
+| Start Together Mode stream | | ✔️ [1] |
+| Get Together Mode stream | ✔️ | ✔️ |
| Get scene size | ✔️ | ✔️ | | Get seating map | ✔️ | ✔️ | | Change scene | | |
The following table shows support for individual APIs in Calling SDK to individu
[1] Start Together Mode can only be called by a Microsoft 365 user with the role of organizer, co-organizer, or presenter. ### SDKs+ The following table shows support for Together Mode feature in individual Azure Communication Services SDKs. | Platforms | Web | Web UI | iOS | iOS UI | Android | Android UI | Windows |
-||--|--|--|--|-|--||
-|Is Supported | ✔️ | | | | | | |
+| | | | | | | | |
+| Is Supported | ✔️ | | | | | | |
-## Together Mode
[!INCLUDE [Together Mode Client-side JavaScript](./includes/together-mode/together-mode-web.md)]
communication-services Get Started Teams Auto Attendant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/get-started-teams-auto-attendant.md
Last updated 07/14/2023
+zone_pivot_groups: acs-plat-web-ios-android-windows
# Quickstart: Join your calling app to a Teams Auto Attendant
-In this quickstart you are going to learn how to start a call from Azure Communication Services user to Teams Auto Attendant. You are going to achieve it with the following steps:
-
-1. Enable federation of Azure Communication Services resource with Teams Tenant.
-2. Select or create Teams Auto Attendant via Teams Admin Center.
-3. Get email address of Auto Attendant via Teams Admin Center.
-4. Get Object ID of the Auto Attendant via Graph API.
-5. Start a call with Azure Communication Services Calling SDK.
-
-If you'd like to skip ahead to the end, you can download this quickstart as a sample on [GitHub](https://github.com/Azure-Samples/communication-services-javascript-quickstarts/tree/main/voice-apps-calling).
--
-## Create or select Teams Auto Attendant
-
-Teams Auto Attendant is system that provides an automated call handling system for incoming calls. It serves as a virtual receptionist, allowing callers to be automatically routed to the appropriate person or department without the need for a human operator. You can select existing or create new Auto Attendant via [Teams Admin Center](https://aka.ms/teamsadmincenter).
-
-Learn more about how to create Auto Attendant using Teams Admin Center [here](/microsoftteams/create-a-phone-system-auto-attendant?tabs=general-info).
-
-## Find Object ID for Auto Attendant
-
-After Auto Attendant is created, we need to find correlated Object ID to use it later for calls. Object ID is connected to Resource Account that was attached to Auto Attendant - open [Resource Accounts tab](https://admin.teams.microsoft.com/company-wide-settings/resource-accounts) in Teams Admin and find email of account.
-All required information for Resource Account can be found in [Microsoft Graph Explorer](https://developer.microsoft.com/en-us/graph/graph-explorer) using this email in the search.
-
-```console
-https://graph.microsoft.com/v1.0/users/lab-test2-cq-@contoso.com
-```
-In results we'll are able to find "ID" field
-```json
- "userPrincipalName": "lab-test2-cq@contoso.com",
- "id": "31a011c2-2672-4dd0-b6f9-9334ef4999db"
-```
## Clean up resources
communication-services Get Started Teams Call Queue https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/get-started-teams-call-queue.md
Last updated 07/14/2023
+zone_pivot_groups: acs-plat-web-ios-android-windows
# Quickstart: Join your calling app to a Teams call queue
-In this quickstart you are going to learn how to start a call from Azure Communication Services user to Teams Call Queue. You are going to achieve it with the following steps:
-1. Enable federation of Azure Communication Services resource with Teams Tenant.
-2. Select or create Teams Call Queue via Teams Admin Center.
-3. Get email address of Call Queue via Teams Admin Center.
-4. Get Object ID of the Call Queue via Graph API.
-5. Start a call with Azure Communication Services Calling SDK.
-If you'd like to skip ahead to the end, you can download this quickstart as a sample on [GitHub](https://github.com/Azure-Samples/communication-services-javascript-quickstarts/tree/main/voice-apps-calling).
-
-## Create or select Teams Call Queue
-
-Teams Call Queue is a feature in Microsoft Teams that efficiently distributes incoming calls among a group of designated users or agents. It's useful for customer support or call center scenarios. Calls are placed in a queue and assigned to the next available agent based on a predetermined routing method. Agents receive notifications and can handle calls using Teams' call controls. The feature offers reporting and analytics for performance tracking. It simplifies call handling, ensures a consistent customer experience, and optimizes agent productivity. You can select existing or create new Call Queue via [Teams Admin Center](https://aka.ms/teamsadmincenter).
-
-Learn more about how to create Call Queue using Teams Admin Center [here](/microsoftteams/create-a-phone-system-call-queue?tabs=general-info).
-
-## Find Object ID for Call Queue
-
-After Call queue is created, we need to find correlated Object ID to use it later for calls. Object ID is connected to Resource Account that was attached to call queue - open [Resource Accounts tab](https://admin.teams.microsoft.com/company-wide-settings/resource-accounts) in Teams Admin and find email.
-All required information for Resource Account can be found in [Microsoft Graph Explorer](https://developer.microsoft.com/en-us/graph/graph-explorer) using this email in the search.
-
-```console
-https://graph.microsoft.com/v1.0/users/lab-test2-cq-@contoso.com
-```
-
-In results we'll are able to find "ID" field
-
-```json
- "userPrincipalName": "lab-test2-cq@contoso.com",
- "id": "31a011c2-2672-4dd0-b6f9-9334ef4999db"
-```
- ## Clean up resources
data-factory Connector Snowflake https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-snowflake.md
To perform a side-by-side upgrade, complete the following steps:
To perform an in-place upgrade, you need to edit the existing linked service payload. 1. Update the type from 'Snowflake' to 'SnowflakeV2'.
-2. Modify the linked service payload from its legacy format to the new pattern. You can either fill in each field from the user interface after changing the type mentioned above, or update the payload directly through the JSON Editor. Refer to the [Linked service properties](#linked-service-properties) section in this article for the supported connection properties. The following examples show the differences in payload for the legacy and new Snowflake connectors:
-
- **Legacy Snowflake connector JSON payload:**
- ```json
- {
-ΓÇ» ΓÇ» "name": "Snowflake1",
-ΓÇ» ΓÇ» "type": "Microsoft.DataFactory/factories/linkedservices",
- ΓÇ» ΓÇ» "properties": {
-ΓÇ» ΓÇ» ΓÇ» ΓÇ» "annotations": [],
-ΓÇ» ΓÇ» ΓÇ» ΓÇ» "type": "Snowflake",
-ΓÇ» ΓÇ» ΓÇ» ΓÇ» "typeProperties": {
-ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» "authenticationType": "Basic",
-ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» "connectionString": "jdbc:snowflake://<fake_account>.snowflakecomputing.com/?user=FAKE_USER&db=FAKE_DB&warehouse=FAKE_DW&schema=PUBLIC",
-ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» "encryptedCredential": "<placeholder>"
- ΓÇ» ΓÇ» ΓÇ» ΓÇ» },
-ΓÇ» ΓÇ» ΓÇ» ΓÇ» "connectVia": {
-ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» "referenceName": "AzureIntegrationRuntime",
-ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» "type": "IntegrationRuntimeReference"
-ΓÇ» ΓÇ» ΓÇ» ΓÇ» }
-ΓÇ» ΓÇ» }
- }
- ```
-
- **New Snowflake connector JSON payload:**
- ```json
- {
-ΓÇ» ΓÇ» "name": "Snowflake2",
-ΓÇ» ΓÇ» "type": "Microsoft.DataFactory/factories/linkedservices",
- ΓÇ» ΓÇ» "properties": {
-ΓÇ» ΓÇ» ΓÇ» ΓÇ» "parameters": {
- ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» "schema": {
-ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» "type": "string",
-ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» "defaultValue": "PUBLIC"
-ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» }
-ΓÇ» ΓÇ» ΓÇ» ΓÇ» },
- ΓÇ» ΓÇ» ΓÇ» ΓÇ» "annotations": [],
-ΓÇ» ΓÇ» ΓÇ» ΓÇ» "type": "SnowflakeV2",
-ΓÇ» ΓÇ» ΓÇ» ΓÇ» "typeProperties": {
-ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» "authenticationType": "Basic",
-ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» "accountIdentifier": "<FAKE_Account",
-ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» "user": "FAKE_USER",
-ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» "database": "FAKE_DB",
- ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» "warehouse": "FAKE_DW",
-ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» "encryptedCredential": "<placeholder>"
-ΓÇ» ΓÇ» ΓÇ» ΓÇ» },
-ΓÇ» ΓÇ» ΓÇ» ΓÇ» "connectVia": {
-ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» "referenceName": "AutoResolveIntegrationRuntime",
-ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» "type": "IntegrationRuntimeReference"
- ΓÇ» ΓÇ» ΓÇ» ΓÇ» }
-ΓÇ» ΓÇ» }
- }
- ```
-
-1. Update dataset to use the new linked service. You can either create a new dataset based on the newly created linked service, or update an existing dataset's type property from _SnowflakeTable_ to _SnowflakeV2Table_.
+1. Modify the linked service payload from its legacy format to the new pattern. You can either fill in each field from the user interface after changing the type mentioned above, or update the payload directly through the JSON Editor. Refer to the [Linked service properties](#linked-service-properties) section in this article for the supported connection properties. The following examples show the differences in payload for the legacy and new Snowflake connectors:
+
+ **Legacy Snowflake connector JSON payload:**
+ ```json
+ {
+ΓÇ» ΓÇ» "name": "Snowflake1",
+ΓÇ» ΓÇ» "type": "Microsoft.DataFactory/factories/linkedservices",
+ ΓÇ» ΓÇ» "properties": {
+ΓÇ» ΓÇ» ΓÇ» ΓÇ» "annotations": [],
+ΓÇ» ΓÇ» ΓÇ» ΓÇ» "type": "Snowflake",
+ΓÇ» ΓÇ» ΓÇ» ΓÇ» "typeProperties": {
+ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» "authenticationType": "Basic",
+ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» "connectionString": "jdbc:snowflake://<fake_account>.snowflakecomputing.com/?user=FAKE_USER&db=FAKE_DB&warehouse=FAKE_DW&schema=PUBLIC",
+ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» "encryptedCredential": "<your_encrypted_credential_value>"
+ ΓÇ» ΓÇ» ΓÇ» ΓÇ» },
+ΓÇ» ΓÇ» ΓÇ» ΓÇ» "connectVia": {
+ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» "referenceName": "AzureIntegrationRuntime",
+ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» "type": "IntegrationRuntimeReference"
+ΓÇ» ΓÇ» ΓÇ» ΓÇ» }
+ΓÇ» ΓÇ» }
+ }
+ ```
+
+ **New Snowflake connector JSON payload:**
+ ```json
+ {
+ΓÇ» ΓÇ» "name": "Snowflake2",
+ΓÇ» ΓÇ» "type": "Microsoft.DataFactory/factories/linkedservices",
+ ΓÇ» ΓÇ» "properties": {
+ΓÇ» ΓÇ» ΓÇ» ΓÇ» "parameters": {
+ ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» "schema": {
+ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» "type": "string",
+ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» "defaultValue": "PUBLIC"
+ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» }
+ΓÇ» ΓÇ» ΓÇ» ΓÇ» },
+ ΓÇ» ΓÇ» ΓÇ» ΓÇ» "annotations": [],
+ΓÇ» ΓÇ» ΓÇ» ΓÇ» "type": "SnowflakeV2",
+ΓÇ» ΓÇ» ΓÇ» ΓÇ» "typeProperties": {
+ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» "authenticationType": "Basic",
+ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» "accountIdentifier": "<FAKE_Account",
+ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» "user": "FAKE_USER",
+ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» "database": "FAKE_DB",
+ ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» "warehouse": "FAKE_DW",
+ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» "encryptedCredential": "<placeholder>"
+ΓÇ» ΓÇ» ΓÇ» ΓÇ» },
+ΓÇ» ΓÇ» ΓÇ» ΓÇ» "connectVia": {
+ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» "referenceName": "AutoResolveIntegrationRuntime",
+ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» ΓÇ» "type": "IntegrationRuntimeReference"
+ ΓÇ» ΓÇ» ΓÇ» ΓÇ» }
+ΓÇ» ΓÇ» }
+ }
+ ```
+
+ 1. Update the dataset to use the new linked service. You can either create a new dataset based on the newly created linked service, or update an existing dataset's type property from _SnowflakeTable_ to _SnowflakeV2Table_.
## Differences between Snowflake and Snowflake (legacy)
dev-box Concept Dev Box Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/concept-dev-box-architecture.md
Microsoft Dev Box uses the *hosted on-behalf* architecture, which means that the
Microsoft Dev Box manages the capacity and in-region availability in the Microsoft Dev Box subscriptions. Microsoft Dev Box determines the Azure region to host your dev boxes based on the network connection you select when creating a dev box pool.
-Microsoft Dev Box encrypts the disk by default to protect your data. You don't need to enable BitLocker, and doing so can prevent you from accessing your dev box.
+To protect your data, Microsoft Dev Box encrypts the disk by default using a platform-managed key. You don't need to enable BitLocker, and doing so can prevent you from accessing your dev box.
For more information about data storage and protection in Azure services see: [Azure customer data protection](/azure/security/fundamentals/protection-customer-data).
digital-twins How To Integrate Azure Signalr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-integrate-azure-signalr.md
-
-# Mandatory fields.
Title: Integrate with Azure SignalR Service-
-description: Learn how to stream Azure Digital Twins telemetry to clients using Azure SignalR
-- Previously updated : 08/21/2024---
-# Optional fields. Don't forget to remove # if you need a field.
-#
-#
-#
--
-# Integrate Azure Digital Twins with Azure SignalR Service
-
-In this article, you'll learn how to integrate Azure Digital Twins with [Azure SignalR Service](../azure-signalr/signalr-overview.md).
-
-The solution described in this article allows you to push digital twin telemetry data to connected clients, such as a single webpage or a mobile application. As a result, clients are updated with real-time metrics and status from IoT devices, without the need to poll the server or submit new HTTP requests for updates.
-
-## Prerequisites
-
-Here are the prerequisites you should complete before proceeding:
-
-* Before integrating your solution with Azure SignalR Service in this article, you should complete the Azure Digital Twins [Connect an end-to-end solution](tutorial-end-to-end.md), because this how-to article builds on top of it. The tutorial walks you through setting up an Azure Digital Twins instance that works with a virtual IoT device to trigger digital twin updates. This how-to article will connect those updates to a sample web app using Azure SignalR Service.
-
-* You'll need the following values from the tutorial:
- - Event Grid topic
- - Resource group
- - App service/function app name
-
-* You'll need [Node.js](https://nodejs.org/) installed on your machine.
-
-Be sure to sign in to the [Azure portal](https://portal.azure.com/) with your Azure account, as you'll need to use it in this guide.
-
-### Download the sample applications
-
-First, download the required sample apps. You'll need both of the following samples:
-* [Azure Digital Twins end-to-end samples](/samples/azure-samples/digital-twins-samples/digital-twins-samples/): This sample contains an *AdtSampleApp* that holds two Azure functions for moving data around an Azure Digital Twins instance (you can learn about this scenario in more detail in [Connect an end-to-end solution](tutorial-end-to-end.md)). It also contains a *DeviceSimulator* sample application that simulates an IoT device, generating a new temperature value every second.
- - If you haven't already downloaded the sample as part of the tutorial in [Prerequisites](#prerequisites), [navigate to the sample](/samples/azure-samples/digital-twins-samples/digital-twins-samples/) and select the **Browse code** button underneath the title. Doing so will take you to the GitHub repo for the samples, which you can download as a .zip by selecting the **Code** button and **Download ZIP**.
-
- :::image type="content" source="media/includes/download-repo-zip.png" alt-text="Screenshot of the digital-twins-samples repo on GitHub and the steps for downloading it as a zip." lightbox="media/includes/download-repo-zip.png":::
-
- This button will download a copy of the sample repo in your machine, as *digital-twins-samples-main.zip*. Unzip the folder.
-* [SignalR integration web app sample](/samples/azure-samples/digitaltwins-signalr-webapp-sample/digital-twins-samples/): This sample React web app will consume Azure Digital Twins telemetry data from an Azure SignalR Service.
- - Navigate to the sample link and use the same download process to download a copy of the sample to your machine, as *digitaltwins-signalr-webapp-sample-main.zip*. Unzip the folder.
-
-## Solution architecture
-
-You'll be attaching Azure SignalR Service to Azure Digital Twins through the path below. Sections A, B, and C in the diagram are taken from the architecture diagram of the [end-to-end tutorial prerequisite](tutorial-end-to-end.md). In this how-to article, you'll build section D on the existing architecture, which includes two new Azure functions that communicate with SignalR and client apps.
--
-## Create Azure SignalR instance
-
-Next, create an Azure SignalR instance to use in this article by following the instructions in [Create an Azure SignalR Service instance](../azure-signalr/signalr-quickstart-azure-functions-csharp.md#create-an-azure-signalr-service-instance) (for now, only complete the steps in this section).
-
-Leave the browser window open to the Azure portal, as you'll use it again in the next section.
-
-## Publish and configure the Azure Functions app
-
-In this section, you'll set up two Azure functions:
-* *negotiate* - A HTTP trigger function. It uses the *SignalRConnectionInfo* input binding to generate and return valid connection information.
-* *broadcast* - An [Event Grid](../event-grid/overview.md) trigger function. It receives Azure Digital Twins telemetry data through the event grid, and uses the output binding of the SignalR instance you created in the previous step to broadcast the message to all connected client applications.
-
-Start Visual Studio or another code editor of your choice, and open the code solution in the *digital-twins-samples-main\ADTSampleApp* folder. Then do the following steps to create the functions:
-
-1. In the *SampleFunctionsApp* project, create a new C# class called *SignalRFunctions.cs*.
-
-1. Replace the contents of the class file with the following code:
-
- :::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/signalRFunction.cs":::
-
-1. In Visual Studio's **Package Manager Console** window, or any command window on your machine, navigate to the folder *digital-twins-samples-main\AdtSampleApp\SampleFunctionsApp*, and run the following command to install the `SignalRService` NuGet package to the project:
- ```cmd
- dotnet add package Microsoft.Azure.WebJobs.Extensions.SignalRService --version 1.14.0
- ```
-
- Running this command should resolve any dependency issues in the class.
-
-1. Publish the function to Azure, using your preferred method.
-
- For instructions on how to publish the function using **Visual Studio**, see [Develop Azure Functions using Visual Studio](../azure-functions/functions-develop-vs.md#publish-to-azure). For instructions on how to publish the function using **Visual Studio Code**, see [Create a C# function in Azure using Visual Studio Code](../azure-functions/create-first-function-vs-code-csharp.md?tabs=in-process#publish-the-project-to-azure). For instructions on how to publish the function using the **Azure CLI**, see [Create a C# function in Azure from the command line](../azure-functions/create-first-function-cli-csharp.md?tabs=azure-cli%2Cin-process#deploy-the-function-project-to-azure).
-
-### Configure the function
-
-Next, configure the function to communicate with your Azure SignalR instance. You'll start by gathering the SignalR instance's connection string, and then add it to the functions app's settings.
-
-1. Go to the [Azure portal](https://portal.azure.com/) and search for the name of your SignalR instance in the search bar at the top of the portal. Select the instance to open it.
-1. Select **Keys** from the instance menu to view the connection strings for the SignalR service instance.
-1. Select the **Copy** icon to copy the **Primary CONNECTION STRING**.
-
- :::image type="content" source="media/how-to-integrate-azure-signalr/signalr-keys.png" alt-text="Screenshot of the Azure portal that shows the Keys page for the SignalR instance. The connection string is being copied." lightbox="media/how-to-integrate-azure-signalr/signalr-keys.png":::
-
-1. Finally, add your Azure SignalR connection string to the function's app settings, using the following Azure CLI command. Also, replace the placeholders with your resource group and app service/function app name from the [tutorial prerequisite](how-to-integrate-azure-signalr.md#prerequisites). The command can be run in [Azure Cloud Shell](https://shell.azure.com), or locally if you have the [Azure CLI installed on your machine](/cli/azure/install-azure-cli):
-
- ```azurecli-interactive
- az functionapp config appsettings set --resource-group <your-resource-group> --name <your-function-app-name> --settings "AzureSignalRConnectionString=<your-Azure-SignalR-ConnectionString>"
- ```
-
- The output of this command prints all the app settings set up for your Azure function. Look for `AzureSignalRConnectionString` at the bottom of the list to verify it's been added.
-
- :::image type="content" source="media/how-to-integrate-azure-signalr/output-app-setting.png" alt-text="Screenshot of the output in a command window, showing a list item called 'AzureSignalRConnectionString'.":::
-
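As an optional alternative to copying the connection string from the portal, you could fetch it and set the app setting in one pass with the CLI. This is only a sketch: it assumes the `az signalr key list` output exposes a `primaryConnectionString` property, and the resource names are placeholders.

```azurecli-interactive
# Fetch the SignalR primary connection string (assumes the output includes primaryConnectionString)
signalrConnectionString=$(az signalr key list --name <your-signalr-name> --resource-group <your-resource-group> --query primaryConnectionString --output tsv)

# Store it in the function app's settings under the name the SignalR binding expects
az functionapp config appsettings set --resource-group <your-resource-group> --name <your-function-app-name> --settings "AzureSignalRConnectionString=$signalrConnectionString"
```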
-## Connect the function to Event Grid
-
-Next, subscribe the *broadcast* Azure function to the Event Grid topic you created during the [tutorial prerequisite](how-to-integrate-azure-signalr.md#prerequisites). This action will allow telemetry data to flow from the thermostat67 twin through the Event Grid topic and to the function. From here, the function can broadcast the data to all the clients.
-
-To broadcast the data, you'll create an Event subscription from your Event Grid topic to your *broadcast* Azure function as an endpoint.
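The portal steps that follow create this subscription interactively. If you'd rather script it, a rough Azure CLI equivalent looks like the following; the subscription ID, resource group, topic name, and function app name are placeholders you'd substitute.

```azurecli-interactive
# Subscribe the broadcast function to the Event Grid topic (resource IDs are placeholders)
az eventgrid event-subscription create \
  --name <your-subscription-name> \
  --source-resource-id "/subscriptions/<subscription-id>/resourceGroups/<your-resource-group>/providers/Microsoft.EventGrid/topics/<your-topic-name>" \
  --endpoint-type azurefunction \
  --endpoint "/subscriptions/<subscription-id>/resourceGroups/<your-resource-group>/providers/Microsoft.Web/sites/<your-function-app-name>/functions/broadcast"
```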
-
-In the [Azure portal](https://portal.azure.com/), navigate to your Event Grid topic by searching for its name in the top search bar. Select **+ Event Subscription**.
-On the **Create Event Subscription** page, fill in the fields as follows (fields filled by default aren't mentioned):
-* **EVENT SUBSCRIPTION DETAILS** > **Name**: Give a name to your event subscription.
-* **ENDPOINT DETAILS** > **Endpoint Type**: Select **Azure Function** from the menu options.
-* **ENDPOINT DETAILS** > **Endpoint**: Select the **Select an endpoint** link, which will open a **Select Azure Function** window:
- - Fill in your **Subscription**, **Resource group**, **Function app**, and **Function** (**broadcast**). Some of these fields may autopopulate after selecting the subscription.
- - Select **Confirm Selection**.
-Back on the **Create Event Subscription** page, select **Create**.
-
-At this point, you should see two event subscriptions in the **Event Grid Topic** page.
-## Configure and run the web app
-
-In this section, you'll see the result in action. First, configure the sample client web app to connect to the Azure SignalR flow you've set up. Next, you'll start up the simulated device sample app that sends device telemetry data through your Azure Digital Twins instance. After that, you'll view the sample web app to see the simulated device data updating in real time.
-
-### Configure the sample client web app
-
-Next, you'll configure the sample client web app. Start by gathering the HTTP endpoint URL of the *negotiate* function, and then use it to configure the app code on your machine.
-
-1. Go to the Azure portal's [Function apps](https://portal.azure.com/#blade/HubsExtension/BrowseResource/resourceType/Microsoft.Web%2Fsites/kind/functionapp) page and select your function app from the list. In the app menu, select **Functions** and choose the **negotiate** function.
-
- :::image type="content" source="media/how-to-integrate-azure-signalr/functions-negotiate.png" alt-text="Screenshot of the Azure portal function apps, with 'Functions' highlighted in the menu and 'negotiate' highlighted in the list of functions.":::
-
-1. Select **Get function URL** and copy the value up through **/api** (don't include the last **/negotiate?**). You'll use this value in the next step.
-
- :::image type="content" source="media/how-to-integrate-azure-signalr/get-function-url.png" alt-text="Screenshot of the Azure portal showing the 'negotiate' function with the 'Get function URL' button and the function URL highlighted.":::
-
-1. Using Visual Studio or any code editor of your choice, open the unzipped _**digitaltwins-signalr-webapp-sample-main**_ folder that you downloaded in the [Download the sample applications](#download-the-sample-applications) section.
-
-1. Open the *src/App.js* file, and replace the function URL in `HubConnectionBuilder` with the HTTP endpoint URL of the **negotiate** function that you saved in the previous step:
-
- ```javascript
- const hubConnection = new HubConnectionBuilder()
- .withUrl('<Function-URL>')
- .build();
- ```
-1. In Visual Studio's **Developer command prompt** or any command window on your machine, navigate to the *digitaltwins-signalr-webapp-sample-main\src* folder. Run the following command to install the dependent node packages:
-
- ```cmd
- npm install
- ```
-
-Next, set permissions in your function app in the Azure portal:
-1. In the Azure portal's [Function apps](https://portal.azure.com/#blade/HubsExtension/BrowseResource/resourceType/Microsoft.Web%2Fsites/kind/functionapp) page, select your function app instance.
-
-1. Scroll down in the instance menu and select **CORS**. On the CORS page, add `http://localhost:3000` as an allowed origin by entering it into the empty box. Check the box for **Enable Access-Control-Allow-Credentials** and select **Save**.
-
- :::image type="content" source="media/how-to-integrate-azure-signalr/cors-setting-azure-function.png" alt-text="Screenshot of the Azure portal showing the CORS Setting in Azure Function.":::
-
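If you prefer the CLI for the allowed-origin part of this step (the **Enable Access-Control-Allow-Credentials** checkbox still needs to be set in the portal as described above), a rough equivalent is:

```azurecli-interactive
# Allow the local React dev server to call the negotiate function
az functionapp cors add --resource-group <your-resource-group> --name <your-function-app-name> --allowed-origins http://localhost:3000
```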
-### Run the device simulator
-
-During the end-to-end tutorial prerequisite, you [configured the device simulator](tutorial-end-to-end.md#configure-and-run-the-simulation) to send data through an IoT Hub and to your Azure Digital Twins instance.
-
-Now, start the simulator project located in *digital-twins-samples-main\DeviceSimulator\DeviceSimulator.sln*. If you're using Visual Studio, you can open the project and then run it with the start button in the toolbar.
-A console window will open and display simulated device temperature telemetry messages. These messages are being sent through your Azure Digital Twins instance, where they're then picked up by the Azure functions and SignalR.
-
-You don't need to do anything else in this console, but leave it running while you complete the next step.
-
-### See the results
-
-To see the results in action, start the SignalR integration web app sample. You can do so from any console window at the *digitaltwins-signalr-webapp-sample-main\src* location, by running this command:
-
-```cmd
-npm start
-```
-
-Running this command will open a browser window running the sample app, which displays a visual temperature gauge. Once the app is running, you should see the temperature telemetry values from the device simulator, propagated through Azure Digital Twins, reflected by the web app in real time.
-## Clean up resources
-
-If you no longer need the resources created in this article, follow these steps to delete them.
-
-Using the Azure Cloud Shell or local Azure CLI, you can delete all Azure resources in a resource group with the [az group delete](/cli/azure/group#az-group-delete) command. Removing the resource group will also remove:
-* The Azure Digital Twins instance (from the end-to-end tutorial)
-* The IoT hub and the hub device registration (from the end-to-end tutorial)
-* The Event Grid topic and associated subscriptions
-* The Azure Functions app, including all three functions and associated resources like storage
-* The Azure SignalR instance
-
-> [!IMPORTANT]
-> Deleting a resource group is irreversible. The resource group and all the resources contained in it are permanently deleted. Make sure that you do not accidentally delete the wrong resource group or resources.
-
-```azurecli-interactive
-az group delete --name <your-resource-group>
-```
-
-Finally, delete the project sample folders that you downloaded to your local machine (*digital-twins-samples-main.zip*, *digitaltwins-signalr-webapp-sample-main.zip*, and their unzipped counterparts).
-
-## Next steps
-
-In this article, you set up Azure functions with SignalR to broadcast Azure Digital Twins telemetry events to a sample client application.
-
-Next, learn more about Azure SignalR Service:
-* [What is Azure SignalR Service?](../azure-signalr/signalr-overview.md)
-
-Or read more about Azure SignalR Service Authentication with Azure Functions:
-* [Azure SignalR Service authentication](../azure-signalr/signalr-tutorial-authenticate-azure-functions.md)
dns Dns Import Export Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-import-export-portal.md
The following notes provide more details about the zone import process.
IN MX 10 mail.adatum.com. IN MX 20 mail2.adatum.com.
- dns1 IN A 5.4.3.2
- dns2 IN A 4.3.2.1
- server1 IN A 4.4.3.2
- server2 IN A 5.5.4.3
- ftp IN A 3.3.2.1
- IN A 3.3.3.2
+ dns1 IN A 203.0.113.2
+ dns2 IN A 203.0.113.1
+ server1 IN A 192.0.2.2
+ server2 IN A 192.0.2.3
+ ftp IN A 198.51.100.1
+ IN A 198.51.100.2
mail IN CNAME server1 mail2 IN CNAME server2 www IN CNAME server1
The following notes provide more details about the zone import process.
@ 3600 IN MX 20 mail2.adatum.com. ; A Records
- dns1 3600 IN A 5.4.3.2
- dns2 3600 IN A 4.3.2.1
- ftp 3600 IN A 3.3.2.1
- ftp 3600 IN A 3.3.3.2
- server1 3600 IN A 4.4.3.2
- server2 3600 IN A 5.5.4.3
+ dns1 3600 IN A 203.0.113.2
+ dns2 3600 IN A 203.0.113.1
+ ftp 3600 IN A 198.51.100.1
+ ftp 3600 IN A 198.51.100.2
+ server1 3600 IN A 192.0.2.2
+ server2 3600 IN A 192.0.2.3
; AAAA Records
energy-data-services How To Manage Legal Tags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-manage-legal-tags.md
# How to manage legal tags
-In this article, you'll know how to manage legal tags in your Azure Data Manager for Energy instance. A Legal tag is the entity that represents the legal status of data in the Azure Data Manager for Energy instance. Legal tag is a collection of properties that governs how data can be ingested and consumed. A legal tag is required for data to be [ingested](concepts-csv-parser-ingestion.md) into your Azure Data Manager for Energy instance. It's also required for the [consumption](concepts-index-and-search.md) of the data from your Azure Data Manager for Energy instance. Legal tags are defined at a data partition level individually.
+In this article, you learn what legal tags are and how to manage them in your Azure Data Manager for Energy instance.
-While in Azure Data Manager for Energy instance, [entitlement service](concepts-entitlements.md) defines access to data for a given user(s), legal tag defines the overall access to the data across users. A user may have access to manage the data within a data partition however, they may not be able to do so-until certain legal requirements are fulfilled.
+A [legal tag](https://osdu.pages.opengroup.org/platform/security-and-compliance/legal/) is the entity that represents the legal status of data ingestion, while the [entitlement service](concepts-entitlements.md) defines user access to data. A user may have access to manage the data through entitlements, but still needs to fulfill certain legal requirements through legal tags. A legal tag is a collection of required properties that governs how data can be [ingested](concepts-csv-parser-ingestion.md) into your Azure Data Manager for Energy instance.
+
+The Azure Data Manager for Energy instance allows creation of legal tags only for `countryOfOrigin` values that are allowed by the configuration file [DefaultCountryCodes.json](https://community.opengroup.org/osdu/platform/security-and-compliance/legal/-/blob/master/legal-core/src/main/resources/DefaultCountryCode.json?ref_type=heads) at a data partition level. This file is defined by OSDU and can't be edited.
## Create a legal tag
-Run the below curl command in Azure Cloud Bash to create a legal tag for a given data partition of your Azure Data Manager for Energy instance.
+Run the following curl command in Azure Cloud Shell (Bash) to create a legal tag for a given data partition of your Azure Data Manager for Energy instance.
```bash curl --location --request POST 'https://<URI>/api/legal/v1/legaltags' \
Run the below curl command in Azure Cloud Bash to create a legal tag for a given
``` ### Sample request
-Consider an Azure Data Manager for Energy instance named "medstest" with a data partition named "dp1"
+Consider an Azure Data Manager for Energy instance named `medstest` with a data partition named "dp1":
```bash curl --location --request POST 'https://medstest.energy.azure.com/api/legal/v1/legaltags' \
Consider an Azure Data Manager for Energy instance named "medstest" with a data
The country of origin should follow [ISO Alpha2 format](https://www.nationsonline.org/oneworld/country_code_list.htm).
-The Create Legal Tag api, internally appends data-partition-id to legal tag name if it isn't already present. For instance, if request has name as: ```legal-tag```, then the create legal tag name would be ```<instancename>-<data-partition-id>-legal-tag```
+This API internally appends `data-partition-id` to the legal tag name if it isn't already present. For instance, if the request uses the name ```legal-tag```, the created legal tag name would be ```<instancename>-<data-partition-id>-legal-tag```.
```bash curl --location --request POST 'https://medstest.energy.azure.com/api/legal/v1/legaltags' \
The Create Legal Tag api, internally appends data-partition-id to legal tag name
}' ```
-The sample response will have data-partition-id appended to the legal tag name and sample response will be:
+The sample response has `data-partition-id` appended to the legal tag name.
```JSON
The sample response will have data-partition-id appended to the legal tag name a
``` ## Get a legal tag
-Run the below curl command in Azure Cloud Bash to get the legal tag associated with a data partition of your Azure Data Manager for Energy instance.
+Run the following curl command in Azure Cloud Shell (Bash) to get the legal tag associated with a data partition of your Azure Data Manager for Energy instance.
```bash curl --location --request GET 'https://<URI>/api/legal/v1/legaltags/<legal-tag-name>' \
Run the below curl command in Azure Cloud Bash to get the legal tag associated w
``` ### Sample request
-Consider an Azure Data Manager for Energy instance named "medstest" with a data partition named "dp1"
+Consider an Azure Data Manager for Energy instance named `medstest` with a data partition named "dp1":
```bash curl --location --request GET 'https://medstest.energy.azure.com/api/legal/v1/legaltags/medstest-dp1-legal-tag' \
expressroute Expressroute Locations Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-locations-providers.md
Previously updated : 09/22/2024 Last updated : 10/03/2024
The following table shows connectivity locations and the service providers for e
| Location | Address | Zone | Local Azure regions | ER Direct | Service providers | |--|--|--|--|--|--|
-| **Dallas** | [Equinix DA3](https://www.equinix.com/locations/americas-colocation/united-states-colocation/dallas-data-centers/da3/)<br/>[Equinix DA6](https://www.equinix.com/data-centers/americas-colocation/united-states-colocation/dallas-data-centers/da6) | 1 | &cross; | &check; | Aryaka Networks<br/>AT&T Connectivity Plus<br/>AT&T Dynamic Exchange<br/>AT&T NetBond<br/>Cologix<br/>Cox Business Cloud Port<br/>Equinix<br/>GTT<br/>Intercloud<br/>Internet2<br/>Level 3 Communications<br/>MCM Telecom<br/>Megaport<br/>Momentum Telecom<br/>Neutrona Networks<br/>Orange<br/>PacketFabric<br/>Telmex Uninet<br/>Telia Carrier<br/>Telefonica<br/>Transtelco<br/>Verizon<br/>Vodafone<br/>Zayo |
+| **Dallas** | [Equinix DA3](https://www.equinix.com/locations/americas-colocation/united-states-colocation/dallas-data-centers/da3/)<br/>[Equinix DA6](https://www.equinix.com/data-centers/americas-colocation/united-states-colocation/dallas-data-centers/da6) | 1 | &cross; | &check; | Aryaka Networks<br/>AT&T Connectivity Plus<br/>AT&T Dynamic Exchange<br/>AT&T NetBond<br/>Cologix<br/>Cox Business Cloud Port<br/>Equinix<br/>Flo Networks<br/>GTT<br/>Intercloud<br/>Internet2<br/>Level 3 Communications<br/>MCM Telecom<br/>Megaport<br/>Momentum Telecom<br/>Orange<br/>PacketFabric<br/>Telmex Uninet<br/>Telia Carrier<br/>Telefonica<br/>Verizon<br/>Vodafone<br/>Zayo |
| **Dallas2** | [Digital Realty DFW10](https://www.digitalrealty.com/data-centers/americas/dallas/dfw10) | 1 | &cross; | &check; | Digital Realty<br/>Momentum Telecom | | **Denver** | [CoreSite DE1](https://www.coresite.com/data-centers/locations/denver/de1) | 1 | West Central US | &check; | CoreSite<br/>Megaport<br/>Momentum Telecom<br/>PacketFabric<br/>Zayo | | **Doha** | [MEEZA MV2](https://www.meeza.net/services/data-centre-services/) | 3 | Qatar Central | &check; | Ooredoo Cloud Connect<br/>Vodafone |
The following table shows connectivity locations and the service providers for e
| **Las Vegas** | [Switch LV](https://www.switch.com/las-vegas) | 1 | &cross; | &check; | CenturyLink Cloud Connect<br/>Megaport<br/>PacketFabric | | **London** | [Equinix LD5](https://www.equinix.com/locations/europe-colocation/united-kingdom-colocation/london-data-centers/ld5/) | 1 | UK South | &check; | AT&T NetBond<br/>Bezeq International<br/>British Telecom<br/>CenturyLink<br/>Colt<br/>Equinix<br/>euNetworks<br/>Intelsat<br/>InterCloud<br/>Internet Solutions - Cloud Connect<br/>Interxion (Digital Realty)<br/>Jisc<br/>Level 3 Communications<br/>Megaport<br/>Momentum Telecom<br/>MTN<br/>NTT Communications<br/>Orange<br/>PCCW Global Limited<br/>Tata Communications<br/>Telehouse - KDDI<br/>Telenor<br/>Telia Carrier<br/>Verizon<br/>Vodafone<br/>Zayo | | **London2** | [Telehouse North Two](https://www.telehouse.net/data-centres/emea/uk-data-centres/london-data-centres/north-two) | 1 | UK South | &check; | BICS<br/>British Telecom<br/>CenturyLink Cloud Connect<br/>Colt<br/>Equinix<br/>Epsilon Global Communications<br/>GTT<br/>Interxion (Digital Realty)<br/>IX Reach<br/>JISC<br/>Megaport<br/>NTT Global DataCenters EMEA<br/>Ooredoo Cloud Connect<br/>Orange<br/>SES<br/>Sohonet<br/>Tata Communications<br/>Telehouse - KDDI<br/>Zayo<br/>Vodafone |
-| **Los Angeles** | [CoreSite LA1](https://www.coresite.com/data-centers/locations/los-angeles/one-wilshire) | 1 | &cross; | &check; | AT&T Dynamic Exchange<br/>CoreSite<br/>China Unicom Global<br/>Cloudflare<br/> Megaport<br/>Momentum Telecom<br/>Neutrona Networks<br/>NTT<br/>Zayo</br></br> |
+| **Los Angeles** | [CoreSite LA1](https://www.coresite.com/data-centers/locations/los-angeles/one-wilshire) | 1 | &cross; | &check; | AT&T Dynamic Exchange<br/>CoreSite<br/>China Unicom Global<br/>Cloudflare<br/>Flo Networks<br/>Megaport<br/>Momentum Telecom<br/>NTT<br/>Zayo</br></br> |
| **Los Angeles2** | [Equinix LA1](https://www.equinix.com/locations/americas-colocation/united-states-colocation/los-angeles-data-centers/la1/) | 1 | &cross; | &check; | Crown Castle<br/>Equinix<br/>GTT<br/>PacketFabric | | **Madrid** | [Interxion MAD1](https://www.interxion.com/es/donde-estamos/europa/madrid) | 1 | &cross; | &check; | DE-CIX<br/>GTT<br/>InterCloud<br/>Interxion (Digital Realty)<br/>Megaport<br/>Telefonica | | **Madrid2** | [Equinix MD2](https://www.equinix.com/data-centers/europe-colocation/spain-colocation/madrid-data-centers/md2) | 1 | &cross; | &check; | Equinix<br/>GÉANT<br/>Intercloud | | **Marseille** | [Interxion MRS1](https://www.interxion.com/Locations/marseille/) | 1 | France South | &cross; | Colt<br/>DE-CIX<br/>GEANT<br/>Interxion (Digital Realty)<br/>Jaguar Network<br/>Ooredoo Cloud Connect | | **Melbourne** | [NextDC M1](https://www.nextdc.com/data-centres/m1-melbourne-data-centre) | 2 | Australia Southeast | &check; | AARNet<br/>Devoli<br/>Equinix<br/>Megaport<br/>NETSG<br/>NEXTDC<br/>Optus<br/>Orange<br/>Telstra Corporation<br/>TPG Telecom |
-| **Miami** | [Equinix MI1](https://www.equinix.com/locations/americas-colocation/united-states-colocation/miami-data-centers/mi1/) | 1 | &cross; | &check; | AT&T Dynamic Exchange<br/>Claro<br/>C3ntro<br/>Equinix<br/>Megaport<br/>Momentum Telecom<br/>Neutrona Networks<br/>PitChile |
+| **Miami** | [Equinix MI1](https://www.equinix.com/locations/americas-colocation/united-states-colocation/miami-data-centers/mi1/) | 1 | &cross; | &check; | AT&T Dynamic Exchange<br/>Claro<br/>C3ntro<br/>Equinix<br/>Flo Networks<br/>Megaport<br/>Momentum Telecom<br/>PitChile |
| **Milan** | [IRIDEOS](https://irideos.it/en/data-centers/) | 1 | Italy North | &check; | Colt<br/>Equinix<br/>Fastweb<br/>IRIDEOS<br/>Noovle<br/>Retelit<br/>Vodafone | | **Milan2** | [DATA4](https://www.data4group.com/it/data-center-a-milano-italia/) | 1 | Italy North | &check; | | | **Minneapolis** | [Cologix MIN1](https://www.cologix.com/data-centers/minneapolis/min1/) and [Cologix MIN3](https://www.cologix.com/data-centers/minneapolis/min3/) | 1 | &cross; | &check; | Cologix<br/>Megaport<br/>Zayo |
The following table shows connectivity locations and the service providers for e
| **Portland** | [EdgeConnex POR01](https://www.edgeconnex.com/locations/north-america/portland-or/) | 1 | West US 2 | &check; | | | **Pune** | [STT GDC Pune DC1](https://www.sttelemediagdc.in/our-data-centres-in-india) | 2 | Central India | &check; | Airtel<br/>Lightstorm<br/>SIFY<br/>Tata Communications | | **Quebec City** | [Vantage](https://vantage-dc.com/data_centers/quebec-city-data-center-campus/) | 1 | Canada East | &check; | Bell Canada<br/>Equinix<br/>Megaport<br/>RISQ<br/>Telus |
-| **Queretaro (Mexico)** | [KIO Networks QR01](https://www.kionetworks.com/es-mx/) | 4 | &cross; | &check; | Cirion Technologies<br/>Equinix<br/>KIO<br/>MCM Telecom<br/>Megaport<br/>Transtelco |
+| **Queretaro (Mexico)** | [KIO Networks QR01](https://www.kionetworks.com/es-mx/) | 4 | &cross; | &check; | Cirion Technologies<br/>Equinix<br/>Flo Networks<br/>KIO<br/>MCM Telecom<br/>Megaport |
| **Quincy** | Sabey Datacenter - Building A | 1 | West US 2 | &check; | |
The following table shows connectivity locations and the service providers for e
| **Tel Aviv** | Bezeq International | 2 | Israel Central | &check; | Bezeq International | | **Tel Aviv2** | SDS | 2 | Israel Central | &check; | | | **Tokyo** | [Equinix TY4](https://www.equinix.com/locations/asia-colocation/japan-colocation/tokyo-data-centers/ty4/) | 2 | Japan East | &check; | Aryaka Networks<br/>AT&T NetBond<br/>BBIX<br/>British Telecom<br/>CenturyLink Cloud Connect<br/>Colt<br/>Equinix<br/>Intercloud<br/>Internet Initiative Japan Inc. - IIJ<br/>Megaport<br/>NTT Communications<br/>NTT EAST<br/>Orange<br/>Softbank<br/>Telehouse - KDDI<br/>Verizon </br></br> |
-| **Tokyo2** | [AT TOKYO](https://www.attokyo.com/) | 2 | Japan East | &check; | AT TOKYO<br/>China Telecom Global<br/>China Unicom Global<br/>Colt<br/>DE-CIX<br/>Digital Realty<br/>Equinix<br/>IPC<br/>IX Reach<br/>Megaport<br/>PCCW Global Limited<br/>Tokai Communications |
+| **Tokyo2** | [AT TOKYO](https://www.attokyo.com/) | 2 | Japan East | &check; | AT TOKYO<br/>BBIX<br/>China Telecom Global<br/>China Unicom Global<br/>Colt<br/>DE-CIX<br/>Digital Realty<br/>Equinix<br/>IPC<br/>IX Reach<br/>Megaport<br/>PCCW Global Limited<br/>Tokai Communications |
| **Tokyo3** | [NEC](https://www.nec.com/en/global/solutions/cloud/inzai_datacenter.html) | 2 | Japan East | &check; | NEC<br/>SCSK | | **Toronto** | [Cologix TOR1](https://www.cologix.com/data-centers/toronto/tor1/) | 1 | Canada Central | &check; | AT&T NetBond<br/>Bell Canada<br/>CenturyLink Cloud Connect<br/>Cologix<br/>Equinix<br/>IX Reach Megaport<br/>Orange<br/>Telus<br/>Verizon<br/>Zayo | | **Toronto2** | [Allied REIT](https://www.alliedreit.com/property/905-king-st-w/) | 1 | Canada Central | &check; | Fibrenoire<br/>Zayo | | **Vancouver** | [Cologix VAN1](https://www.cologix.com/data-centers/vancouver/van1/) | 1 | &cross; | &check; | Bell Canada<br/>Cologix<br/>Megaport<br/>Telus<br/>Zayo | | **Warsaw** | [Equinix WA1](https://www.equinix.com/data-centers/europe-colocation/poland-colocation/warsaw-data-centers/wa1) | 1 | Poland Central | &check; | Equinix<br/>Exatel<br/>Orange Poland<br/>T-mobile Poland |
-| **Washington DC** | [Equinix DC2](https://www.equinix.com/locations/americas-colocation/united-states-colocation/washington-dc-data-centers/dc2/)<br/>[Equinix DC6](https://www.equinix.com/data-centers/americas-colocation/united-states-colocation/washington-dc-data-centers/dc6) | 1 | East US<br/>East US 2 | &check; | Aryaka Networks<br/>AT&T NetBond<br/>British Telecom<br/>CenturyLink Cloud Connect<br/>Cologix<br/>Colt<br/>Comcast<br/>Coresite<br/>Cox Business Cloud Port<br/>Crown Castle<br/>Digital Realty<br/>Equinix<br/>IPC<br/>Internet2<br/>InterCloud<br/>IPC<br/>Iron Mountain<br/>IX Reach<br/>Level 3 Communications<br/>Lightpath<br/>Megaport<br/>Momentum Telecom<br/>Neutrona Networks<br/>NTT Communications<br/>Orange<br/>PacketFabric<br/>SES<br/>Sprint<br/>Tata Communications<br/>Telia Carrier<br/>Telefonica<br/>Verizon<br/>Zayo |
+| **Washington DC** | [Equinix DC2](https://www.equinix.com/locations/americas-colocation/united-states-colocation/washington-dc-data-centers/dc2/)<br/>[Equinix DC6](https://www.equinix.com/data-centers/americas-colocation/united-states-colocation/washington-dc-data-centers/dc6) | 1 | East US<br/>East US 2 | &check; | Aryaka Networks<br/>AT&T NetBond<br/>British Telecom<br/>CenturyLink Cloud Connect<br/>Cologix<br/>Colt<br/>Comcast<br/>Coresite<br/>Cox Business Cloud Port<br/>Crown Castle<br/>Digital Realty<br/>Equinix<br/>Flo Networks<br/>IPC<br/>Internet2<br/>InterCloud<br/>IPC<br/>Iron Mountain<br/>IX Reach<br/>Level 3 Communications<br/>Lightpath<br/>Megaport<br/>Momentum Telecom<br/>NTT Communications<br/>Orange<br/>PacketFabric<br/>SES<br/>Sprint<br/>Tata Communications<br/>Telia Carrier<br/>Telefonica<br/>Verizon<br/>Zayo |
| **Washington DC2** | [Coresite VA2](https://www.coresite.com/data-center/va2-reston-va) | 1 | East US<br/>East US 2 | &cross; | CenturyLink Cloud Connect<br/>Coresite<br/>Intelsat<br/>Megaport<br/>Momentum Telecom<br/>Viasat<br/>Zayo | | **Zurich** | [Interxion ZUR2](https://www.interxion.com/Locations/zurich/) | 1 | Switzerland North | &check; | Colt<br/>Equinix<br/>Intercloud<br/>Interxion (Digital Realty)<br/>Megaport<br/>Swisscom<br/>Zayo | | **Zurich2** | [Equinix ZH5](https://www.equinix.com/data-centers/europe-colocation/switzerland-colocation/zurich-data-centers/zh5) | 1 | Switzerland North | &check; | Equinix |
expressroute Expressroute Locations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-locations.md
Previously updated : 09/22/2024 Last updated : 10/03/2024
The following table shows locations by service provider. If you want to view ava
| **[FarEasTone](https://www.fetnet.net/corporate/en/Enterprise.html)** | &check; | &check; | Taipei | | **[Fastweb](https://www.fastweb.it/grandi-aziende/dati-voce/scheda-prodotto/fast-company/)** | &check; |&check; | Milan | | **[Fibrenoire](https://fibrenoire.ca/en/services/cloudextn-2/)** | &check; | &check; | Montreal<br/>Quebec City<br/>Toronto2 |
+| **[Flo Networks](https://flo.net/)** | &check; | &check; | Dallas<br/>Los Angeles<br/>Miami<br/>Queretaro(Mexico City)<br/>Sao Paulo<br/>Washington DC<br/>*Locations are listed under Neutrona Networks and Transtelco as providers for circuit creation* |
| **[GBI](https://www.gbiinc.com/microsoft-azure/)** | &check; | &check; | Dubai2<br/>Frankfurt | | **[GÉANT](https://www.geant.org/Networks)** | &check; | &check; | Amsterdam<br/>Amsterdam2<br/>Dublin<br/>Frankfurt<br/>Madrid2<br/>Marseille | | **[GlobalConnect](https://www.globalconnect.no/)** | &check; | &check; | Amsterdam<br/>Copenhagen<br/>Oslo<br/>Stavanger<br/>Stockholm |
The following table shows locations by service provider. If you want to view ava
| **[National Telecom](https://www.nc.ntplc.co.th/cat/category/264/855/CAT+Direct+Cloud+Connect+for+Microsoft+ExpressRoute?lang=en_EN)** | &check; | &check; | Bangkok | | **NEC** | &check; | &check; | Tokyo3 | | **[NETSG](https://www.netsg.co/dc-cloud/cloud-and-dc-interconnect/)** | &check; | &check; | Melbourne<br/>Sydney2 |
-| **[Neutrona Networks](https://flo.net/)** | &check; | &check; | Dallas<br/>Los Angeles<br/>Miami<br/>Sao Paulo<br/>Washington DC |
| **[Next Generation Data](https://vantage-dc-cardiff.co.uk/)** | &check; | &check; | Newport(Wales) | | **[NEXTDC](https://www.nextdc.com/services/axon-ethernet/microsoft-expressroute)** | &check; | &check; | Melbourne<br/>Perth<br/>Sydney<br/>Sydney2 | | **NL-IX** | &check; | &check; | Amsterdam2<br/>Dublin2 |
The following table shows locations by service provider. If you want to view ava
| **[Tivit](https://tivit.com/en/home-ingles/)** |&check; |&check; | Sao Paulo2 | | **[Tokai Communications](https://www.tokai-com.co.jp/en/)** | &check; | &check; | Osaka<br/>Tokyo2 | | **TPG Telecom**| &check; | &check; | Melbourne<br/>Sydney |
-| **[Transtelco](https://transtelco.net/enterprise-services/)** | &check; | &check; | Dallas<br/>Queretaro(Mexico City)|
| **[T-Mobile/Sprint](https://www.t-mobile.com/business/solutions/networking/cloud-networking)** |&check; |&check; | Chicago<br/>Silicon Valley<br/>Washington DC | | **[T-Mobile Poland](https://biznes.t-mobile.pl/pl/produkty-i-uslugi/sieci-teleinformatyczne/cloud-on-edge)** |&check; |&check; | Warsaw | | **[T-Systems](https://geschaeftskunden.telekom.de/vernetzung-digitalisierung/produkt/intraselect)** | &check; | &check; | Frankfurt |
governance Export Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/how-to/export-resources.md
Title: Export Azure Policy resources description: Learn to export Azure Policy resources to GitHub, such as policy definitions and policy assignments. Previously updated : 09/30/2024 Last updated : 10/03/2024 -+ ms.devlang: azurecli
ms.devlang: azurecli
This article provides information on how to export your existing Azure Policy resources. Exporting your resources is useful and recommended for backup, but is also an important step in your journey with Cloud Governance and treating your [policy-as-code](../concepts/policy-as-code.md). Azure Policy resources can be exported through [REST API](/rest/api/policy), [Azure CLI](#export-with-azure-cli), and [Azure PowerShell](#export-with-azure-powershell). > [!NOTE]
-> The portal experience for exporting definitions to GitHub was deprecated in April 2023.
+> The portal experience for exporting definitions to GitHub was deprecated in April 2023.
## Export with Azure CLI
Azure Policy definitions, initiatives, and assignments can each be exported as J
- Assignment - [Get-AzPolicyAssignment](/powershell/module/az.resources/get-azpolicyassignment). ```azurepowershell-interactive
-Get-AzPolicyDefinition -Name 'b2982f36-99f2-4db5-8eff-283140c09693' | ConvertTo-Json -Depth 10
+Get-AzPolicyDefinition -Name 'b2982f36-99f2-4db5-8eff-283140c09693' | Select-Object -Property * | ConvertTo-Json -Depth 10
``` ## Export to CSV with Resource Graph in Azure portal
-Azure Resource Graph gives the ability to query at scale with complex filtering, grouping and sorting. Azure Resource Graph supports the policy resources table, which contains policy resources such as definitions, assignments and exemptions. Review our [sample queries.](../samples/resource-graph-samples.md#azure-policy). The Resource Graph explorer portal experience allows downloads of query results to CSV using the ["Download to CSV"](../../resource-graph/first-query-portal.md#download-query-results-as-a-csv-file) toolbar option.
-
+Azure Resource Graph gives the ability to query at scale with complex filtering, grouping, and sorting. Azure Resource Graph supports the policy resources table, which contains policy resources such as definitions, assignments, and exemptions. Review our [sample queries](../samples/resource-graph-samples.md#azure-policy). The Resource Graph explorer portal experience allows downloads of query results to CSV using the ["Download to CSV"](../../resource-graph/first-query-portal.md#download-query-results-as-a-csv-file) toolbar option.
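Outside the portal, the same `policyresources` table can be queried from the Azure CLI. The sketch below assumes the `resource-graph` extension is installed (`az extension add --name resource-graph`); the query itself is only illustrative.

```azurecli-interactive
# List policy definitions from the policyresources table
az graph query -q "policyresources | where type =~ 'microsoft.authorization/policydefinitions' | project name, displayName = properties.displayName, policyType = properties.policyType" --first 100 --output table
```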
## Next steps
governance Create Custom Policy Definition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/tutorials/create-custom-policy-definition.md
Before creating the policy definition, it's important to understand the intent o
Your requirements should clearly identify both the "to be" and the "not to be" resource states.
-While we defined the expected state of the resource, we havenn't defined what we want done with non-compliant resources. Azure Policy supports many [effects](../concepts/effect-basics.md). For this tutorial, we define the business requirement as preventing the creation of resources if they aren't compliant with the business rules. To meet this goal, we use the [deny](../concepts/effect-deny.md) effect. We also want the option to suspend the policy for specific assignments. Use the [disabled](../concepts/effect-disabled.md) effect and make the effect a [parameter](../concepts/definition-structure-parameters.md) in the policy definition.
+While we defined the expected state of the resource, we haven't defined what we want done with non-compliant resources. Azure Policy supports many [effects](../concepts/effect-basics.md). For this tutorial, we define the business requirement as preventing the creation of resources if they aren't compliant with the business rules. To meet this goal, we use the [deny](../concepts/effect-deny.md) effect. We also want the option to suspend the policy for specific assignments. Use the [disabled](../concepts/effect-disabled.md) effect and make the effect a [parameter](../concepts/definition-structure-parameters.md) in the policy definition.
## Determine resource properties
governance Supported Tables Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/reference/supported-tables-resources.md
For sample queries for this table, see [Resource Graph sample queries for resour
- microsoft.scvmm/virtualnetworks - microsoft.scvmm/vmmservers - Microsoft.Search/searchServices (Search services)
+- microsoft.security/apicollections
+- microsoft.security/apicollections/apiendpoints
- microsoft.security/assignments - microsoft.security/automations - microsoft.security/customassessmentautomations
For sample queries for this table, see [Resource Graph sample queries for servic
- Learn more about the [query language](../concepts/query-language.md). - Learn more about how to [explore resources](../concepts/explore-resources.md).-- See samples of [Starter queries](../samples/starter.md).
+- See samples of [Starter queries](../samples/starter.md).
healthcare-apis Get Healthcare Apis Access Token Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/get-healthcare-apis-access-token-cli.md
# Get access token for Azure API for FHIR using Azure CLI
-In this article, you'll learn how to obtain an access token for the Azure API for FHIR using the Azure CLI. When you [provision the Azure API for FHIR](fhir-paas-portal-quickstart.md), you configure a set of users or service principals that have access to the service. If your user object ID is in the list of allowed object IDs, you can access the service using a token obtained using the Azure CLI.
+In this article, you learn how to obtain an access token for the Azure API for FHIR&reg; using the Azure CLI. When you [provision the Azure API for FHIR](fhir-paas-portal-quickstart.md), you configure a set of users or service principals that have access to the service. If your user object ID is in the list of allowed object IDs, you can access the service using a token obtained using the Azure CLI.
## Obtain a token
-The Azure API for FHIR uses a `resource` or `Audience` with URI equal to the URI of the FHIR server `https://<FHIR ACCOUNT NAME>.azurehealthcareapis.com`. You can obtain a token and store it in a variable (named `$token`) with the following command:
+The Azure API for FHIR uses a `resource` or `Audience` with a URI equal to the URI of the FHIR server `https://<FHIR ACCOUNT NAME>.azurehealthcareapis.com`. You can obtain a token and store it in a variable (named `$token`) with the following command.
```azurecli-interactive $token=$(az account get-access-token --resource=https://<FHIR ACCOUNT NAME>.azurehealthcareapis.com --query accessToken --output tsv)
curl -X GET --header "Authorization: Bearer $token" https://<FHIR ACCOUNT NAME>.
## Next steps
-In this article, you've learned how to obtain an access token for the Azure API for FHIR using the Azure CLI. To learn how to access the FHIR API using Postman, proceed to the Postman tutorial.
+In this article, you learned how to obtain an access token for the Azure API for FHIR using the Azure CLI. To learn how to access the FHIR API using Postman, proceed to the Postman tutorial:
>[!div class="nextstepaction"] >[Access the FHIR service using Postman](./../fhir/use-postman.md)
-FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Get Started With Azure Api Fhir https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/get-started-with-azure-api-fhir.md
# Get started with Azure API for FHIR
-This article outlines the basic steps to get started with Azure API for FHIR. Azure API for FHIR is a managed, standards-based, compliant API for clinical health data that enables solutions for actionable analytics and machine learning.
+This article outlines the basic steps to get started with Azure API for FHIR&reg;. Azure API for FHIR is a managed, standards-based, compliant API for clinical health data that enables solutions for actionable analytics and machine learning.
-As a prerequisite, you'll need an Azure subscription and have been granted proper permissions to create Azure resource groups and deploy Azure resources. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+As a prerequisite, you need an Azure subscription and must be granted the proper permissions to create Azure resource groups and deploy Azure resources. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
[![Screenshot of Azure API for FHIR flow diagram.](media/get-started/get-started-azure-api-fhir-diagram.png)](media/get-started/get-started-azure-api-fhir-diagram.png#lightbox)
To get started with Azure API for FHIR, you must [create a resource](https://por
[![Screenshot of the Azure search services and marketplace text box.](media/get-started/search-services-marketplace.png)](media/get-started/search-services-marketplace.png#lightbox)
-After you've located the Azure API for FHIR resource, select **Create**.
+After you locate the Azure API for FHIR resource, select **Create**.
[![Screenshot of the create Azure API for FHIR resource button.](media/get-started/create-azure-api-for-fhir-resource.png)](media/get-started/create-azure-api-for-fhir-resource.png#lightbox)
Refer to the steps in the [Quickstart guide](fhir-paas-portal-quickstart.md) for
## Accessing Azure API for FHIR
-When you're working with healthcare data, it's important to ensure that the data is secure, and it can't be accessed by unauthorized users or applications. FHIR servers use [OAuth 2.0](https://oauth.net/2/) to ensure this data security. Azure API for FHIR is secured using [Microsoft Entra ID](../../active-directory/index.yml), which is an example of an OAuth 2.0 identity provider. [Microsoft Entra identity configuration for Azure API for FHIR](././../azure-api-for-fhir/azure-active-directory-identity-configuration.md) provides an overview of FHIR server authorization, and the steps needed to obtain a token to access a FHIR server. While these steps apply to any FHIR server and any identity provider, this article will walk you through Azure API for FHIR as the FHIR server and Microsoft Entra ID as our identity provider. For more information about accessing Azure API for FHIR, see [Access control overview](././../azure-api-for-fhir/azure-active-directory-identity-configuration.md#access-control-overview).
+When you're working with healthcare data, it's important to ensure that the data is secure, and can't be accessed by unauthorized users or applications. FHIR servers use [OAuth 2.0](https://oauth.net/2/) to ensure this data security. Azure API for FHIR is secured using [Microsoft Entra ID](../../active-directory/index.yml), which is an example of an OAuth 2.0 identity provider. [Microsoft Entra identity configuration for Azure API for FHIR](././../azure-api-for-fhir/azure-active-directory-identity-configuration.md) provides an overview of FHIR server authorization, and the steps needed to obtain a token to access a FHIR server. While these steps apply to any FHIR server and any identity provider, this article covers Azure API for FHIR as the FHIR server, and Microsoft Entra ID as our identity provider. For more information about accessing Azure API for FHIR, see [Access control overview](././../azure-api-for-fhir/azure-active-directory-identity-configuration.md#access-control-overview).
### Access token validation
-How Azure API for FHIR validates the access token will depend on implementation and configuration. The article [Azure API for FHIR access token validation](azure-api-fhir-access-token-validation.md) will guide you through the validation steps, which can be helpful when troubleshooting access issues.
+How Azure API for FHIR validates the access token depends on the implementation and configuration. The article [Azure API for FHIR access token validation](azure-api-fhir-access-token-validation.md) guides you through the validation steps, which can be helpful when troubleshooting access issues.
### Register a client application For an application to interact with Microsoft Entra ID, it needs to be registered. In the context of the FHIR server, there are two kinds of application registrations: -- Resource application registrations-- Client application registrations
+- Resource application registrations,
+- Client application registrations.
For more information about the two kinds of application registrations, see [Register the Microsoft Entra apps for Azure API for FHIR](fhir-app-registration.md).
This article described the basic steps to get started using Azure API for FHIR.
>[!div class="nextstepaction"] >[Frequently asked questions about Azure API for FHIR](fhir-faq.yml)
-FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis How To Do Custom Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/how-to-do-custom-search.md
# Defining custom search parameters for Azure API for FHIR
-The Fast Healthcare Interoperability Resources (FHIR&#174;) specification defines a set of search parameters for all resources and search parameters that are specific to a resource(s). However, there are scenarios where you might want to search against an element in a resource that isn't defined by the FHIR specification as a standard search parameter. This article describes how you can define your own [search parameters](https://www.hl7.org/fhir/searchparameter.html) to be used in the Azure API for FHIR.
+The Fast Healthcare Interoperability Resources (FHIR&reg;) specification defines a set of search parameters for all resources and search parameters that are specific to a resource. However, there are scenarios where you might want to search against an element in a resource that isn't defined as a standard search parameter by the FHIR specification. This article describes how you can define your own [search parameters](https://www.hl7.org/fhir/searchparameter.html) to be used in the Azure API for FHIR.
> [!NOTE]
-> Each time you create, update, or delete a search parameter you'll need to run a [reindex job](how-to-run-a-reindex.md) to enable the search parameter to be used in production. Below we will outline how you can test search parameters before reindexing the entire FHIR server.
+> Each time you create, update, or delete a search parameter, you'll need to run a [reindex job](how-to-run-a-reindex.md) to enable the search parameter to be used in production. This article outlines how you can test search parameters before reindexing the entire FHIR server.
## Create new search parameter
-To create a new search parameter, you `POST` the `SearchParameter` resource to the database. The code example below shows how to add the [US Core Race SearchParameter](http://hl7.org/fhir/us/core/STU3.1.1/SearchParameter-us-core-race.html) to the `Patient` resource.
+To create a new search parameter, you `POST` the `SearchParameter` resource to the database. The following code example shows how to add the [US Core Race SearchParameter](http://hl7.org/fhir/us/core/STU3.1.1/SearchParameter-us-core-race.html) to the `Patient` resource.
```rest POST {{FHIR_URL}}/SearchParameter
POST {{FHIR_URL}}/SearchParameter
``` > [!NOTE]
-> The new search parameter will appear in the capability statement of the FHIR server after you POST the search parameter to the database **and** reindex your database. Viewing the `SearchParameter` in the capability statement is the only way to tell if a search parameter is supported in your FHIR server. If you can find the search parameter by searching for the search parameter but cannot see it in the capability statement, you still need to index the search parameter. You can POST multiple search parameters before triggering a reindex operation.
+> The new search parameter will appear in the capability statement of the FHIR server after you POST the search parameter to the database **and** reindex your database. Viewing the `SearchParameter` in the capability statement is the only way to tell if a search parameter is supported in your FHIR server. If you can find the search parameter, but cannot see it in the capability statement, you still need to index the search parameter. You can POST multiple search parameters before triggering a reindex operation.
-Important elements of a `SearchParameter`:
+Important elements of a `SearchParameter` include the following.
-* **url**: A unique key to describe the search parameter. Many organizations, such as HL7, use a standard URL format for the search parameters that they define, as shown above in the US Core race search parameter.
+* **url**: A unique key to describe the search parameter. Many organizations, such as HL7, use a standard URL format for the search parameters that they define, as previously shown in the US Core race search parameter.
-* **code**: The value stored in **code** is what you'll use when searching. For the example above, you would search with `GET {FHIR_URL}/Patient?race=<code>` to get all patients of a specific race. The code must be unique for the resource(s) the search parameter applies to.
+* **code**: The value stored in **code** is what you use when searching. For the preceding example, you would search with `GET {FHIR_URL}/Patient?race=<code>` to get all patients of a specific race. The code must be unique for the resource the search parameter applies to.
-* **base**: Describes which resource(s) the search parameter applies to. If the search parameter applies to all resources, you can use `Resource`; otherwise, you can list all the relevant resources.
+* **base**: Describes which resource the search parameter applies to. If the search parameter applies to all resources, you can use `Resource`; otherwise, you can list all the relevant resources.
* **type**: Describes the data type for the search parameter. The type is limited to what the Azure API for FHIR supports. This means that you can't define a search parameter of type Special or define a [composite search parameter](overview-of-search.md) unless it's a supported combination.
-* **expression**: Describes how to calculate the value for the search. When describing a search parameter, you must include the expression, even though it isn't required by the specification. This is because you need either the expression or the xpath syntax and the Azure API for FHIR ignores the xpath syntax.
+* **expression**: Describes how to calculate the value for the search. When describing a search parameter, you must include the expression, even though it isn't required by the specification. This is because you need either the expression or the xpath syntax, and the Azure API for FHIR ignores the xpath syntax.
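To make these elements concrete, here's an abridged sketch of such a request as a curl call, loosely based on the US Core race search parameter linked earlier. The `$FHIR_URL` and `$token` variables are assumed to be set as shown elsewhere in these articles, and the expression is shortened for illustration rather than being the authoritative US Core definition.

```bash
# Save an abridged SearchParameter resource that covers the elements described above
cat > us-core-race-searchparameter.json <<'EOF'
{
  "resourceType": "SearchParameter",
  "url": "http://hl7.org/fhir/us/core/SearchParameter/us-core-race",
  "name": "USCoreRace",
  "status": "active",
  "description": "Search Patient resources by the US Core race extension.",
  "code": "race",
  "base": [ "Patient" ],
  "type": "token",
  "expression": "Patient.extension.where(url = 'http://hl7.org/fhir/us/core/StructureDefinition/us-core-race').extension.value.code"
}
EOF

# POST it to the FHIR server (remember to reindex before relying on it in production)
curl -X POST "$FHIR_URL/SearchParameter" \
  --header "Authorization: Bearer $token" \
  --header "Content-Type: application/fhir+json" \
  --data @us-core-race-searchparameter.json
```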
## Test search parameters
-While you can't use the search parameters in production until you run a reindex job, there are a few ways to test your search parameters before reindexing the entire database.
+While you can't use the search parameters in production until you run a reindex job, you can test your search parameters before reindexing the entire database.
-First, you can test your new search parameter to see what values will be returned. By running the command below against a specific resource instance (by inputting their ID), you'll get back a list of value pairs with the search parameter name and the value stored for the specific patient. This will include all of the search parameters for the resource and you can scroll through to find the search parameter you created. Running this command won't change any behavior in your FHIR server.
+First, you can test your new search parameter to see what values are returned. By running the following command against a specific resource instance (by providing its ID), you get a list of value pairs with the search parameter name and the value stored for the specific patient. This includes all of the search parameters for the resource. You can scroll through the returned list to find the search parameter you created. Running this command won't change any behavior in your FHIR server.
```rest GET https://{{FHIR_URL}}/{{RESOURCE}}/{{RESOUCE_ID}}/$reindex ```
-For example, to find all search parameters for a patient:
+For example, to find all search parameters for a patient, use the following.
```rest GET https://{{FHIR_URL}}/Patient/{{PATIENT_ID}}/$reindex ```
-The result will look like this:
+The result looks like the following.
```json {
The result will look like this:
}, ... ```
-Once you see that your search parameter is displaying as expected, you can reindex a single resource to test searching with the element. First you'll reindex a single resource:
+Once you see that your search parameter is displaying as expected, you can reindex a single resource to test searching with the element. First you reindex a single resource:
```rest POST https://{{FHIR_URL}/{{RESOURCE}}/{{RESOURCE_ID}}/$reindex ```
-Running this, sets the indices for any search parameters for the specific resource that you defined for that resource type. This does make an update to the FHIR server. Now you can search and set the use partial indices header to true, which means that it will return results where any of the resources has the search parameter indexed, even if not all resources of that type have it indexed.
+Running this sets the indices on the specific resource for any search parameters that you defined for that resource type. This does make an update to the FHIR server. Now you can search and set the use partial indices header to true, which means that results are returned from any resources that have the search parameter indexed, even if not all resources of that type have it indexed.
-Continuing with our example above, you could index one patient to enable the US Core Race `SearchParameter`:
+Continuing with our example, you could index one patient to enable the US Core Race `SearchParameter` as follows.
```rest POST https://{{FHIR_URL}/Patient/{{PATIENT_ID}}/$reindex ```
-And then search for patients that have a specific race:
+Then search for patients that have a specific race:
```rest GET https://{{FHIR_URL}}/Patient?race=2028-9 x-ms-use-partial-indices: true ```
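For reference, the same search expressed as a curl call might look like the following, assuming `$FHIR_URL` and `$token` are set as in the earlier access-token article.

```bash
# Search with partial indices enabled, so individually reindexed resources are returned
curl -X GET "$FHIR_URL/Patient?race=2028-9" \
  --header "x-ms-use-partial-indices: true" \
  --header "Authorization: Bearer $token"
```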
-After you have tested and are satisfied that your search parameter is working as expected, run or schedule your reindex job so the search parameters can be used in the FHIR server for production use cases.
+After you're satisfied that your search parameter is working as expected, run or schedule your reindex job so the search parameters can be used in the FHIR server for production use cases.
## Update a search parameter To update a search parameter, use `PUT` to create a new version of the search parameter. You must include the `SearchParameter ID` in the `id` element of the body of the `PUT` request and in the `PUT` call. > [!NOTE]
-> If you don't know the ID for your search parameter, you can search for it. Using `GET {{FHIR_URL}}/SearchParameter` will return all custom search parameters, and you can scroll through the search parameter to find the search parameter you need. You could also limit the search by name. With the example below, you could search for name using `USCoreRace: GET {{FHIR_URL}}/SearchParameter?name=USCoreRace`.
+> If you don't know the ID for your search parameter, you can search for it. Using `GET {{FHIR_URL}}/SearchParameter` will return all custom search parameters, and you can scroll through the list to find the search parameter you need. You can also limit the search by name. With the following example, you could search for name using `USCoreRace: GET {{FHIR_URL}}/SearchParameter?name=USCoreRace`.
```rest PUT {{FHIR_URL}}/SearchParameter/{SearchParameter ID}
PUT {{FHIR_URL}}/SearchParameter/{SearchParameter ID}
```
-The result will be an updated `SearchParameter` and the version will increment.
+The result is an updated `SearchParameter` and the version increments.
> [!Warning] > Be careful when updating SearchParameters that have already been indexed in your database. Changing an existing SearchParameter's behavior could have impacts on the expected behavior. We recommend running a reindex job immediately. ## Delete a search parameter
-If you need to delete a search parameter, use the following:
+If you need to delete a search parameter, use the following.
```rest Delete {{FHIR_URL}}/SearchParameter/{SearchParameter ID}
Delete {{FHIR_URL}}/SearchParameter/{SearchParameter ID}
## Next steps
-In this article, you've learned how to create a search parameter. Next you can learn how to reindex your FHIR server.
+In this article, you learned how to create a search parameter. Next you can learn how to reindex your FHIR server.
>[!div class="nextstepaction"] >[How to run a reindex job](how-to-run-a-reindex.md)
-FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis How To Run A Reindex https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/how-to-run-a-reindex.md
# Running a reindex job in Azure API for FHIR
-There are scenarios where you may have search or sort parameters in the Azure API for FHIR that haven't yet been indexed. This scenario is relevant when you define your own search parameters. Until the search parameter is indexed, it can't be used in search. This article covers how to run a reindex job to index search parameters that haven't yet been indexed in your FHIR service database.
+There are scenarios where you may have search or sort parameters in the Azure API for FHIR&reg; that haven't yet been indexed. This scenario is relevant when you define your own search parameters. Until the search parameter is indexed, it can't be used in search. This article covers how to run a reindex job to index search parameters in your FHIR service database.
> [!Warning] > It's important that you read this entire article before getting started. A reindex job can be very performance intensive. This article includes options for how to throttle and control the reindex job. ## How to run a reindex job
-Reindex job can be executed against entire FHIR service database and against specific custom search parameter.
+A reindex job can be executed against an entire FHIR service database or against specific custom search parameters.
### Run reindex job on entire FHIR service database
-To run reindex job, use the following `POST` call with the JSON formatted `Parameters` resource in the request body:
+To run a reindex job, use the following `POST` call with the JSON formatted `Parameters` resource in the request body.
```json POST {{FHIR URL}}/$reindex
POST {{FHIR URL}}/$reindex
Leave the `"parameter": []` field blank (as shown) if you don't need to tweak the resources allocated to the reindex job.
-If the request is successful, you receive a **201 Created** status code in addition to a `Parameters` resource in the response.
+If the request is successful, you receive a **201 Created** status code in addition to a `Parameters` resource in the response, as in the following example.
```json
HTTP/1.1 201 Created
Content-Location: https://{{FHIR URL}}/_operations/reindex/560c7c61-2c70-4c54-b8
} ``` ### Run reindex job against specific custom search parameter
-To run reindex job against specific custom search parameter, use the following `POST` call with the JSON formatted `Parameters` resource in the request body:
+To run a reindex job against a specific custom search parameter, use the following `POST` call with the JSON formatted `Parameters` resource in the request body.
```json
POST {{FHIR_URL}}/$reindex
content-type: application/fhir+json
``` > [!NOTE]
-> To check the status of a reindex job or to cancel the job, you'll need the reindex ID. This is the `"id"` carried in the `"parameter"` value returned in the response. In the example above, the ID for the reindex job would be `560c7c61-2c70-4c54-b86d-c53a9d29495e`.
+> To check the status of a reindex job or to cancel the job, you'll need the reindex ID. This is the `"id"` carried in the `"parameter"` value returned in the response. In the preceding example, the ID for the reindex job would be `560c7c61-2c70-4c54-b86d-c53a9d29495e`.
## How to check the status of a reindex job
-Once youΓÇÖve started a reindex job, you can check the status of the job using the following call:
+Once you start a reindex job, you can check the status of the job using the following call.
`GET {{FHIR URL}}/_operations/reindex/{{reindexJobId}}`
-An example response:
+Here's an example response.
```json
{
An example response:
} ```
-The following information is shown in the above response:
+The following information is shown in the response.
* `totalResourcesToReindex`: Includes the total number of resources that are being reindexed in this job. * `resourcesSuccessfullyReindexed`: The total number of resources that have already been reindexed in this job.
-* `progress`: Reindex job percent complete. Equals `resourcesSuccessfullyReindexed`/`totalResourcesToReindex` x 100.
+* `progress`: Reindex job percent complete. Computed as `resourcesSuccessfullyReindexed`/`totalResourcesToReindex` x 100.
* `status`: States if the reindex job is queued, running, complete, failed, or canceled. * `resources`: Lists all the resource types impacted by the reindex job.
-* 'resourceReindexProgressByResource (CountReindexed of Count)': Provides reindexed count of the total count, per resource type. In cases where reindexing for a specific resource type is queued, only Count is provided.
+* `resourceReindexProgressByResource (CountReindexed of Count)`: Provides a reindexed count of the total count, per resource type. In cases where reindexing for a specific resource type is queued, only `Count` is provided.
-* 'searchParams': Lists url of the search parameters impacted by the reindex job.
+* `searchParams`: Lists the URLs of the search parameters impacted by the reindex job.
## Delete a reindex job
If you need to cancel a reindex job, use a delete call and specify the reindex j
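As a sketch, the cancellation call uses the same operation URL as the status check, with the job ID returned when the reindex job was created:

```rest
DELETE {{FHIR URL}}/_operations/reindex/{{reindexJobId}}
```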
## Performance considerations
-A reindex job can be quite performance intensive. WeΓÇÖve implemented some throttling controls to help you manage how a reindex job will run on your database.
+A reindex job can be quite performance intensive. We've implemented some throttling controls to help you manage how a reindex job runs on your database.
> [!NOTE]
-> It is not uncommon on large datasets for a reindex job to run for days. For a database with 30,000,000 million resources, we noticed that it took 4-5 days at 100K RUs to reindex the entire database.
+> It is not uncommon on large datasets for a reindex job to run for days. For a database with 30,000,000 resources, we noticed that it took 4-5 days at 100,000 request units (RUs) to reindex the entire database.
-Below is a table outlining the available parameters, defaults, and recommended ranges. You can use these parameters to either speedup the process (use more compute) or slow down the process (use less compute). For example, you could run the reindex job on a low traffic time and increase your compute to get it done quicker. Instead, you could use the settings to ensure a low usage of compute and have it run for days in the background.
+The following table outlines the available parameters, defaults, and recommended ranges. You can use these parameters to either speed up the process (use more compute) or slow it down (use less compute). For example, you could run the reindex job during a low-traffic time and increase your compute to finish it quicker. You could also use the settings to ensure low compute usage and have it run for days in the background.
| **Parameter** | **Description** | **Default** | **Available Range** |
| --- | --- | --- | --- |
-| QueryDelayIntervalInMilliseconds | The delay between each batch of resources being kicked off during the reindex job. A smaller number will speed up the job while a higher number will slow it down. | 500 MS (.5 seconds) | 50-500000 |
+| QueryDelayIntervalInMilliseconds | The delay between each batch of resources being kicked off during the reindex job. A smaller number speeds up the job while a higher number slows it down. | 500 ms (0.5 seconds) | 50-500000 |
| MaximumResourcesPerQuery | The maximum number of resources included in the batch of resources to be reindexed. | 100 | 1-5000 |
| MaximumConcurrency | The number of batches done at a time. | 1 | 1-10 |
| targetDataStoreUsagePercentage | Allows you to specify what percent of your data store to use for the reindex job. For example, you could specify 50% and that would ensure that at most the reindex job would use 50% of available RUs on Azure Cosmos DB. | Not present, which means that up to 100% can be used. | 0-100 |
-If you want to use any of the parameters above, you can pass them into the Parameters resource when you start the reindex job.
+If you want to use any of the preceding parameters, you can pass them into the Parameters resource when you start the reindex job.
```json
{
If you want to use any of the parameters above, you can pass them into the Param
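As a hedged sketch only, such a request body could look like the following; the parameter names come from the preceding table, but the exact casing and value elements shown here are assumptions:

```json
{
  "resourceType": "Parameters",
  "parameter": [
    {
      "name": "maximumConcurrency",
      "valueInteger": 3
    },
    {
      "name": "queryDelayIntervalInMilliseconds",
      "valueInteger": 1000
    },
    {
      "name": "targetDataStoreUsagePercentage",
      "valueInteger": 50
    }
  ]
}
```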
## Next steps
-In this article, youΓÇÖve learned how to start a reindex job. To learn how to define new search parameters that require the reindex job, see
+In this article, you learned how to start a reindex job. To learn how to define new search parameters that require the reindex job, see
>[!div class="nextstepaction"] >[Defining custom search parameters](how-to-do-custom-search.md)
-FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Fhir Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/fhir-best-practices.md
+
+ Title: FHIR service best practices
+description: Best practices for higher performance in FHIR service
+++++ Last updated : 10/01/2024+++
+# Best practices for better performance in FHIR service
+
+This document provides guidance on best practices with the Azure Health Data Services FHIR&reg; service. You'll find practices you should **Do**, **Consider**, or **Avoid** to improve the performance of your FHIR service.
+
+> [!NOTE]
+> This document is scoped for Azure Health Data Services FHIR service customers.
+
+## Data ingestion
+
+### Import operation
+
+Azure FHIR service supports data ingestion through the import operation, which offers two modes: initial mode and incremental mode. For detailed guidance, refer to the [Importing data into the FHIR service](import-data.md) documentation.
+
+* **Consider** using the import operation over HTTP API requests to ingest the data into FHIR service. The import operation provides a high throughput and is a scalable method for loading data.
+To achieve optimal performance with the import operation, consider the following best practices.
+
+* **Do** use large files while ingesting data. The optimal NDJSON file size for import is 50 MB or larger (or 20,000 resources or more, with no upper limit). Combining smaller files into larger ones can enhance performance.
+* **Consider** importing all FHIR resource files in a single import operation for optimal performance. Aim for a total file size of 100 GB or more (or 100 million resources, no upper limit) in one operation. Maximizing an import in this way helps reduce the overhead associated with managing multiple import jobs.
+* **Consider** running multiple concurrent imports only if necessary, but limit parallel import jobs. A single large import is designed to consume all available system resources, and processing throughput doesn't increase with concurrent import jobs.
+
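+As an illustration only, an import call typically supplies a `Parameters` resource that points to NDJSON files in blob storage. The parameter names, mode value, and URL in this sketch are assumptions; see the import documentation linked above for the authoritative request format.
+
+```json
+{
+  "resourceType": "Parameters",
+  "parameter": [
+    { "name": "inputFormat", "valueString": "application/fhir+ndjson" },
+    { "name": "mode", "valueString": "IncrementalLoad" },
+    {
+      "name": "input",
+      "part": [
+        { "name": "type", "valueString": "Patient" },
+        { "name": "url", "valueUri": "https://<storage-account>.blob.core.windows.net/fhir-import/patients.ndjson" }
+      ]
+    }
+  ]
+}
+```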
+### Bundles
+
+In Azure FHIR service, bundles act as containers for multiple resources. Batch and transaction bundles enable users to submit sets of actions in a single HTTP request or response. Consider the following to achieve higher throughput with bundle ingestion.
+
+* **Do** tune the number of concurrent bundle requests to the FHIR server. A high number (>100) may lead to negative scaling and reduced processing throughput. The optimal concurrency is dependent on the complexity of the bundles and resources.
+* **Do** generate load on Azure FHIR service in a linear manner and avoid burst operations to prevent performance degradation.
+* **Consider** enabling parallel processing for batch and transaction bundles. By default, resources in bundles are processed sequentially. To enhance throughput, you can enable parallel resource processing by adding the HTTP header flag `x-bundle-processing-logic` and setting it to `parallel`, as shown in the sketch after this list. For more information, see the [batch bundle parallel processing documentation](rest-api-capabilities.md#bundle-parallel-processing).
+
+> [!NOTE]
+> Parallel bundle processing can enhance throughput when there isn't an implicit dependency on the order of resources within an HTTP operation.
+
+* **Consider** splitting resource entries across multiple bundles to increase parallelism, which can enhance throughput. Optimizing the number of resource entries in a bundle can reduce network time.
+* **Consider** using smaller bundle sizes for complex operations. Smaller transaction bundles can reduce errors and support data consistency. Use separate transaction bundles for FHIR resources that don't depend on each other, and can be updated separately.
+* **Avoid** submitting parallel bundle requests that attempt to update the same resources concurrently, which can cause delays in processing.
+
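+As referenced in the parallel-processing item above, here's a minimal sketch of a bundle request with the header flag set. The `{{FHIR_URL}}` placeholder and the omitted bundle body are assumptions for the sketch:
+
+```rest
+POST {{FHIR_URL}} HTTP/1.1
+content-type: application/fhir+json
+x-bundle-processing-logic: parallel
+```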
+### Search parameter index tuning
+
+Azure FHIR service is provisioned with predefined search parameters per resource. Search parameters are indexed for ease of use and efficient searching. Indexes are updated for every write on the FHIR service. [Selectable search parameters](selectable-search-parameters.md) allow you to enable or disable built-in search parameter indexes. This functionality helps you optimize storage use and performance by only enabling necessary search parameters. Focusing on relevant search parameters helps minimize the volume of data retrieved during ingestion.
+
+**Consider** disabling search indexes that your organization doesn't use to optimize performance.
+
+## Query performance optimization
+
+After data ingestion, optimizing query performance is crucial. To ensure optimal performance:
+
+* **Do** generate load on Azure FHIR service in a linear manner and avoid burst operations to prevent performance degradation.
+* **Consider** using the most selective search parameters (for instance, `identifier`) over parameters with low cardinality to optimize index usage.
+* **Consider** performing deterministic searches using logical identifiers. FHIR service provides two ways to identify a resource: logical identifiers and business identifiers.<br>
+Logical Identifiers are considered "deterministic" because FHIR operations performed with them are predictable. Business Identifiers are considered "conditional" because their operations have different behavior depending on the state of the system. We recommend deterministic operations using logical identifiers.
+* **Consider** using the `PUT` HTTP verb instead of `POST` where applicable. `PUT` requests can help maintain data integrity and optimize resource management. `POST` requests can lead to duplication of resources, poor data quality, and unnecessary growth in FHIR data size.
+* **Avoid** the use of `_revinclude` in search queries, as it can result in unbounded result sets and higher latencies.
+* **Avoid** using complex searches (for example: `_has`, or chained search parameters), as they impact query performance.
+
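+To illustrate a couple of the preceding points, here's a hedged sketch of a deterministic read by logical ID and a selective search on the `identifier` parameter. The resource ID and the identifier system and value are placeholders:
+
+```rest
+GET {{FHIR_URL}}/Patient/pat-001
+
+GET {{FHIR_URL}}/Patient?identifier=http://example.org/mrn|12345
+```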
+## Data extraction
+
+For data extraction, use the bulk `$export` operation as specified in the [HL7 FHIR Bulk Data Access specification](https://www.hl7.org/fhir/uv/bulkdata/).
+* **Do** use larger data blocks for system level exports when not using filters to maximize throughput. Azure FHIR service automatically splits them into parallel jobs.
+* **Consider** splitting Patient, Group, and filtered system exports into small data blocks for export.
+
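+As a hedged sketch, a system-level export request follows the Bulk Data Access pattern; the headers shown are the standard asynchronous-request headers from that specification, and `{{FHIR_URL}}` is a placeholder:
+
+```rest
+GET {{FHIR_URL}}/$export
+Accept: application/fhir+json
+Prefer: respond-async
+```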
+For more information on export operations, see [Export your FHIR data](export-data.md).
+
+By applying these best practices, you can enhance the performance and efficiency of data ingestion, bundle processing, query execution, and data extraction in Azure FHIR service.
+
iot-operations Howto Configure Observability Manual https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/configure-observability-monitoring/howto-configure-observability-manual.md
az provider register -n "Microsoft.AlertsManagement"
``` ## Install Azure Monitor managed service for Prometheus+ Azure Monitor managed service for Prometheus is a component of Azure Monitor Metrics. This managed service provides flexibility in the types of metric data that you can collect and analyze with Azure Monitor. Prometheus metrics share some features with platform and custom metrics. Prometheus metrics also use some different features to better support open-source tools such as PromQL and Grafana. Azure Monitor managed service for Prometheus allows you to collect and analyze metrics at scale using a Prometheus-compatible monitoring solution. This fully managed service is based on the Prometheus project from the Cloud Native Computing Foundation (CNCF). The service allows you to use the Prometheus query language (PromQL) to analyze and alert on the performance of monitored infrastructure and workloads, without having to operate the underlying infrastructure.
Azure Monitor managed service for Prometheus allows you to collect and analyze m
To set up Prometheus metrics collection for the new Arc-enabled cluster, follow the steps in [Configure Prometheus metrics collection](howto-configure-observability.md#configure-prometheus-metrics-collection). ## Install Container Insights+ Container Insights monitors the performance of container workloads deployed to the cloud. It gives you performance visibility by collecting memory and processor metrics from controllers, nodes, and containers that are available in Kubernetes through the Metrics API. After you enable monitoring from Kubernetes clusters, metrics and container logs are automatically collected through a containerized version of the Log Analytics agent for Linux. Metrics are sent to the metrics database in Azure Monitor. Log data is sent to your Log Analytics workspace. To monitor container workload performance, complete the steps to [enable container insights](/azure/azure-monitor/containers/kubernetes-monitoring-enable). ## Install Grafana+ Azure Managed Grafana is a data visualization platform built on top of the Grafana software by Grafana Labs. Azure Managed Grafana is a fully managed Azure service operated and supported by Microsoft. Grafana helps you bring together metrics, logs and traces into a single user interface. With its extensive support for data sources and graphing capabilities, you can view and analyze your application and infrastructure telemetry data in real-time. Azure IoT Operations provides a collection of dashboards designed to give you many of the visualizations you need to understand the health and performance of your Azure IoT Operations deployment.
To install Azure Managed Grafana, complete the following steps:
1. Configure the dashboards by following the steps in [Deploy dashboards to Grafana](howto-configure-observability.md#deploy-dashboards-to-grafana).
+## Install OpenTelemetry (OTel) Collector
+
+OpenTelemetry Collector is a key component in the OpenTelemetry project, which is an open-source observability framework aimed at providing unified tracing, metrics, and logging for distributed systems. The collector is designed to receive, process, and export telemetry data from multiple sources, such as applications and infrastructure, and send it to a monitoring backend. This OTel Collector collects metrics from Azure IoT Operations and makes them available to other observability tooling like Azure Monitor managed service for Prometheus, Container Insights, and Grafana.
+
+To install the OTel Collector, complete the following steps:
++ ## Related content - [Deploy observability resources with a script](howto-configure-observability.md)
iot-operations Howto Configure Observability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/configure-observability-monitoring/howto-configure-observability.md
- ignite-2023 Previously updated : 02/27/2024 Last updated : 09/26/2024 # CustomerIntent: As an IT admin or operator, I want to be able to monitor and visualize data on the health of my industrial assets and edge environment.
Last updated 02/27/2024
[!INCLUDE [public-preview-note](../includes/public-preview-note.md)]
-Observability provides visibility into every layer of your Azure IoT Operations configuration. It gives you insight into the actual behavior of issues, which increases the effectiveness of site reliability engineering. Azure IoT Operations offers observability through custom curated Grafana dashboards that are hosted in Azure. These dashboards are powered by Azure Monitor managed service for Prometheus and by Container Insights. This article shows you how to configure the services you need for observability.
+Observability provides visibility into every layer of your Azure IoT Operations configuration. It gives you insight into the actual behavior of issues, which increases the effectiveness of site reliability engineering. Azure IoT Operations offers observability through custom curated Grafana dashboards that are hosted in Azure. These dashboards are powered by Azure Monitor managed service for Prometheus and by Container Insights. This article shows you how to configure the services you need for observability.
## Prerequisites -- Azure IoT Operations Preview installed. For more information, see [Quickstart: Run Azure IoT Operations Preview in Github Codespaces with K3s](../get-started-end-to-end-sample/quickstart-deploy.md).-- [Git](https://git-scm.com/downloads) for cloning the repository.
+* An Arc-enabled Kubernetes cluster.
+* Helm installed on your development machine. For instructions, see [Install Helm](https://helm.sh/docs/intro/install/).
+* Kubectl installed on your development machine. For instructions, see [Install Kubernetes tools](https://kubernetes.io/docs/tasks/tools/).
+* Azure CLI installed on your development machine. For more information, see [How to install the Azure CLI](/cli/azure/install-azure-cli).
## Configure your subscription
az provider register -n "Microsoft.AlertsManagement"
``` ## Install observability components
-The steps in this section install shared monitoring resources and configure your Arc enabled cluster to emit observability signals to these resources. The shared monitoring resources include Azure Managed Grafana, Azure Monitor Workspace, Azure Managed Prometheus, Azure Log Analytics, and Container Insights.
-
-1. In your console, go to the local folder where you want to clone the Azure IoT Operations repo:
- > [!NOTE]
- > The repo contains the deployment definition of Azure IoT Operations, and samples that include the sample dashboards used in this article.
-
-1. Clone the repo to your local machine, using the following command:
-
- ```shell
- git clone https://github.com/Azure/azure-iot-operations.git
- ```
-
-1. Browse to the following path in your local copy of the repo:
-
- *azure-iot-operations\tools\setup-3p-obs-infra*
-
-1. To deploy the observability components, run the following command. Use the subscription ID and resource group of your Arc-enabled cluster that you want to monitor.
-
- > [!NOTE]
- > To discover other optional parameters you can set, see the [bicep file](https://github.com/Azure/azure-iot-operations/blob/main/tools/setup-3p-obs-infra/observability-full.bicep). The optional parameters can specify things like alternative locations for cluster resources.
-
- ```azurecli
- az deployment group create \
- --subscription <subscription-id> \
- --resource-group <cluster-resource-group> \
- --template-file observability-full.bicep \
- --parameters grafanaAdminId=$(az ad user show --id $(az account show --query user.name --output tsv) --query=id --output tsv) \
- clusterName=<cluster-name> \
- sharedResourceGroup=<shared-resource-group> \
- sharedResourceLocation=<shared-resource-location> \
- --query=properties.outputs
- ```
-
- The previous command grants admin access for the newly created Grafana instance to the user who runs it. If that access isn't what you want, run the following command instead. You need to set up permissions manually before anyone can access the Grafana instance.
-
- ```azurecli
- az deployment group create \
- --subscription <subscription-id> \
- --resource-group <cluster-resource-group> \
- --template-file observability-full.bicep \
- --parameters clusterName=<cluster-name> \
- sharedResourceGroup=<shared-resource-group> \
- sharedResourceLocation=<shared-resource-location> \
- --query=properties.outputs
- ```
- To set up permissions manually, [add a role assignment](../../managed-grafan#add-a-grafana-role-assignment) to the Grafana instance for any users who should have access. Assign one of the Grafana roles (Grafana Admin, Grafana Editor, Grafana Viewer) depending on the level of access desired.
+The steps in this section deploy an [OpenTelemetry (OTel) Collector](https://opentelemetry.io/docs/collector/), then install shared monitoring resources and configure your Arc-enabled cluster to emit observability signals to these resources. The shared monitoring resources include Azure Managed Grafana, Azure Monitor Workspace, Azure Managed Prometheus, Azure Log Analytics, and Container Insights.
+
+### Deploy OpenTelemetry Collector
++
+### Deploy observability components
+
+- Deploy the observability components by running one of the following commands. Use the subscription ID and resource group of the Arc-enabled cluster that you want to monitor.
+
+ > [!NOTE]
+ > To discover other optional parameters you can set, see the [bicep file](https://github.com/Azure/azure-iot-operations/blob/main/tools/setup-3p-obs-infra/observability-full.bicep). The optional parameters can specify things like alternative locations for cluster resources.
+
+   The following command grants admin access for the newly created Grafana instance to the user who runs it:
-If the deployment succeeds, a few pieces of information are printed at the end of the command output. The information includes the Grafana URL and the resource IDs for both the Log Analytics and Azure Monitor resources that were created. The Grafana URL allows you to go to the Grafana instance that you configure in [Deploy dashboards to Grafana](#deploy-dashboards-to-grafana). The two resource IDs enable you to configure other Arc enabled clusters by following the steps in [Add an Arc-enabled cluster to existing observability infrastructure](howto-add-cluster.md).
+ ```azurecli
+ az deployment group create \
+ --subscription <subscription-id> \
+ --resource-group <cluster-resource-group> \
+ --template-file observability-full.bicep \
+ --parameters grafanaAdminId=$(az ad user show --id $(az account show --query user.name --output tsv) --query=id --output tsv) \
+ clusterName=<cluster-name> \
+ sharedResourceGroup=<shared-resource-group> \
+ sharedResourceLocation=<shared-resource-location> \
+ --query=properties.outputs
+ ```
+
+   If that access isn't what you want, run the following command instead, which doesn't configure permissions. Then, set up permissions manually using [role assignments](../../managed-grafan#add-a-grafana-role-assignment) before anyone can access the Grafana instance. Assign one of the Grafana roles (Grafana Admin, Grafana Editor, Grafana Viewer) depending on the level of access desired.
+
+ ```azurecli
+ az deployment group create \
+ --subscription <subscription-id> \
+ --resource-group <cluster-resource-group> \
+ --template-file observability-full.bicep \
+ --parameters clusterName=<cluster-name> \
+ sharedResourceGroup=<shared-resource-group> \
+ sharedResourceLocation=<shared-resource-location> \
+ --query=properties.outputs
+ ```
+
+ If the deployment succeeds, a few pieces of information are printed at the end of the command output. The information includes the Grafana URL and the resource IDs for both the Log Analytics and Azure Monitor resources that were created. The Grafana URL allows you to go to the Grafana instance that you configure in [Deploy dashboards to Grafana](#deploy-dashboards-to-grafana). The two resource IDs enable you to configure other Arc enabled clusters by following the steps in [Add an Arc-enabled cluster to existing observability infrastructure](howto-add-cluster.md).
## Configure Prometheus metrics collection
-1. Copy and paste the following configuration to a new file named *ama-metrics-prometheus-config.yaml*, and save the file:
-
- ```yml
- apiVersion: v1
- data:
- prometheus-config: |2-
- scrape_configs:
- - job_name: e4k
- scrape_interval: 1m
- static_configs:
- - targets:
- - aio-mq-diagnostics-service.azure-iot-operations.svc.cluster.local:9600
- - job_name: nats
- scrape_interval: 1m
- static_configs:
- - targets:
- - aio-dp-msg-store-0.aio-dp-msg-store-headless.azure-iot-operations.svc.cluster.local:7777
- - job_name: otel
- scrape_interval: 1m
- static_configs:
- - targets:
- - aio-otel-collector.azure-iot-operations.svc.cluster.local:8889
- - job_name: aio-annotated-pod-metrics
- kubernetes_sd_configs:
- - role: pod
- relabel_configs:
- - action: drop
- regex: true
- source_labels:
- - __meta_kubernetes_pod_container_init
- - action: keep
- regex: true
- source_labels:
- - __meta_kubernetes_pod_annotation_prometheus_io_scrape
- - action: replace
- regex: ([^:]+)(?::\\d+)?;(\\d+)
- replacement: $1:$2
- source_labels:
- - __address__
- - __meta_kubernetes_pod_annotation_prometheus_io_port
- target_label: __address__
- - action: replace
- source_labels:
- - __meta_kubernetes_namespace
- target_label: kubernetes_namespace
- - action: keep
- regex: 'azure-iot-operations'
- source_labels:
- - kubernetes_namespace
- scrape_interval: 1m
- kind: ConfigMap
- metadata:
- name: ama-metrics-prometheus-config
- namespace: kube-system
- ```
-
-1. To apply the configuration file you created, run the following command:
-
- `kubectl apply -f ama-metrics-prometheus-config.yaml`
+
+1. Copy and paste the following configuration to a new file named `ama-metrics-prometheus-config.yaml`, and save the file:
+
+ ```yml
+ apiVersion: v1
+ data:
+ prometheus-config: |2-
+ scrape_configs:
+ - job_name: e4k
+ scrape_interval: 1m
+ static_configs:
+ - targets:
+ - aio-internal-diagnostics-service.azure-iot-operations.svc.cluster.local:9600
+ - job_name: nats
+ scrape_interval: 1m
+ static_configs:
+ - targets:
+ - aio-dp-msg-store-0.aio-dp-msg-store-headless.azure-iot-operations.svc.cluster.local:7777
+ - job_name: otel
+ scrape_interval: 1m
+ static_configs:
+ - targets:
+ - aio-otel-collector.azure-iot-operations.svc.cluster.local:8889
+ - job_name: aio-annotated-pod-metrics
+ kubernetes_sd_configs:
+ - role: pod
+ relabel_configs:
+ - action: drop
+ regex: true
+ source_labels:
+ - __meta_kubernetes_pod_container_init
+ - action: keep
+ regex: true
+ source_labels:
+ - __meta_kubernetes_pod_annotation_prometheus_io_scrape
+ - action: replace
+ regex: ([^:]+)(?::\\d+)?;(\\d+)
+ replacement: $1:$2
+ source_labels:
+ - __address__
+ - __meta_kubernetes_pod_annotation_prometheus_io_port
+ target_label: __address__
+ - action: replace
+ source_labels:
+ - __meta_kubernetes_namespace
+ target_label: kubernetes_namespace
+ - action: keep
+ regex: 'azure-iot-operations'
+ source_labels:
+ - kubernetes_namespace
+ scrape_interval: 1m
+ kind: ConfigMap
+ metadata:
+ name: ama-metrics-prometheus-config
+ namespace: kube-system
+ ```
+
+1. Apply the configuration file by running the following command:
+
+ ```shell
+ kubectl apply -f ama-metrics-prometheus-config.yaml
+ ```
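+
+   Optionally, you can verify that the ConfigMap exists before moving on. This is just a quick check; the name and namespace come from the file you applied:
+
+   ```shell
+   kubectl get configmap ama-metrics-prometheus-config -n kube-system
+   ```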
## Deploy dashboards to Grafana+ Azure IoT Operations provides a collection of dashboards designed to give you many of the visualizations you need to understand the health and performance of your Azure IoT Operations deployment.
-Complete the following steps to install the Azure IoT Operations curated Grafana dashboards.
+Complete the following steps to install the Azure IoT Operations curated Grafana dashboards.
1. Sign in to the Grafana console, then in the upper right area of the Grafana application, select the **+** icon
Complete the following steps to install the Azure IoT Operations curated Grafana
## Related content - [Azure Monitor overview](/azure/azure-monitor/overview)-- [How to Deploy observability resources manually](howto-configure-observability-manual.md)
+- [How to deploy observability resources manually](howto-configure-observability-manual.md)
iot-operations Concept Dataflow Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/connect-to-cloud/concept-dataflow-mapping.md
Previously updated : 08/03/2024 Last updated : 09/24/2024
+ai-usage: ai-assisted
#CustomerIntent: As an operator, I want to understand how to use the dataflow mapping language to transform data.
The transformations are achieved through *mapping*, which typically involves:
* **Input definition**: Identifying the fields in the input records that are used. * **Output definition**: Specifying where and how the input fields are organized in the output records.
-* **Conversion (optional)**: Modifying the input fields to fit into the output fields. Conversion is required when multiple input fields are combined into a single output field.
+* **Conversion (optional)**: Modifying the input fields to fit into the output fields. `expression` is required when multiple input fields are combined into a single output field.
The following mapping is an example:
The example maps:
Field references show how to specify paths in the input and output by using dot notation like `Employee.DateOfBirth` or accessing data from a contextual dataset via `$context(position)`.
+### MQTT user properties
+
+When you use MQTT as a source or destination, you can access MQTT user properties in the mapping language. User properties can be mapped in the input or output.
+
+In the following example, the MQTT `topic` property is mapped to the `origin_topic` field in the output.
+
+```yaml
+ inputs:
+ - $metadata.topic
+ output: origin_topic
+```
+
+You can also map MQTT properties to an output header. In the following example, the MQTT `topic` is mapped to the `origin_topic` field in the output's user property:
+
+```yaml
+ inputs:
+ - $metadata.topic
+ output: $metadata.user_property.origin_topic
+```
+ ## Contextualization dataset selectors These selectors allow mappings to integrate extra data from external databases, which are referred to as *contextualization datasets*.
In the previous example, the path consists of three segments: `Payload`, `Tag.10
```yaml - inputs:
- - 'Payload.He said: "No. It's done"'
+ - 'Payload.He said: "No. It is done"'
```
- In this case, the path is split into the segments `Payload`, `He said: "No`, and `It's done"` (starting with a space).
+ In this case, the path is split into the segments `Payload`, `He said: "No`, and `It is done"` (starting with a space).
### Segmentation algorithm
Let's consider a basic scenario to understand the use of asterisks in mappings:
```yaml - inputs:
- - *
- output: *
+ - '*'
+ output: '*'
``` Here's how the asterisk (`*`) operates in this context:
Mapping configuration that uses wildcards:
```yaml - inputs:
- - ColorProperties.*
- output: *
+ - 'ColorProperties.*'
+ output: '*'
- inputs:
- - TextureProperties.*
- output: *
+ - 'TextureProperties.*'
+ output: '*'
``` Resulting JSON:
When you place a wildcard, you must follow these rules:
* **At the beginning:** `*.path2.path3` - Here, the asterisk matches any segment that leads up to `path2.path3`. * **In the middle:** `path1.*.path3` - In this configuration, the asterisk matches any segment between `path1` and `path3`. * **At the end:** `path1.path2.*` - The asterisk at the end matches any segment that follows after `path1.path2`.
+* The path containing the asterisk must be enclosed in single quotation marks (`'`).
### Multi-input wildcards
Mapping configuration that uses wildcards:
```yaml - inputs:
- - *.Max # - $1
- - *.Min # - $2
- output: ColorProperties.*
- conversion: ($1 + $2) / 2
+ - '*.Max' # - $1
+ - '*.Min' # - $2
+ output: 'ColorProperties.*'
+ expression: ($1 + $2) / 2
``` Resulting JSON:
Initial mapping configuration that uses wildcards:
```yaml - inputs:
- - *.Max # - $1
- - *.Min # - $2
- - *.Avg # - $3
- - *.Mean # - $4
- output: ColorProperties.*
+ - '*.Max' # - $1
+ - '*.Min' # - $2
+ - '*.Avg' # - $3
+ - '*.Mean' # - $4
+ output: 'ColorProperties.*'
expression: ($1, $2, $3, $4) ```
Corrected mapping configuration:
```yaml - inputs:
- - *.Max # - $1
- - *.Min # - $2
- - *.Mid.Avg # - $3
- - *.Mid.Mean # - $4
- output: ColorProperties.*
+ - '*.Max' # - $1
+ - '*.Min' # - $2
+ - '*.Mid.Avg' # - $3
+ - '*.Mid.Mean' # - $4
+ output: 'ColorProperties.*'
expression: ($1, $2, $3, $4) ```
When you use the previous example from multi-input wildcards, consider the follo
```yaml - inputs:
- - *.Max # - $1
- - *.Min # - $2
- output: ColorProperties.*.Avg
+ - '*.Max' # - $1
+ - '*.Min' # - $2
+ output: 'ColorProperties.*.Avg'
expression: ($1 + $2) / 2 - inputs:
- - *.Max # - $1
- - *.Min # - $2
- output: ColorProperties.*.Diff
+ - '*.Max' # - $1
+ - '*.Min' # - $2
+ output: 'ColorProperties.*.Diff'
expression: abs($1 - $2) ```
Now, consider a scenario where a specific field needs a different calculation:
```yaml - inputs:
- - *.Max # - $1
- - *.Min # - $2
- output: ColorProperties.*
+ - '*.Max' # - $1
+ - '*.Min' # - $2
+ output: 'ColorProperties.*'
expression: ($1 + $2) / 2 - inputs:
Consider a special case for the same fields to help decide the right action:
```yaml - inputs:
- - *.Max # - $1
- - *.Min # - $2
- output: ColorProperties.*
+ - '*.Max' # - $1
+ - '*.Min' # - $2
+ output: 'ColorProperties.*'
expression: ($1 + $2) / 2 - inputs:
This mapping copies `BaseSalary` from the context dataset directly into the `Emp
```yaml - inputs:
- - $context(position).*
- output: Employment.*
+ - '$context(position).*'
+ output: 'Employment.*'
``` This configuration allows for a dynamic mapping where every field within the `position` dataset is copied into the `Employment` section of the output record:
This configuration allows for a dynamic mapping where every field within the `po
} ```
+## Last known value
+
+You can track the last known value of a property. Suffix the input field with `? $last` to capture the last known value of the field. When a property is missing a value in a subsequent input payload, the last known value is mapped to the output payload.
+
+For example, consider the following mapping:
+
+```yaml
+- inputs:
+ - Temperature ? $last
+ output: Thermostat.Temperature
+```
+
+In this example, the last known value of `Temperature` is tracked. If a subsequent input payload doesn't contain a `Temperature` value, the last known value is used in the output.
iot-operations Concept Schema Registry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/connect-to-cloud/concept-schema-registry.md
+
+ Title: Understand message schemas
+description: Learn how schema registry handles message schemas to work with Azure IoT Operations components including dataflows.
+++ Last updated : 09/23/2024+
+#CustomerIntent: As an operator, I want to understand how I can use message schemas to filter and transform messages.
++
+# Understand message schemas
+
+Schema registry, a feature provided by Azure Device Registry Preview, is a synchronized repository in the cloud and at the edge. The schema registry stores the definitions of messages coming from edge assets, and then exposes an API to access those schemas at the edge.
+
+The connector for OPC UA can create message schemas and add them to the schema registry, or customers can upload schemas in the operations experience web UI or by using ARM/Bicep templates.
+
+Edge services use message schemas to filter and transform messages as they're routed across your industrial edge scenario.
+
+*Schemas* are documents that describe the format of a message and its contents to enable processing and contextualization.
+
+## Message schema definitions
+
+Schema registry expects the following required fields in a message schema:
+
+| Required field | Definition |
+| -- | - |
+| `$schema` | Either `http://json-schema.org/draft-07/schema#` or `Delta/1.0`. In dataflows, JSON schemas are used for source endpoints and Delta schemas are used for destination endpoints. |
+| `type` | `Object` |
+| `properties` | The message definition. |
+
+### Sample schemas
+
+The following sample schemas provide examples for defining message schemas in each format.
+
+JSON:
+
+```json
+{
+ "$schema": "http://json-schema.org/draft-07/schema#",
+ "name": "foobarbaz",
+ "description": "A representation of an event",
+ "type": "object",
+ "required": [ "dtstart", "summary" ],
+ "properties": {
+ "summary": {
+ "type": "string"
+ },
+ "location": {
+ "type": "string"
+ },
+ "url": {
+ "type": "string"
+ },
+ "duration": {
+ "type": "string",
+ "description": "Event duration"
+ }
+ }
+}
+```
+
+Delta:
+
+```delta
+{
+ "$schema": "Delta/1.0",
+ "type": "object",
+ "properties": {
+ "type": "struct",
+ "fields": [
+ { "name": "asset_id", "type": "string", "nullable": false, "metadata": {} },
+ { "name": "asset_name", "type": "string", "nullable": false, "metadata": {} },
+ { "name": "location", "type": "string", "nullable": false, "metadata": {} },
+ { "name": "manufacturer", "type": "string", "nullable": false, "metadata": {} },
+ { "name": "production_date", "type": "string", "nullable": false, "metadata": {} },
+ { "name": "serial_number", "type": "string", "nullable": false, "metadata": {} },
+ { "name": "temperature", "type": "double", "nullable": false, "metadata": {} }
+ ]
+ }
+}
+```
+
+## How dataflows use message schemas
+
+Message schemas are used in all three phases of a dataflow: defining the source input, applying data transformations, and creating the destination output.
+
+### Input schema
+
+Each dataflow source can optionally specify a message schema. If a schema is defined for a dataflow source, any incoming messages that don't match the schema are dropped.
+
+Asset sources have a predefined message schema that was created by the connector for OPC UA.
+
+Schemas can be uploaded for MQTT sources. Currently, Azure IoT Operations supports JSON for source schemas, also known as input schemas. In the operations experience, you can select an existing schema or upload one while defining an MQTT source:
++
+### Transformation
+
+The operations experience uses the input schema as a starting point for your data, making it easier to select transformations based on the known input message format.
+
+### Output schema
+
+Output schemas are associated with dataflow destinations and are only used for dataflows that select local storage, Fabric, Azure Storage (ADLS Gen2), or Azure Data Explorer as the destination endpoint. Currently, Azure IoT Operations experience only supports Parquet output for output schemas.
+
+> [!NOTE]
+> The Delta schema format is used for both Parquet and Delta output.
+
+For these dataflows, the operations experience applies any transformations to the input schema and then creates a new schema in Delta format. When the dataflow custom resource (CR) is created, it includes a `schemaRef` value that points to the generated schema stored in the schema registry.
iot-operations Howto Configure Adlsv2 Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/connect-to-cloud/howto-configure-adlsv2-endpoint.md
+
+ Title: Configure dataflow endpoints for Azure Data Lake Storage Gen2
+description: Learn how to configure dataflow endpoints for Azure Data Lake Storage Gen2 in Azure IoT Operations.
++++ Last updated : 10/02/2024
+ai-usage: ai-assisted
+
+#CustomerIntent: As an operator, I want to understand how to configure dataflow endpoints for Azure Data Lake Storage Gen2 in Azure IoT Operations so that I can send data to Azure Data Lake Storage Gen2.
++
+# Configure dataflow endpoints for Azure Data Lake Storage Gen2
++
+To send data to Azure Data Lake Storage Gen2 in Azure IoT Operations Preview, you can configure a dataflow endpoint. This configuration allows you to specify the destination endpoint, authentication method, table, and other settings.
+
+## Prerequisites
+
+- An instance of [Azure IoT Operations Preview](../deploy-iot-ops/howto-deploy-iot-operations.md)
+- A [configured dataflow profile](howto-configure-dataflow-profile.md)
+- An [Azure Data Lake Storage Gen2 account](../../storage/blobs/create-data-lake-storage-account.md)
+
+## Create an Azure Data Lake Storage Gen2 dataflow endpoint
+
+To configure a dataflow endpoint for Azure Data Lake Storage Gen2, we suggest using the managed identity of the Azure Arc-enabled Kubernetes cluster. This approach is secure and eliminates the need for secret management. Alternatively, you can authenticate with the storage account using an access token. When using an access token, you would need to create a Kubernetes secret containing the SAS token.
+
+### Use managed identity authentication
+
+1. Get the managed identity of the Azure IoT Operations Preview Arc extension.
+
+1. Assign a role to the managed identity that grants permission to write to the storage account, such as *Storage Blob Data Contributor*. To learn more, see [Authorize access to blobs using Microsoft Entra ID](../../storage/blobs/authorize-access-azure-active-directory.md).
+
+1. Create the *DataflowEndpoint* resource and specify the managed identity authentication method.
+
+ ```yaml
+ apiVersion: connectivity.iotoperations.azure.com/v1beta1
+ kind: DataflowEndpoint
+ metadata:
+ name: adls
+ spec:
+ endpointType: DataLakeStorage
+ datalakeStorageSettings:
+ host: <account>.blob.core.windows.net
+ authentication:
+ method: SystemAssignedManagedIdentity
+ systemAssignedManagedIdentitySettings: {}
+ ```
+
+If you need to override the system-assigned managed identity audience, see the [System-assigned managed identity](#system-assigned-managed-identity) section.
+
+### Use access token authentication
+
+1. Follow the steps in the [access token](#access-token) section to get a SAS token for the storage account and store it in a Kubernetes secret.
+
+1. Create the *DataflowEndpoint* resource and specify the access token authentication method.
+
+ ```yaml
+ apiVersion: connectivity.iotoperations.azure.com/v1beta1
+ kind: DataflowEndpoint
+ metadata:
+ name: adls
+ spec:
+ endpointType: DataLakeStorage
+ datalakeStorageSettings:
+ host: <account>.blob.core.windows.net
+ authentication:
+ method: AccessToken
+ accessTokenSettings:
+ secretRef: my-sas
+ ```
+
+## Configure dataflow destination
+
+Once the endpoint is created, you can use it in a dataflow by specifying the endpoint name in the dataflow's destination settings. The following example is a dataflow configuration that uses the MQTT endpoint for the source and Azure Data Lake Storage Gen2 as the destination. The source data is from the MQTT topics `thermostats/+/telemetry/temperature/#` and `humidifiers/+/telemetry/humidity/#`. The destination sends the data to Azure Data Lake Storage table `telemetryTable`.
+
+```yaml
+apiVersion: connectivity.iotoperations.azure.com/v1beta1
+kind: Dataflow
+metadata:
+ name: my-dataflow
+ namespace: azure-iot-operations
+spec:
+ profileRef: default
+ mode: Enabled
+ operations:
+ - operationType: Source
+ sourceSettings:
+ endpointRef: mq
+ dataSources:
+ - thermostats/+/telemetry/temperature/#
+ - humidifiers/+/telemetry/humidity/#
+ - operationType: Destination
+ destinationSettings:
+ endpointRef: adls
+ dataDestination: telemetryTable
+```
+
+For more information about dataflow destination settings, see [Create a dataflow](howto-create-dataflow.md).
+
+> [!NOTE]
+> Using the ADLSv2 endpoint as a source in a dataflow isn't supported. You can use the endpoint as a destination only.
+
+To customize the endpoint settings, see the following sections for more information.
+
+### Available authentication methods
+
+The following authentication methods are available for Azure Data Lake Storage Gen2 endpoints.
+
+For more information about enabling secure settings by configuring an Azure Key Vault and enabling workload identities, see [Enable secure settings in Azure IoT Operations Preview deployment](../deploy-iot-ops/howto-enable-secure-settings.md).
+
+#### System-assigned managed identity
+
+Using the system-assigned managed identity is the recommended authentication method for Azure IoT Operations. Azure IoT Operations creates the managed identity automatically and assigns it to the Azure Arc-enabled Kubernetes cluster. It eliminates the need for secret management and allows for seamless authentication with the Azure Data Lake Storage Gen2 account.
+
+Before creating the dataflow endpoint, assign a role to the managed identity that has write permission to the storage account. For example, you can assign the *Storage Blob Data Contributor* role. To learn more about assigning roles to blobs, see [Authorize access to blobs using Microsoft Entra ID](../../storage/blobs/authorize-access-azure-active-directory.md).
+
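+For example, a role assignment with the Azure CLI might look like the following sketch. The principal ID refers to the managed identity of the Azure IoT Operations Arc extension, and the scope is the resource ID of your storage account; both values here are placeholders:
+
+```azurecli
+az role assignment create \
+  --assignee <extension-managed-identity-principal-id> \
+  --role "Storage Blob Data Contributor" \
+  --scope /subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>
+```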
+In the *DataflowEndpoint* resource, specify the managed identity authentication method. In most cases, you don't need to specify other settings. Not specifying an audience creates a managed identity with the default audience scoped to your storage account.
+
+```yaml
+datalakeStorageSettings:
+ authentication:
+ method: SystemAssignedManagedIdentity
+ systemAssignedManagedIdentitySettings: {}
+```
+
+If you need to override the system-assigned managed identity audience, you can specify the `audience` setting.
+
+```yaml
+datalakeStorageSettings:
+ authentication:
+ method: SystemAssignedManagedIdentity
+ systemAssignedManagedIdentitySettings:
+ audience: https://<account>.blob.core.windows.net
+```
+
+#### Access token
+
+Using an access token is an alternative authentication method. This method requires you to create a Kubernetes secret with the SAS token and reference the secret in the *DataflowEndpoint* resource.
+
+Get a [SAS token](../../storage/common/storage-sas-overview.md) for an Azure Data Lake Storage Gen2 (ADLSv2) account. For example, use the Azure portal to browse to your storage account. On the left menu, choose **Security + networking** > **Shared access signature**. Use the following table to set the required permissions.
+
+| Parameter | Enabled setting |
+| - | |
+| Allowed services | Blob |
+| Allowed resource types | Object, Container |
+| Allowed permissions | Read, Write, Delete, List, Create |
+
+To enhance security and follow the principle of least privilege, you can generate a SAS token for a specific container. To prevent authentication errors, ensure that the container specified in the SAS token matches the dataflow destination setting in the configuration.
+
+Create a Kubernetes secret with the SAS token. Don't include the question mark `?` that might be at the beginning of the token.
+
+```bash
+kubectl create secret generic my-sas \
+--from-literal=accessToken='sv=2022-11-02&ss=b&srt=c&sp=rwdlax&se=2023-07-22T05:47:40Z&st=2023-07-21T21:47:40Z&spr=https&sig=<signature>' \
+-n azure-iot-operations
+```
+
+Create the *DataflowEndpoint* resource with the secret reference.
+
+```yaml
+datalakeStorageSettings:
+ authentication:
+ method: AccessToken
+ accessTokenSettings:
+ secretRef: my-sas
+```
+
+#### User-assigned managed identity
+
+To use a user-assigned managed identity, specify the `UserAssignedManagedIdentity` authentication method and provide the `clientId` and `tenantId` of the managed identity.
+
+```yaml
+datalakeStorageSettings:
+ authentication:
+ method: UserAssignedManagedIdentity
+ userAssignedManagedIdentitySettings:
+ clientId: <ID>
+ tenantId: <ID>
+```
+
+## Advanced settings
+
+You can set advanced settings for the Azure Data Lake Storage Gen2 endpoint, such as the batching latency and message count.
+
+Use the `batching` settings to configure the maximum number of messages and the maximum latency before the messages are sent to the destination. This setting is useful when you want to optimize for network bandwidth and reduce the number of requests to the destination.
+
+| Field | Description | Required |
+| -- | -- | -- |
+| `latencySeconds` | The maximum number of seconds to wait before sending the messages to the destination. The default value is 60 seconds. | No |
+| `maxMessages` | The maximum number of messages to send to the destination. The default value is 100000 messages. | No |
+
+For example, to configure the maximum number of messages to 1000 and the maximum latency to 100 seconds, use the following settings:
+
+Set the values in the dataflow endpoint custom resource.
+
+```yaml
+datalakeStorageSettings:
+ batching:
+ latencySeconds: 100
+ maxMessages: 1000
+```
iot-operations Howto Configure Adx Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/connect-to-cloud/howto-configure-adx-endpoint.md
+
+ Title: Configure dataflow endpoints for Azure Data Explorer
+description: Learn how to configure dataflow endpoints for Azure Data Explorer in Azure IoT Operations.
++++ Last updated : 09/20/2024
+ai-usage: ai-assisted
+
+#CustomerIntent: As an operator, I want to understand how to configure dataflow endpoints for Azure Data Explorer in Azure IoT Operations so that I can send data to Azure Data Explorer.
++
+# Configure dataflow endpoints for Azure Data Explorer
++
+To send data to Azure Data Explorer in Azure IoT Operations Preview, you can configure a dataflow endpoint. This configuration allows you to specify the destination endpoint, authentication method, table, and other settings.
+
+## Prerequisites
+
+- An instance of [Azure IoT Operations Preview](../deploy-iot-ops/howto-deploy-iot-operations.md)
+- A [configured dataflow profile](howto-configure-dataflow-profile.md)
+- An **Azure Data Explorer cluster**. Follow the **Full cluster** steps in the [Quickstart: Create an Azure Data Explorer cluster and database](/azure/data-explorer/create-cluster-and-database). The *free cluster* option doesn't work for this scenario.
+
+## Create an Azure Data Explorer database
+
+1. In the Azure portal, create a database in your Azure Data Explorer *full* cluster.
+
+1. Create a table in your database for the data. You can use the Azure portal and create columns manually, or you can use [KQL](/azure/data-explorer/kusto/management/create-table-command) in the query tab. For example, to create a table for sample thermostat data, run the following command:
+
+ ```kql
+ .create table thermostat (
+ externalAssetId: string,
+ assetName: string,
+ CurrentTemperature: real,
+ Pressure: real,
+ MqttTopic: string,
+ Timestamp: datetime
+ )
+ ```
+
+1. Enable streaming ingestion on your table and database. In the query tab, run the following command, substituting `<DATABASE_NAME>` with your database name:
+
+ ```kql
+ .alter database ['<DATABASE_NAME>'] policy streamingingestion enable
+ ```
+
+ Alternatively, you can enable streaming ingestion on the entire cluster. See [Enable streaming ingestion on an existing cluster](/azure/data-explorer/ingest-data-streaming#enable-streaming-ingestion-on-an-existing-cluster).
+
+1. In Azure portal, go to the Arc-connected Kubernetes cluster and select **Settings** > **Extensions**. In the extension list, find the name of your Azure IoT Operations extension. Copy the name of the extension.
+
+1. In your Azure Data Explorer database, under **Security + networking** select **Permissions** > **Add** > **Ingestor**. Search for the Azure IoT Operations extension name then add it.
+
+## Create an Azure Data Explorer dataflow endpoint
+
+Create the dataflow endpoint resource with your cluster and database information. We suggest using the managed identity of the Azure Arc-enabled Kubernetes cluster. This approach is secure and eliminates the need for secret management.
+
+```yaml
+apiVersion: connectivity.iotoperations.azure.com/v1beta1
+kind: DataflowEndpoint
+metadata:
+ name: adx
+ namespace: azure-iot-operations
+spec:
+ endpointType: DataExplorer
+ dataExplorerSettings:
+ host: <cluster>.<region>.kusto.windows.net
+ database: <database-name>
+ authentication:
+ method: SystemAssignedManagedIdentity
+ systemAssignedManagedIdentitySettings: {}
+```
+
+## Configure dataflow destination
+
+Once the endpoint is created, you can use it in a dataflow by specifying the endpoint name in the dataflow's destination settings.
+
+```yaml
+apiVersion: connectivity.iotoperations.azure.com/v1beta1
+kind: Dataflow
+metadata:
+ name: my-dataflow
+ namespace: azure-iot-operations
+spec:
+ profileRef: default
+ mode: Enabled
+ operations:
+ - operationType: Source
+ sourceSettings:
+ endpointRef: mq
+ dataSources:
+ - thermostats/+/telemetry/temperature/#
+ - humidifiers/+/telemetry/humidity/#
+ - operationType: Destination
+ destinationSettings:
+ endpointRef: adx
+ dataDestination: database-name
+```
+
+For more information about dataflow destination settings, see [Create a dataflow](howto-create-dataflow.md).
+
+> [!NOTE]
+> Using the Azure Data Explorer endpoint as a source in a dataflow isn't supported. You can use the endpoint as a destination only.
+
+To customize the endpoint settings, see the following sections for more information.
+
+### Available authentication methods
+
+The following authentication methods are available for Azure Data Explorer endpoints. For more information about enabling secure settings by configuring an Azure Key Vault and enabling workload identities, see [Enable secure settings in Azure IoT Operations Preview deployment](../deploy-iot-ops/howto-enable-secure-settings.md).
+
+#### System-assigned managed identity
+
+Using the system-assigned managed identity is the recommended authentication method for Azure IoT Operations. Azure IoT Operations creates the managed identity automatically and assigns it to the Azure Arc-enabled Kubernetes cluster. It eliminates the need for secret management and allows for seamless authentication with Azure Data Explorer.
+
+Before you create the dataflow endpoint, assign a role to the managed identity that grants permission to write to the Azure Data Explorer database. For more information on adding permissions, see [Manage Azure Data Explorer cluster permissions](/azure/data-explorer/manage-cluster-permissions).
+
+In the *DataflowEndpoint* resource, specify the managed identity authentication method. In most cases, you don't need to specify other settings. This configuration creates a managed identity with the default audience `https://api.kusto.windows.net`.
+
+```yaml
+dataExplorerSettings:
+ authentication:
+ method: SystemAssignedManagedIdentity
+ systemAssignedManagedIdentitySettings: {}
+```
+
+If you need to override the system-assigned managed identity audience, you can specify the `audience` setting.
+
+```yaml
+dataExplorerSettings:
+ authentication:
+ method: SystemAssignedManagedIdentity
+ systemAssignedManagedIdentitySettings:
+ audience: https://<audience URL>
+```
+
+#### User-assigned managed identity
+
+To use a user-assigned managed identity, specify the `UserAssignedManagedIdentity` authentication method and provide the `clientId` and `tenantId` of the managed identity.
+
+```yaml
+dataExplorerSettings:
+ authentication:
+ method: UserAssignedManagedIdentity
+ userAssignedManagedIdentitySettings:
+ clientId: <ID>
+ tenantId: <ID>
+```
+
+## Advanced settings
+
+You can set advanced settings for the Azure Data Explorer endpoint, such as the batching latency and message count.
+
+Use the `batching` settings to configure the maximum number of messages and the maximum latency before the messages are sent to the destination. This setting is useful when you want to optimize for network bandwidth and reduce the number of requests to the destination.
+
+| Field | Description | Required |
+| -- | -- | -- |
+| `latencySeconds` | The maximum number of seconds to wait before sending the messages to the destination. The default value is 60 seconds. | No |
+| `maxMessages` | The maximum number of messages to send to the destination. The default value is 100000 messages. | No |
+
+For example, to configure the maximum number of messages to 1000 and the maximum latency to 100 seconds, use the following settings:
+
+Set the values in the dataflow endpoint custom resource.
+
+```yaml
+dataExplorerSettings:
+ batching:
+ latencySeconds: 100
+ maxMessages: 1000
+```
iot-operations Howto Configure Dataflow Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/connect-to-cloud/howto-configure-dataflow-endpoint.md
Previously updated : 08/03/2024 Last updated : 09/17/2024 #CustomerIntent: As an operator, I want to understand how to configure source and destination endpoints so that I can create a dataflow.
Last updated 08/03/2024
[!INCLUDE [public-preview-note](../includes/public-preview-note.md)]
-To get started with dataflows, you need to configure endpoints. An endpoint is the connection point for the dataflow. You can use an endpoint as a source or destination for the dataflow. Some endpoint types can be used as [both sources and destinations](#endpoint-types-for-use-as-sources-and-destinations), while others are for [destinations only](#endpoint-type-for-destinations-only). A dataflow needs at least one source endpoint and one destination endpoint.
+To get started with dataflows, first create dataflow endpoints. A dataflow endpoint is the connection point for the dataflow. You can use an endpoint as a source or destination for the dataflow. Some endpoint types can be used as both sources and destinations, while others are for destinations only. A dataflow needs at least one source endpoint and one destination endpoint.
-The following example shows a custom resource definition with all of the configuration options. The required fields are dependent on the endpoint type. Review the sections for each endpoint type for configuration guidance.
+## Get started
-```yaml
-apiVersion: connectivity.iotoperations.azure.com/v1beta1
-kind: DataflowEndpoint
-metadata:
- name: <endpoint-name>
-spec:
- endpointType: <endpointType> # mqtt, kafka, or localStorage
- authentication:
- method: <method> # systemAssignedManagedIdentity, x509Credentials, userAssignedManagedIdentity, or serviceAccountToken
- systemAssignedManagedIdentitySettings: # Required if method is systemAssignedManagedIdentity
- audience: https://eventgrid.azure.net
- ### OR
- # x509CredentialsSettings: # Required if method is x509Credentials
- # certificateSecretName: x509-certificate
- ### OR
- # userAssignedManagedIdentitySettings: # Required if method is userAssignedManagedIdentity
- # clientId: <id>
- # tenantId: <id>
- # audience: https://eventgrid.azure.net
- ### OR
- # serviceAccountTokenSettings: # Required if method is serviceAccountToken
- # audience: my-audience
- mqttSettings: # Required if endpoint type is mqtt
- host: example.westeurope-1.ts.eventgrid.azure.net:8883
- tls: # Omit for no TLS or MQTT.
- mode: <mode> # enabled or disabled
- trustedCaCertificateConfigMap: ca-certificates
- sharedSubscription:
- groupMinimumShareNumber: 3 # Required if shared subscription is enabled.
- groupName: group1 # Required if shared subscription is enabled.
- clientIdPrefix: <prefix>
- retain: keep
- sessionExpirySeconds: 3600
- qos: 1
- protocol: mqtt
- maxInflightMessages: 100
-```
+To get started, use the following table to choose the endpoint type to configure:
-| Name | Description |
-|-|--|
-| `endpointType` | Type of the endpoint. Values: `mqtt`, `kafka`, `dataExplorer`, `dataLakeStorage`, `fabricOneLake`, or `localStorage`. |
-| `authentication.method` | Method of authentication. Values: `systemAssignedManagedIdentity`, `x509Credentials`, `userAssignedManagedIdentity`, or `serviceAccountToken`. |
-| `authentication.systemAssignedManagedIdentitySettings.audience` | Audience of the service to authenticate against. Defaults to `https://eventgrid.azure.net`. |
-| `authentication.x509CredentialsSettings.certificateSecretName` | Secret name of the X.509 certificate. |
-| `authentication.userAssignedManagedIdentitySettings.clientId` | Client ID for the user-assigned managed identity. |
-| `authentication.userAssignedManagedIdentitySettings.tenantId` | Tenant ID. |
-| `authentication.userAssignedManagedIdentitySettings.audience` | Audience of the service to authenticate against. Defaults to `https://eventgrid.azure.net`. |
-| `authentication.serviceAccountTokenSettings.audience` | Audience of the service account. Optional, defaults to the broker internal service account audience. |
-| `mqttSettings.host` | Host of the MQTT broker in the form of \<hostname\>:\<port\>. Connects to MQTT broker if omitted.|
-| `mqttSettings.tls` | TLS configuration. Omit for no TLS or MQTT broker. |
-| `mqttSettings.tls.mode` | Enable or disable TLS. Values: `enabled` or `disabled`. Defaults to `disabled`. |
-| `mqttSettings.tls.trustedCaCertificateConfigMap` | Trusted certificate authority (CA) certificate config map. No CA certificate if omitted. No CA certificate works for public endpoints like Azure Event Grid.|
-| `mqttSettings.sharedSubscription` | Shared subscription settings. No shared subscription if omitted. |
-| `mqttSettings.sharedSubscription.groupMinimumShareNumber` | Number of clients to use for shared subscription. |
-| `mqttSettings.sharedSubscription.groupName` | Shared subscription group name. |
-| `mqttSettings.clientIdPrefix` | Client ID prefix. Client ID generated by the dataflow is \<prefix\>-id. No prefix if omitted.|
-| `mqttSettings.retain` | Whether or not to keep the retain setting. Values: `keep` or `never`. Defaults to `keep`. |
-| `mqttSettings.sessionExpirySeconds` | Session expiry in seconds. Defaults to `3600`.|
-| `mqttSettings.qos` | Quality of service. Values: `0` or `1`. Defaults to `1`.|
-| `mqttSettings.protocol` | Use MQTT or web sockets. Values: `mqtt` or `websockets`. Defaults to `mqtt`.|
-| `mqttSettings.maxInflightMessages` | The maximum number of messages to keep in flight. For subscribe, it's the receive maximum. For publish, it's the maximum number of messages to send before waiting for an acknowledgment. Default is `100`. |
+| Endpoint type | Description | Can be used as a source | Can be used as a destination |
+||-|-||
+| [MQTT](howto-configure-mqtt-endpoint.md) | For bi-directional messaging with MQTT brokers, including the broker built into Azure IoT Operations and Azure Event Grid. | Yes | Yes |
+| [Kafka](howto-configure-kafka-endpoint.md) | For bi-directional messaging with Kafka brokers, including Azure Event Hubs. | Yes | Yes |
+| [Data Lake](howto-configure-adlsv2-endpoint.md) | For uploading data to Azure Data Lake Gen2 storage accounts. | No | Yes |
+| [Microsoft Fabric OneLake](howto-configure-fabric-endpoint.md) | For uploading data to Microsoft Fabric OneLake lakehouses. | No | Yes |
+| [Local storage](howto-configure-local-storage-endpoint.md) | For sending data to a locally available persistent volume, through which you can upload data via Edge Storage Accelerator edge volumes. | No | Yes |
-## Endpoint types for use as sources and destinations
+## Reuse endpoints
-The following endpoint types are used as sources and destinations.
+Think of each dataflow endpoint as a bundle of configuration settings: where the data comes from or goes to (the `host` value), how to authenticate with the endpoint, and other settings like TLS configuration or batching preferences. Create an endpoint once, and then you can reuse it in multiple dataflows that share these settings.
-### MQTT
+To make it easier to reuse endpoints, the MQTT or Kafka topic filter isn't part of the endpoint configuration. Instead, you specify the topic filter in the dataflow configuration. This means you can use the same endpoint for multiple dataflows that use different topic filters.
-MQTT endpoints are used for MQTT sources and destinations. You can configure the endpoint, Transport Layer Security (TLS), authentication, and other settings.
+For example, you can use the default MQTT broker dataflow endpoint as both the source and the destination of a dataflow, with different topic filters:
-#### MQTT broker
+# [Portal](#tab/portal)
-To configure an MQTT broker endpoint with default settings, you can omit the host field, along with other optional fields. This configuration allows you to connect to the default MQTT broker without any extra configuration in a durable way, no matter how the broker changes.
-```yaml
-apiVersion: connectivity.iotoperations.azure.com/v1beta1
-kind: DataflowEndpoint
-metadata:
- name: mq
-spec:
- endpointType: mqtt
- authentication:
- method: serviceAccountToken
- serviceAccountTokenSettings:
- audience: aio-mq
- mqttSettings:
- {}
-```
-
-#### Event Grid
-
-To configure an Azure Event Grid MQTT broker endpoint, use managed identity for authentication.
+# [Kubernetes](#tab/kubernetes)
```yaml
apiVersion: connectivity.iotoperations.azure.com/v1beta1
-kind: DataflowEndpoint
+kind: Dataflow
metadata:
- name: eventgrid
-spec:
- endpointType: mqtt
- authentication:
- method: systemAssignedManagedIdentity
- systemAssignedManagedIdentitySettings:
- audience: "https://eventgrid.azure.net"
- mqttSettings:
- host: example.westeurope-1.ts.eventgrid.azure.net:8883
- tls:
- mode: Enabled
-```
-
-#### Other MQTT brokers
-
-For other MQTT brokers, you can configure the endpoint, TLS, authentication, and other settings as needed.
-
-```yaml
-spec:
- endpointType: mqtt
- authentication:
- ...
- mqttSettings:
- host: example.mqttbroker.com:8883
- tls:
- mode: Enabled
- trustedCaCertificateConfigMap: <your CA certificate config map>
-```
-
-Under `authentication`, you can configure the authentication method for the MQTT broker. Supported methods include X.509:
-
-```yaml
-authentication:
- method: x509Credentials
- x509CredentialsSettings:
- certificateSecretName: <your x509 secret name>
-```
-
-> [!IMPORTANT]
-> When you use X.509 authentication with an Event Grid MQTT broker, go to the Event Grid namespace > **Configuration** and check these settings:
->
-> - **Enable MQTT**: Select the checkbox.
-> - **Enable alternative client authentication name sources**: Select the checkbox.
-> - **Certificate Subject Name**: Select this option in the dropdown list.
-> - **Maximum client sessions per authentication name**: Set to **3** or more.
->
-> The alternative client authentication and maximum client sessions options allow dataflows to use client certificate subject name for authentication instead of `MQTT CONNECT Username`. This capability is important so that dataflows can spawn multiple instances and still be able to connect. To learn more, see [Event Grid MQTT client certificate authentication](../../event-grid/mqtt-client-certificate-authentication.md) and [Multi-session support](../../event-grid/mqtt-establishing-multiple-sessions-per-client.md).
-
-System-assigned managed identity:
-
-```yaml
-authentication:
- method: systemAssignedManagedIdentity
- systemAssignedManagedIdentitySettings:
- # Audience of the service to authenticate against
- # Optional; defaults to the audience for Event Grid MQTT Broker
- audience: https://eventgrid.azure.net
-```
-
-User-assigned managed identity:
-
-```yaml
-authentication:
- method: userAssignedManagedIdentity
- userAssignedManagedIdentitySettings:
- clientId: <id>
- tenantId: <id>
-```
-
-Kubernetes SAT:
-
-```yaml
-authentication:
- method: serviceAccountToken
- serviceAccountTokenSettings:
- audience: <your service account audience>
-```
-
-You can also configure shared subscriptions, QoS, MQTT version, client ID prefix, keep alive, clean session, session expiry, retain, and other settings.
-
-```yaml
+ name: mq-to-mq
+ namespace: azure-iot-operations
spec:
- endpointType: mqtt
- mqttSettings:
- sharedSubscription:
- groupMinimumShareNumber: 3
- groupName: group1
- qos: 1
- mqttVersion: v5
- clientIdPrefix: dataflow
- keepRetain: enabled
+ profileRef: default
+ operations:
+ - operationType: Source
+ sourceSettings:
+ endpointRef: mq
+ dataSources:
+ - example/topic/1
+ - operationType: Destination
+ destinationSettings:
+ endpointRef: mq
+ dataDestination: example/topic/2
```
-### Kafka
-
-Kafka endpoints are used for Kafka sources and destinations. You can configure the endpoint, TLS, authentication, and other settings.
+
-#### Azure Event Hubs
+Similarly, you can create multiple dataflows that pair the same MQTT endpoint with other endpoints and topics. For example, you can use the same MQTT endpoint for a dataflow that sends data to a Kafka endpoint.
-To configure an Azure Event Hubs Kafka, we recommend that you use managed identity for authentication.
+# [Portal](#tab/portal)
-```yaml
-apiVersion: connectivity.iotoperations.azure.com/v1beta1
-kind: DataflowEndpoint
-metadata:
- name: kafka
-spec:
- endpointType: kafka
- authentication:
- method: systemAssignedManagedIdentity
- systemAssignedManagedIdentitySettings:
- audience: <your Event Hubs namespace>.servicebus.windows.net
- kafkaSettings:
- host: <your Event Hubs namespace>.servicebus.windows.net:9093
- tls:
- mode: Enabled
- consumerGroupId: mqConnector
-```
-#### Other Kafka brokers
-
-For example, to configure a Kafka endpoint, set the host, TLS, authentication, and other settings as needed.
+# [Kubernetes](#tab/kubernetes)
```yaml
apiVersion: connectivity.iotoperations.azure.com/v1beta1
-kind: DataflowEndpoint
+kind: Dataflow
metadata:
- name: kafka
+ name: mq-to-kafka
+ namespace: azure-iot-operations
spec:
- endpointType: kafka
- authentication:
- ...
- kafkaSettings:
- host: example.kafka.com:9093
- tls:
- mode: Enabled
- consumerGroupId: mqConnector
-```
-
-Under `authentication`, you can configure the authentication method for the Kafka broker. Supported methods include SASL, X.509, system-assigned managed identity, and user-assigned managed identity.
-
-```yaml
-authentication:
- method: sasl
- saslSettings:
- saslType: PLAIN
- tokenSecretName: <your token secret name>
- # OR
- method: x509Credentials
- x509CredentialsSettings:
- certificateSecretName: <your x509 secret name>
- # OR
- method: systemAssignedManagedIdentity
- systemAssignedManagedIdentitySettings:
- audience: https://<your Event Hubs namespace>.servicebus.windows.net
- # OR
- method: userAssignedManagedIdentity
- userAssignedManagedIdentitySettings:
- clientId: <id>
- tenantId: <id>
+ profileRef: default
+ operations:
+ - operationType: Source
+ sourceSettings:
+ endpointRef: mq
+ dataSources:
+ - example/topic/3
+ - operationType: Destination
+ destinationSettings:
+ endpointRef: example-kafka-endpoint
+ dataDestination: example/topic/4
```
-### Configure settings specific to source endpoints
-
-For Kafka endpoints, you can configure settings specific for using the endpoint as a source. These settings have no effect if the endpoint is used as a destination.
-
-```yaml
-spec:
- endpointType: kafka
- kafkaSettings:
- consumerGroupId: fromMq
-```
-
-### Configure settings specific to destination endpoints
-
-For Kafka endpoints, you can configure settings specific for using the endpoint as a destination. These settings have no effect if the endpoint is used as a source.
-
-```yaml
-spec:
- endpointType: kafka
- kafkaSettings:
- compression: gzip
- batching:
- latencyMs: 100
- maxBytes: 1000000
- maxMessages: 1000
- partitionStrategy: static
- kafkaAcks: all
- copyMqttProperties: enabled
-```
-
-> [!IMPORTANT]
-> By default, data flows don't send MQTT message user properties to Kafka destinations. These user properties include values such as `subject` that stores the name of the asset sending the message. To include user properties in the Kafka message, you must update the `DataflowEndpoint` configuration to include `copyMqttProperties: enabled`.
-
-## Endpoint type for destinations only
-
-The following endpoint type is used for destinations only.
-
-### Local storage and Edge Storage Accelerator
-
-Use the local storage option to send data to a locally available persistent volume, through which you can upload data via Edge Storage Accelerator edge volumes. In this case, the format must be Parquet.
+
-```yaml
-apiVersion: connectivity.iotoperations.azure.com/v1beta1
-kind: DataflowEndpoint
-metadata:
- name: esa
-spec:
- endpointType: localStorage
- localStorageSettings:
- persistentVolumeClaimRef: <your PVC name>
-```
+Similar to the MQTT example, you can create multiple dataflows that use the same Kafka endpoint for different topics, or the same Data Lake endpoint for different tables.
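+
+For example, here's a minimal sketch of that pattern. It assumes a Kafka dataflow endpoint named `example-kafka-endpoint` already exists; the dataflow names and topics are placeholders. Both dataflows reference the same Kafka endpoint and differ only in the topics they use:
+
+```yaml
+apiVersion: connectivity.iotoperations.azure.com/v1beta1
+kind: Dataflow
+metadata:
+  name: mq-to-kafka-topic-a
+  namespace: azure-iot-operations
+spec:
+  profileRef: default
+  operations:
+    - operationType: Source
+      sourceSettings:
+        endpointRef: mq
+        dataSources:
+          - example/topic/a
+    - operationType: Destination
+      destinationSettings:
+        endpointRef: example-kafka-endpoint
+        dataDestination: kafka-topic-a
+---
+apiVersion: connectivity.iotoperations.azure.com/v1beta1
+kind: Dataflow
+metadata:
+  name: mq-to-kafka-topic-b
+  namespace: azure-iot-operations
+spec:
+  profileRef: default
+  operations:
+    - operationType: Source
+      sourceSettings:
+        endpointRef: mq
+        dataSources:
+          - example/topic/b
+    - operationType: Destination
+      destinationSettings:
+        endpointRef: example-kafka-endpoint
+        dataDestination: kafka-topic-b
+```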
iot-operations Howto Configure Dataflow Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/connect-to-cloud/howto-configure-dataflow-profile.md
Previously updated : 08/03/2024 Last updated : 08/29/2024 #CustomerIntent: As an operator, I want to understand how to I can configure a a dataflow profile to control a dataflow behavior.
Last updated 08/03/2024
By default, when you deploy Azure IoT Operations, a dataflow profile is created with default settings. You can configure the dataflow profile to suit your needs.
+<!-- TODO: link to reference docs -->
+
+## Default dataflow profile
+
+By default, a dataflow profile named "default" is created when Azure IoT Operations is deployed.
+
```yaml
apiVersion: connectivity.iotoperations.azure.com/v1beta1
kind: DataflowProfile
metadata:
- name: my-dataflow-profile
+ name: default
+ namespace: azure-iot-operations
spec:
  instanceCount: 1
- tolerations:
- ...
- diagnostics:
- logFormat: text
- logLevel: info
- metrics:
- mode: enabled
- cacheTimeoutSeconds: 600
- exportIntervalSeconds: 10
- prometheusPort: 9600
- updateIntervalSeconds: 30
- traces:
- mode: enabled
- cacheSizeMegabytes: 16
- exportIntervalSeconds: 10
- openTelemetryCollectorAddress: null
- selfTracing:
- mode: enabled
- frequencySeconds: 30
- spanChannelCapacity: 100
```
-| Field Name | Description |
-|-|--|
-| `instanceCount` | Number of instances to spread the dataflow across. Optional; automatically determined if not set. Currently in the preview release, set the value to `1`. |
-| `tolerations` | Node tolerations. Optional; see [Kubernetes Taints and Tolerations](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/). |
-| `diagnostics` | Diagnostics settings. |
-| `diagnostics.logFormat` | Format of the logs. For example, `text`. |
-| `diagnostics.logLevel` | Log level. For example, `info`, `debug`, `error`. Optional; defaults to `info`. |
-| `diagnostics.metrics` | Metrics settings. |
-| `diagnostics.metrics.mode` | Mode for metrics. For example, `enabled`. |
-| `diagnostics.metrics.cacheTimeoutSeconds` | Cache timeout for metrics in seconds. |
-| `diagnostics.metrics.exportIntervalSeconds` | Export interval for metrics in seconds. |
-| `diagnostics.metrics.prometheusPort` | Port for Prometheus metrics. |
-| `diagnostics.metrics.updateIntervalSeconds` | Update interval for metrics in seconds. |
-| `diagnostics.traces` | Traces settings. |
-| `diagnostics.traces.mode` | Mode for traces. For example, `enabled`. |
-| `diagnostics.traces.cacheSizeMegabytes` | Cache size for traces in megabytes. |
-| `diagnostics.traces.exportIntervalSeconds` | Export interval for traces in seconds. |
-| `diagnostics.traces.openTelemetryCollectorAddress` | Address for the OpenTelemetry collector. |
-| `diagnostics.traces.selfTracing` | Self-tracing settings. |
-| `diagnostics.traces.selfTracing.mode` | Mode for self-tracing. For example, `enabled`. |
-| `diagnostics.traces.selfTracing.frequencySeconds`| Frequency for self-tracing in seconds. |
-| `diagnostics.traces.spanChannelCapacity` | Capacity of the span channel. |
-
-## Default settings
-
-The default settings for a dataflow profile are:
-
-* Instances: (null)
-* Log level: Info
-* Node tolerations: None
-* Diagnostic settings: None
+In most cases, you don't need to change the default settings. However, you can create additional dataflow profiles and configure them as needed.
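+
+For example, here's a minimal sketch of an additional profile that dataflows can reference through `profileRef`. The profile name `my-second-profile` is only an illustration:
+
+```yaml
+apiVersion: connectivity.iotoperations.azure.com/v1beta1
+kind: DataflowProfile
+metadata:
+  name: my-second-profile
+  namespace: azure-iot-operations
+spec:
+  instanceCount: 1
+```
+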
## Scaling
To manually scale the dataflow profile, specify the maximum number of instances
```yaml
spec:
- maxInstances: 3
+ instanceCount: 3
```

If not specified, Azure IoT Operations automatically scales the dataflow profile based on the dataflow configuration. The number of instances is determined by the number of dataflows and the shared subscription configuration.

+> [!IMPORTANT]
+> Currently in public preview, adjusting the instance count might result in message loss. At this time, we recommend that you don't adjust the instance count.
+
## Configure log level, node tolerations, diagnostic settings, and other deployment-wide settings

You can configure other deployment-wide settings such as log level, node tolerations, and diagnostic settings.
iot-operations Howto Configure Fabric Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/connect-to-cloud/howto-configure-fabric-endpoint.md
+
+ Title: Configure dataflow endpoints for Microsoft Fabric OneLake
+description: Learn how to configure dataflow endpoints for Microsoft Fabric OneLake in Azure IoT Operations.
++++ Last updated : 10/02/2024
+ai-usage: ai-assisted
+
+#CustomerIntent: As an operator, I want to understand how to configure dataflow endpoints for Microsoft Fabric OneLake in Azure IoT Operations so that I can send data to Microsoft Fabric OneLake.
++
+# Configure dataflow endpoints for Microsoft Fabric OneLake
++
+To send data to Microsoft Fabric OneLake in Azure IoT Operations Preview, you can configure a dataflow endpoint. This configuration allows you to specify the destination endpoint, authentication method, table, and other settings.
+
+## Prerequisites
+
+- An instance of [Azure IoT Operations Preview](../deploy-iot-ops/howto-deploy-iot-operations.md)
+- A [configured dataflow profile](howto-configure-dataflow-profile.md)
+- **Microsoft Fabric OneLake**. See the following steps to create a workspace and lakehouse.
+ - [Create a workspace](/fabric/get-started/create-workspaces). The default *my workspace* isn't supported.
+ - [Create a lakehouse](/fabric/onelake/create-lakehouse-onelake).
+
+## Create a Microsoft Fabric OneLake dataflow endpoint
+
+To configure a dataflow endpoint for Microsoft Fabric OneLake, we suggest using the managed identity of the Azure Arc-enabled Kubernetes cluster. This approach is secure and eliminates the need for secret management.
+
+# [Kubernetes](#tab/kubernetes)
+
+1. Get the managed identity of the Azure IoT Operations Preview Arc extension.
+
+1. In the Microsoft Fabric workspace you created, select **Manage access** > **+ Add people or groups**.
+
+1. Search for the Azure IoT Operations Preview Arc extension by its name, and select the app ID GUID value that you found in the previous step.
+
+1. Select **Contributor** as the role, then select **Add**.
+
+1. Create the *DataflowEndpoint* resource and specify the managed identity authentication method.
+
+ ```yaml
+ apiVersion: connectivity.iotoperations.azure.com/v1beta1
+ kind: DataflowEndpoint
+ metadata:
+ name: fabric
+ spec:
+ endpointType: FabricOneLake
+ fabricOneLakeSettings:
+ # The default Fabric OneLake host URL in most cases
+ host: https://onelake.dfs.fabric.microsoft.com
+ oneLakePathType: Tables
+ authentication:
+ method: SystemAssignedManagedIdentity
+ systemAssignedManagedIdentitySettings: {}
+ names:
+ workspaceName: <EXAMPLE-WORKSPACE-NAME>
+ lakehouseName: <EXAMPLE-LAKEHOUSE-NAME>
+ ```
+
+# [Bicep](#tab/bicep)
+
+This Bicep template file from [Bicep File for Microsoft Fabric OneLake dataflow Tutorial](https://gist.github.com/david-emakenemi/289a167c8fa393d3a7dce274a6eb21eb) deploys the necessary resources for dataflows to Fabric OneLake.
+
+1. Download the file to your local machine, and make sure to replace the values for `customLocationName`, `aioInstanceName`, `schemaRegistryName`, `opcuaSchemaName`, and `persistentVCName`.
+
+1. Next, deploy the resources using the [az stack group](/azure/azure-resource-manager/bicep/deployment-stacks?tabs=azure-powershell) command in your terminal:
+
+```azurecli
+az stack group create --name MyDeploymentStack --resource-group $RESOURCE_GROUP --template-file /workspaces/explore-iot-operations/<filename>.bicep --action-on-unmanage 'deleteResources' --deny-settings-mode 'none' --yes
+```
+
+This endpoint is the destination for the dataflow that sends messages to Fabric OneLake.
+
+```bicep
+resource oneLakeEndpoint 'Microsoft.IoTOperations/instances/dataflowEndpoints@2024-08-15-preview' = {
+ parent: aioInstance
+ name: 'onelake-ep'
+ extendedLocation: {
+ name: customLocation.id
+ type: 'CustomLocation'
+ }
+ properties: {
+ endpointType: 'FabricOneLake'
+ fabricOneLakeSettings: {
+ authentication: {
+ method: 'SystemAssignedManagedIdentity'
+ systemAssignedManagedIdentitySettings: {}
+ }
+ oneLakePathType: 'Tables'
+      host: 'https://onelake.dfs.fabric.microsoft.com'
+ names: {
+ lakehouseName: '<EXAMPLE-LAKEHOUSE-NAME>'
+ workspaceName: '<EXAMPLE-WORKSPACE-NAME>'
+ }
+ batching: {
+ latencySeconds: 5
+ maxMessages: 10000
+ }
+ }
+ }
+}
+```
++
+## Configure dataflow destination
+
+Once the endpoint is created, you can use it in a dataflow by specifying the endpoint name in the dataflow's destination settings.
+
+# [Kubernetes](#tab/kubernetes)
+
+```yaml
+apiVersion: connectivity.iotoperations.azure.com/v1beta1
+kind: Dataflow
+metadata:
+ name: my-dataflow
+ namespace: azure-iot-operations
+spec:
+ profileRef: default
+ mode: Enabled
+ operations:
+ - operationType: Source
+ sourceSettings:
+ endpointRef: mq
+      dataSources:
+        - "*"
+ - operationType: Destination
+ destinationSettings:
+ endpointRef: fabric
+```
+
+To customize the endpoint settings, see the following sections for more information.
+
+### Fabric OneLake host URL
+
+Use the `host` setting to specify the Fabric OneLake host URL. Usually, it's `https://onelake.dfs.fabric.microsoft.com`.
+
+```yaml
+fabricOneLakeSettings:
+ host: https://onelake.dfs.fabric.microsoft.com
+```
+
+However, if this host value doesn't work and you're not getting data, check the URL in the **Properties** of one of the precreated lakehouse folders.
+
+![Screenshot of properties shortcut menu to get lakehouse URL.](media/howto-configure-fabric-endpoint/lakehouse-name.png)
+
+The host value should look like `https://xyz.dfs.fabric.microsoft.com`.
+
+To learn more, see [Connecting to Microsoft OneLake](/fabric/onelake/onelake-access-api).
+
+### OneLake path type
+
+Use the `oneLakePathType` setting to specify the type of path in the Fabric OneLake. The default value is `Tables`, which is used for the Tables folder in the lakehouse typically in Delta Parquet format.
+
+```yaml
+fabricOneLakeSettings:
+ oneLakePathType: Tables
+```
+
+Another possible value is `Files`. Use this value for the Files folder in the lakehouse, which is unstructured and can be in any format.
+
+```yaml
+fabricOneLakeSettings:
+ oneLakePathType: Files
+```
+
+# [Bicep](#tab/bicep)
+
+```bicep
+resource dataflow_onelake 'Microsoft.IoTOperations/instances/dataflowProfiles/dataflows@2024-08-15-preview' = {
+ parent: defaultDataflowProfile
+ name: 'dataflow-onelake3'
+ extendedLocation: {
+ name: customLocation.id
+ type: 'CustomLocation'
+ }
+ properties: {
+ mode: 'Enabled'
+ operations: [
+ {
+ operationType: 'Source'
+ sourceSettings: {
+ endpointRef: defaultDataflowEndpoint.name
+ dataSources: array('azure-iot-operations/data/thermostat')
+ }
+ }
+ {
+ operationType: 'BuiltInTransformation'
+ builtInTransformationSettings: {
+ map: [
+ {
+ inputs: array('*')
+ output: '*'
+ }
+ ]
+ schemaRef: 'aio-sr://${opcuaSchemaName}:${opcuaSchemaVer}'
+ serializationFormat: 'Delta' // Can also be 'Parquet'
+ }
+ }
+ {
+ operationType: 'Destination'
+ destinationSettings: {
+ endpointRef: oneLakeEndpoint.name
+ dataDestination: 'opc'
+ }
+ }
+ ]
+ }
+}
+```
+
+The `BuiltInTransformation` in this Bicep file transforms the data flowing through the dataflow pipeline. It applies a pass-through operation, mapping all input fields (`inputs: array('*')`) directly to the output (`output: '*'`) without altering the data.
+
+It also references the defined OPC UA schema to ensure the data is structured according to the OPC UA protocol. The transformation then serializes the data in Delta format (or Parquet if specified).
+
+This step ensures that the data adheres to the required schema and format before being sent to the destination.
+++
+For more information about dataflow destination settings, see [Create a dataflow](howto-create-dataflow.md).
+
+> [!NOTE]
+> Using the Fabric OneLake dataflow endpoint as a source in a dataflow isn't supported. You can use the endpoint as a destination only.
+
+### Available authentication methods
+
+The following authentication methods are available for Microsoft Fabric OneLake dataflow endpoints. For more information about enabling secure settings by configuring an Azure Key Vault and enabling workload identities, see [Enable secure settings in Azure IoT Operations Preview deployment](../deploy-iot-ops/howto-enable-secure-settings.md).
+
+#### System-assigned managed identity
+
+Using the system-assigned managed identity is the recommended authentication method for Azure IoT Operations. Azure IoT Operations creates the managed identity automatically and assigns it to the Azure Arc-enabled Kubernetes cluster. It eliminates the need for secret management and allows for seamless authentication with Microsoft Fabric OneLake.
+
+Before you create the dataflow endpoint, assign a role to the managed identity that grants permission to write to the Fabric lakehouse. To learn more, see [Give access to a workspace](/fabric/get-started/give-access-workspaces).
+
+# [Kubernetes](#tab/kubernetes)
+
+In the *DataflowEndpoint* resource, specify the managed identity authentication method. In most cases, you don't need to specify other settings. This configuration creates a managed identity with the default audience.
+
+```yaml
+fabricOneLakeSettings:
+ authentication:
+ method: SystemAssignedManagedIdentity
+ systemAssignedManagedIdentitySettings:
+ {}
+```
+
+If you need to override the system-assigned managed identity audience, you can specify the `audience` setting.
+
+```yaml
+fabricOneLakeSettings:
+ authentication:
+ method: SystemAssignedManagedIdentity
+ systemAssignedManagedIdentitySettings:
+ audience: https://contoso.onelake.dfs.fabric.microsoft.com
+```
+
+# [Bicep](#tab/bicep)
+
+```bicep
+fabricOneLakeSettings: {
+ authentication: {
+ method: 'SystemAssignedManagedIdentity'
+ systemAssignedManagedIdentitySettings: {
+ audience: 'https://contoso.onelake.dfs.fabric.microsoft.com'
+ }
+ }
+ ...
+ }
+```
+++
+#### User-assigned managed identity
+
+# [Kubernetes](#tab/kubernetes)
+
+To use a user-assigned managed identity, specify the `UserAssignedManagedIdentity` authentication method and provide the `clientId` and `tenantId` of the managed identity.
+
+```yaml
+fabricOneLakeSettings:
+ authentication:
+ method: UserAssignedManagedIdentity
+ userAssignedManagedIdentitySettings:
+ clientId: <ID>
+ tenantId: <ID>
+```
+
+# [Bicep](#tab/bicep)
+
+```bicep
+fabricOneLakeSettings: {
+ authentication: {
+ method: 'UserAssignedManagedIdentity'
+ userAssignedManagedIdentitySettings: {
+ clientId: '<clientId>'
+ tenantId: '<tenantId>'
+ }
+ }
+ ...
+ }
+```
+++
+## Advanced settings
+
+You can set advanced settings for the Fabric OneLake endpoint, such as the batching latency and message count. You can set these settings in the dataflow endpoint **Advanced** portal tab or within the dataflow endpoint custom resource.
+
+Use the `batching` settings to configure the maximum number of messages and the maximum latency before the messages are sent to the destination. This setting is useful when you want to optimize for network bandwidth and reduce the number of requests to the destination.
+
+| Field | Description | Required |
+| -- | -- | -- |
+| `latencySeconds` | The maximum number of seconds to wait before sending the messages to the destination. The default value is 60 seconds. | No |
+| `maxMessages` | The maximum number of messages to send to the destination. The default value is 100000 messages. | No |
+
+For example, to configure the maximum number of messages to 1000 and the maximum latency to 100 seconds, use the following settings:
+
+# [Kubernetes](#tab/kubernetes)
+
+Set the values in the dataflow endpoint custom resource.
+
+```yaml
+fabricOneLakeSettings:
+ batching:
+ latencySeconds: 100
+ maxMessages: 1000
+```
+
+# [Bicep](#tab/bicep)
+
+In the Bicep file, the batching values are set in the dataflow endpoint resource.
+
+<!-- TODO Add a way for users to override the file with values using the az stack group command >
+
+```bicep
+batching: {
+ latencySeconds: 5
+ maxMessages: 10000
+}
+```
+-->
+
iot-operations Howto Configure Kafka Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/connect-to-cloud/howto-configure-kafka-endpoint.md
+
+ Title: Configure Azure Event Hubs and Kafka dataflow endpoints in Azure IoT Operations
+description: Learn how to configure dataflow endpoints for Kafka in Azure IoT Operations.
++++ Last updated : 10/02/2024
+ai-usage: ai-assisted
+
+#CustomerIntent: As an operator, I want to understand how to configure dataflow endpoints for Kafka in Azure IoT Operations so that I can send data to and from Kafka endpoints.
++
+# Configure Azure Event Hubs and Kafka dataflow endpoints
++
+To set up bi-directional communication between Azure IoT Operations Preview and Apache Kafka brokers, you can configure a dataflow endpoint. This configuration allows you to specify the endpoint, Transport Layer Security (TLS), authentication, and other settings.
+
+## Prerequisites
+
+- An instance of [Azure IoT Operations Preview](../deploy-iot-ops/howto-deploy-iot-operations.md)
+- A [configured dataflow profile](howto-configure-dataflow-profile.md)
+
+## Create a Kafka dataflow endpoint
+
+To create a dataflow endpoint for Kafka, you need to specify the Kafka broker host, authentication method, TLS settings, and other settings. You can use the endpoint as a source or destination in a dataflow. When used with Azure Event Hubs, you can use managed identity for authentication, which eliminates the need to manage secrets.
+
+### Azure Event Hubs
+
+[Azure Event Hubs is compatible with the Kafka protocol](../../event-hubs/azure-event-hubs-kafka-overview.md) and can be used with dataflows with some limitations.
+
+If you're using Azure Event Hubs, create an Azure Event Hubs namespace and a Kafka-enabled event hub for each Kafka topic.
+
+To configure a dataflow endpoint for a Kafka endpoint, we suggest using the managed identity of the Azure Arc-enabled Kubernetes cluster. This approach is secure and eliminates the need for secret management.
+
+# [Portal](#tab/portal)
+
+1. In the operations experience portal, select the **Dataflow endpoints** tab.
+1. Under **Create new dataflow endpoint**, select **Azure Event Hubs** > **New**.
+
+ :::image type="content" source="media/howto-configure-kafka-endpoint/create-event-hubs-endpoint.png" alt-text="Screenshot using operations experience portal to create an Azure Event Hubs dataflow endpoint.":::
+
+1. Enter the following settings for the endpoint:
+
+ | Setting | Description |
+ | -- | - |
+ | Name | The name of the dataflow endpoint. |
+ | Host | The hostname of the Kafka broker in the format `<HOST>.servicebus.windows.net`. |
+ | Authentication method| The method used for authentication. Choose *System assigned managed identity*, *User assigned managed identity*, or *SASL*. |
+ | SASL type | The type of SASL authentication. Choose *Plain*, *ScramSha256*, or *ScramSha512*. Required if using *SASL*. |
+ | Synced secret name | The name of the secret. Required if using *SASL* or *X509*. |
+ | Username reference of token secret | The reference to the username in the SASL token secret. Required if using *SASL*. |
+
+# [Kubernetes](#tab/kubernetes)
+
+1. Get the managed identity of the Azure IoT Operations Arc extension.
+1. Assign the managed identity to the Event Hubs namespace with the `Azure Event Hubs Data Sender` or `Azure Event Hubs Data Receiver` role.
+1. Create the *DataflowEndpoint* resource and specify the managed identity authentication method.
+
+ ```yaml
+ apiVersion: connectivity.iotoperations.azure.com/v1beta1
+ kind: DataflowEndpoint
+ metadata:
+ name: eventhubs
+ namespace: azure-iot-operations
+ spec:
+ endpointType: Kafka
+ kafkaSettings:
+ host: <HOST>.servicebus.windows.net:9093
+ authentication:
+ method: SystemAssignedManagedIdentity
+ systemAssignedManagedIdentitySettings: {}
+ tls:
+ mode: Enabled
+ consumerGroupId: mqConnector
+ ```
+
+The Kafka topic, or individual event hub, is configured later when you create the dataflow. The Kafka topic is the destination for the dataflow messages.
+
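+For example, here's a minimal sketch of a dataflow that reads from the MQTT broker endpoint named `mq` (as in the other examples in these articles) and sends messages to an event hub through the `eventhubs` endpoint created previously. The dataflow name, topic filter, and event hub name are placeholders:
+
+```yaml
+apiVersion: connectivity.iotoperations.azure.com/v1beta1
+kind: Dataflow
+metadata:
+  name: mq-to-eventhubs
+  namespace: azure-iot-operations
+spec:
+  profileRef: default
+  mode: Enabled
+  operations:
+    - operationType: Source
+      sourceSettings:
+        endpointRef: mq
+        dataSources:
+          - thermostats/+/telemetry/#
+    - operationType: Destination
+      destinationSettings:
+        endpointRef: eventhubs
+        dataDestination: <EVENT-HUB-NAME>
+```
+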
+#### Use connection string for authentication to Event Hubs
+
+To use a connection string for authentication to Event Hubs, update the `authentication` section of the Kafka settings to use the `Sasl` method. Set `saslType` to `Plain` and set `secretRef` to the name of the secret that contains the connection string.
+
+```yaml
+spec:
+ kafkaSettings:
+ authentication:
+ method: Sasl
+ saslSettings:
+ saslType: Plain
+ secretRef: <YOUR-TOKEN-SECRET-NAME>
+ tls:
+ mode: Enabled
+```
+
+In the example, the `secretRef` is the name of the secret that contains the connection string. The secret must be in the same namespace as the Kafka dataflow resource. The secret must have both the username and password as key-value pairs. For example:
+
+```bash
+kubectl create secret generic cs-secret -n azure-iot-operations \
+ --from-literal=username='$ConnectionString' \
+ --from-literal=password='Endpoint=sb://<NAMESPACE>.servicebus.windows.net/;SharedAccessKeyName=<KEY-NAME>;SharedAccessKey=<KEY>'
+```
+> [!TIP]
+> Scoping the connection string to the namespace (as opposed to individual event hubs) allows a dataflow to send and receive messages from multiple different event hubs and Kafka topics.
+++
+#### Limitations
+
+Azure Event Hubs [doesn't support all the compression types that Kafka supports](../../event-hubs/azure-event-hubs-kafka-overview.md#compression). Only GZIP compression is supported. Using other compression types might result in errors.
+
+### Other Kafka brokers
+
+To configure a dataflow endpoint for non-Event-Hub Kafka brokers, set the host, TLS, authentication, and other settings as needed.
+
+# [Portal](#tab/portal)
+
+1. In the operations experience portal, select the **Dataflow endpoints** tab.
+1. Under **Create new dataflow endpoint**, select **Custom Kafka Broker** > **New**.
+
+ :::image type="content" source="media/howto-configure-kafka-endpoint/create-kafka-endpoint.png" alt-text="Screenshot using operations experience portal to create a Kafka dataflow endpoint.":::
+
+1. Enter the following settings for the endpoint:
+
+ | Setting | Description |
+ | -- | - |
+ | Name | The name of the dataflow endpoint. |
+   | Host | The hostname of the Kafka broker in the format `<HOSTNAME>:<PORT>`. |
+ | Authentication method| The method used for authentication. Choose *System assigned managed identity*, *User assigned managed identity*, *SASL*, or *X509 certificate*. |
+ | SASL type | The type of SASL authentication. Choose *Plain*, *ScramSha256*, or *ScramSha512*. Required if using *SASL*. |
+ | Synced secret name | The name of the secret. Required if using *SASL* or *X509*. |
+ | Username reference of token secret | The reference to the username in the SASL token secret. Required if using *SASL*. |
+ | X509 client certificate | The X.509 client certificate used for authentication. Required if using *X509*. |
+ | X509 intermediate certificates | The intermediate certificates for the X.509 client certificate chain. Required if using *X509*. |
+ | X509 client key | The private key corresponding to the X.509 client certificate. Required if using *X509*. |
+
+1. Select **Apply** to provision the endpoint.
+
+> [!NOTE]
+> Currently, the operations experience portal doesn't support using a Kafka dataflow endpoint as a source. You can create a dataflow with a source Kafka dataflow endpoint by using a Kubernetes custom resource or Bicep.
+
+# [Kubernetes](#tab/kubernetes)
+
+```yaml
+apiVersion: connectivity.iotoperations.azure.com/v1beta1
+kind: DataflowEndpoint
+metadata:
+ name: kafka
+ namespace: azure-iot-operations
+spec:
+ endpointType: Kafka
+ kafkaSettings:
+ host: <KAFKA-HOST>:<PORT>
+ authentication:
+ method: Sasl
+ saslSettings:
+ saslType: ScramSha256
+ secretRef: <YOUR-TOKEN-SECRET-NAME>
+ tls:
+ mode: Enabled
+ consumerGroupId: mqConnector
+```
+++
+## Use the endpoint in a dataflow source or destination
+
+Once the endpoint is created, you can use it in a dataflow by specifying the endpoint name in the dataflow's source or destination settings.
+
+# [Portal](#tab/portal)
+
+1. In the Azure IoT Operations Preview portal, create a new dataflow or edit an existing dataflow by selecting the **Dataflows** tab. If creating a new dataflow, select **Create dataflow** and replace `<new-dataflow>` with a name for the dataflow.
+1. In the editor, select the source endpoint. Kafka endpoints can be used as both source and destination. Currently, you can only use the portal to create a dataflow with a Kafka endpoint as a destination. Use a Kubernetes custom resource or Bicep to create a dataflow with a Kafka endpoint as a source.
+1. Choose the Kafka dataflow endpoint that you created previously.
+1. Specify the Kafka topic where messages are sent.
+
+ :::image type="content" source="media/howto-configure-kafka-endpoint/dataflow-mq-kafka.png" alt-text="Screenshot using operations experience portal to create a dataflow with an MQTT source and Azure Event Hubs destination.":::
+
+# [Kubernetes](#tab/kubernetes)
+
+```yaml
+apiVersion: connectivity.iotoperations.azure.com/v1beta1
+kind: Dataflow
+metadata:
+ name: my-dataflow
+ namespace: azure-iot-operations
+spec:
+ profileRef: default
+ mode: Enabled
+ operations:
+ - operationType: Source
+ sourceSettings:
+ endpointRef: mq
+      dataSources:
+        - "*"
+ - operationType: Destination
+ destinationSettings:
+ endpointRef: kafka
+```
+++
+For more information about dataflow destination settings, see [Create a dataflow](howto-create-dataflow.md).
+
+To customize the endpoint settings, see the following sections for more information.
+
+### Available authentication methods
+
+The following authentication methods are available for Kafka broker dataflow endpoints. For more information about enabling secure settings by configuring an Azure Key Vault and enabling workload identities, see [Enable secure settings in Azure IoT Operations Preview deployment](../deploy-iot-ops/howto-enable-secure-settings.md).
+
+#### SASL
+
+# [Portal](#tab/portal)
+
+In the operations experience portal dataflow endpoint settings page, select the **Basic** tab then choose **Authentication method** > **SASL**.
+
+Enter the following settings for the endpoint:
+
+| Setting | Description |
+| | - |
+| SASL type | The type of SASL authentication to use. Supported types are `Plain`, `ScramSha256`, and `ScramSha512`. |
+| Synced secret name | The name of the Kubernetes secret that contains the SASL token. |
+| Username reference of token secret | The reference to the username in the SASL token secret. |
+| Password reference of token secret | The reference to the password in the SASL token secret. |
+
+# [Kubernetes](#tab/kubernetes)
+
+To use SASL for authentication, update the `authentication` section of the Kafka settings to use the `Sasl` method. In `saslSettings`, set `saslType` to the SASL type and set `secretRef` to the name of the secret that contains the SASL token.
+
+```yaml
+kafkaSettings:
+ authentication:
+ method: Sasl
+ saslSettings:
+ saslType: Plain
+ secretRef: <YOUR-TOKEN-SECRET-NAME>
+```
+
+The supported SASL types are:
+
+- `Plain`
+- `ScramSha256`
+- `ScramSha512`
+
+The secret must be in the same namespace as the Kafka dataflow resource. The secret must have the SASL token as a key-value pair. For example:
+
+```bash
+kubectl create secret generic sasl-secret -n azure-iot-operations \
+ --from-literal=token='your-sasl-token'
+```
+++
+#### X.509
+
+# [Portal](#tab/portal)
+
+In the operations experience portal dataflow endpoint settings page, select the **Basic** tab then choose **Authentication method** > **X509 certificate**.
+
+Enter the following settings for the endpoint:
+
+| Setting | Description |
+| | - |
+| Synced secret name | The name of the secret. |
+| X509 client certificate | The X.509 client certificate used for authentication. |
+| X509 intermediate certificates | The intermediate certificates for the X.509 client certificate chain. |
+| X509 client key | The private key corresponding to the X.509 client certificate. |
+
+# [Kubernetes](#tab/kubernetes)
+
+To use X.509 for authentication, update the `authentication` section of the Kafka settings to use the `X509Certificate` method. In `x509CertificateSettings`, set `secretRef` to the name of the secret that contains the X.509 certificate.
+
+```yaml
+kafkaSettings:
+ authentication:
+ method: X509Certificate
+ x509CertificateSettings:
+ secretRef: <YOUR-TOKEN-SECRET-NAME>
+```
+
+The secret must be in the same namespace as the Kafka dataflow resource. Use Kubernetes TLS secret containing the public certificate and private key. For example:
+
+```bash
+kubectl create secret tls my-tls-secret -n azure-iot-operations \
+ --cert=path/to/cert/file \
+ --key=path/to/key/file
+```
+++
+#### System-assigned managed identity
+
+To use system-assigned managed identity for authentication, first assign a role to the Azure IoT Operation managed identity that grants permission to send and receive messages from Event Hubs, such as Azure Event Hubs Data Owner or Azure Event Hubs Data Sender/Receiver. To learn more, see [Authenticate an application with Microsoft Entra ID to access Event Hubs resources](../../event-hubs/authenticate-application.md#built-in-roles-for-azure-event-hubs).
+
+# [Portal](#tab/portal)
+
+In the operations experience portal dataflow endpoint settings page, select the **Basic** tab then choose **Authentication method** > **System assigned managed identity**.
+
+# [Kubernetes](#tab/kubernetes)
+
+Update the `authentication` section of the DataflowEndpoint Kafka settings to use the `SystemAssignedManagedIdentity` method. In most cases, you can set the `systemAssignedManagedIdentitySettings` with an empty object.
+
+```yaml
+kafkaSettings:
+ authentication:
+ method: SystemAssignedManagedIdentity
+ systemAssignedManagedIdentitySettings:
+ {}
+```
+
+This sets the audience to the default value, which is the same as the Event Hubs namespace host value in the form of `https://<NAMESPACE>.servicebus.windows.net`. However, if you need to override the default audience, you can set the `audience` field to the desired value. The audience is the resource that the managed identity is requesting access to. For example:
+
+```yaml
+kafkaSettings:
+ authentication:
+ method: SystemAssignedManagedIdentity
+ systemAssignedManagedIdentitySettings:
+ audience: <YOUR-AUDIENCE-OVERRIDE-VALUE>
+```
+++
+#### User-assigned managed identity
+
+# [Portal](#tab/portal)
+
+In the operations experience portal dataflow endpoint settings page, select the **Basic** tab then choose **Authentication method** > **User assigned managed identity**.
+
+Enter the user assigned managed identity client ID and tenant ID in the appropriate fields.
+
+# [Kubernetes](#tab/kubernetes)
+
+To use a user-assigned managed identity, specify the `UserAssignedManagedIdentity` authentication method.
++
+```yaml
+kafkaSettings:
+ authentication:
+ method: UserAssignedManagedIdentity
+ userAssignedManagedIdentitySettings:
+ {}
+```
+
+<!-- TODO: Add link to WLIF docs -->
+++
+#### Anonymous
+
+To use anonymous authentication, update the `authentication` section of the Kafka settings to use the `Anonymous` method.
+
+```yaml
+kafkaSettings:
+ authentication:
+ method: Anonymous
+ anonymousSettings:
+ {}
+```
+
+## Advanced settings
+
+You can set advanced settings for the Kafka dataflow endpoint such as TLS, trusted CA certificate, Kafka messaging settings, batching, and CloudEvents. You can set these settings in the dataflow endpoint **Advanced** portal tab or within the dataflow endpoint custom resource.
+
+# [Portal](#tab/portal)
+
+In the operations experience portal, select the **Advanced** tab for the dataflow endpoint.
++
+| Setting | Description |
+| | - |
+| Consumer group ID | The ID of the consumer group for the Kafka endpoint. The consumer group ID is used to identify the consumer group that the dataflow uses to read messages from the Kafka topic. The consumer group ID must be unique within the Kafka broker. |
+| Compression | The compression type used for messages sent to Kafka topics. Supported types are `None`, `Gzip`, `Snappy`, and `Lz4`. Compression helps to reduce the network bandwidth and storage space required for data transfer. However, compression also adds some overhead and latency to the process. This setting takes effect only if the endpoint is used as a destination where the dataflow is a producer. |
+| Copy MQTT properties | Whether to copy MQTT message properties to Kafka message headers. For more information, see [Copy MQTT properties](#copy-mqtt-properties). |
+| Kafka acknowledgement | The level of acknowledgement requested from the Kafka broker. Supported values are `None`, `All`, `One`, and `Zero`. For more information, see [Kafka acknowledgements](#kafka-acknowledgements). |
+| Partition handling strategy | The partition handling strategy controls how messages are assigned to Kafka partitions when sending them to Kafka topics. Supported values are `Default`, `Static`, `Topic`, and `Property`. For more information, see [Partition handling strategy](#partition-handling-strategy). |
+| TLS mode enabled | Enables TLS for the Kafka endpoint. |
+| Trusted CA certificate config map | The ConfigMap containing the trusted CA certificate for the Kafka endpoint. This ConfigMap should contain the CA certificate in PEM format. The ConfigMap must be in the same namespace as the Kafka dataflow resource. For more information, see [Trusted CA certificate](#trusted-ca-certificate). |
+| Batching enabled | Enables batching. Batching allows you to group multiple messages together and compress them as a single unit, which can improve the compression efficiency and reduce the network overhead. This setting takes effect only if the endpoint is used as a destination where the dataflow is a producer. |
+| Batching latency | The maximum time interval in milliseconds that messages can be buffered before being sent. If this interval is reached, then all buffered messages are sent as a batch, regardless of how many or how large they are. |
+| Maximum bytes | The maximum size in bytes that can be buffered before being sent. If this size is reached, then all buffered messages are sent as a batch, regardless of how many they are or how long they are buffered. |
+| Message count | The maximum number of messages that can be buffered before being sent. If this number is reached, then all buffered messages are sent as a batch, regardless of how large they are or how long they are buffered. |
+| Cloud event attributes | The CloudEvents attributes to include in the Kafka messages. |
+
+# [Kubernetes](#tab/kubernetes)
+
+### TLS settings
+
+Under `kafkaSettings.tls`, you can configure additional settings for the TLS connection to the Kafka broker.
+
+#### TLS mode
+
+To enable or disable TLS for the Kafka endpoint, update the `mode` setting in the TLS settings. For example:
+
+```yaml
+kafkaSettings:
+ tls:
+ mode: Enabled
+```
+
+The TLS mode can be set to `Enabled` or `Disabled`. If the mode is set to `Enabled`, the dataflow uses a secure connection to the Kafka broker. If the mode is set to `Disabled`, the dataflow uses an insecure connection to the Kafka broker.
+
+#### Trusted CA certificate
+
+To configure the trusted CA certificate for the Kafka endpoint, update the `trustedCaCertificateConfigMapRef` setting in the TLS settings. For example:
+
+```yaml
+kafkaSettings:
+ tls:
+ trustedCaCertificateConfigMapRef: <YOUR-CA-CERTIFICATE>
+```
+
+This ConfigMap should contain the CA certificate in PEM format. The ConfigMap must be in the same namespace as the Kafka dataflow resource. For example:
+
+```bash
+kubectl create configmap client-ca-configmap --from-file root_ca.crt -n azure-iot-operations
+```
+
+This setting is important if the Kafka broker uses a self-signed certificate or a certificate signed by a custom CA that isn't trusted by default.
+
+However in the case of Azure Event Hubs, the CA certificate isn't required because the Event Hubs service uses a certificate signed by a public CA that is trusted by default.
+
+### Kafka messaging settings
+
+Under `kafkaSettings`, you can configure additional settings for the Kafka endpoint.
+
+#### Consumer group ID
+
+To configure the consumer group ID for the Kafka endpoint, update the `consumerGroupId` setting in the Kafka settings. For example:
+
+```yaml
+spec:
+ kafkaSettings:
+ consumerGroupId: fromMq
+```
+
+The consumer group ID is used to identify the consumer group that the dataflow uses to read messages from the Kafka topic. The consumer group ID must be unique within the Kafka broker.
+
+<!-- TODO: check for accuracy -->
+
+This setting takes effect only if the endpoint is used as a source (that is, the dataflow is a consumer).
+
+#### Compression
+
+To configure the compression type for the Kafka endpoint, update the `compression` setting in the Kafka settings. For example:
+
+```yaml
+kafkaSettings:
+ compression: Gzip
+```
+
+The compression field enables compression for the messages sent to Kafka topics. Compression helps to reduce the network bandwidth and storage space required for data transfer. However, compression also adds some overhead and latency to the process. The supported compression types are listed in the following table.
+
+| Value | Description |
+| -- | -- |
+| `None` | No compression or batching is applied. None is the default value if no compression is specified. |
+| `Gzip` | GZIP compression and batching are applied. GZIP is a general-purpose compression algorithm that offers a good balance between compression ratio and speed. GZIP is the only compression method supported by Azure Event Hubs. |
+| `Snappy` | Snappy compression and batching are applied. Snappy is a fast compression algorithm that offers moderate compression ratio and speed. |
+| `Lz4` | LZ4 compression and batching are applied. LZ4 is a fast compression algorithm that offers low compression ratio and high speed. |
+
+This setting takes effect only if the endpoint is used as a destination where the dataflow is a producer.
+
+#### Batching
+
+Aside from compression, you can also configure batching for messages before sending them to Kafka topics. Batching allows you to group multiple messages together and compress them as a single unit, which can improve the compression efficiency and reduce the network overhead.
+
+| Field | Description | Required |
+| -- | -- | -- |
+| `mode` | Enable batching or not. If not set, the default value is Enabled because Kafka doesn't have a notion of *unbatched* messaging. If set to Disabled, the batching is minimized to create a batch with a single message each time. | No |
+| `latencyMs` | The maximum time interval in milliseconds that messages can be buffered before being sent. If this interval is reached, then all buffered messages are sent as a batch, regardless of how many or how large they are. If not set, the default value is 5. | No |
+| `maxMessages` | The maximum number of messages that can be buffered before being sent. If this number is reached, then all buffered messages are sent as a batch, regardless of how large they are or how long they are buffered. If not set, the default value is 100000. | No |
+| `maxBytes` | The maximum size in bytes that can be buffered before being sent. If this size is reached, then all buffered messages are sent as a batch, regardless of how many they are or how long they are buffered. The default value is 1000000 (1 MB). | No |
+
+An example of using batching is:
+
+```yaml
+kafkaSettings:
+ batching:
+ enabled: true
+ latencyMs: 1000
+ maxMessages: 100
+ maxBytes: 1024
+```
+
+In the example, messages are sent either when there are 100 messages in the buffer, or when there are 1,024 bytes in the buffer, or when 1,000 milliseconds elapse since the last send, whichever comes first.
+
+This setting takes effect only if the endpoint is used as a destination where the dataflow is a producer.
+
+#### Partition handling strategy
+
+The partition handling strategy controls how messages are assigned to Kafka partitions when sending them to Kafka topics. Kafka partitions are logical segments of a Kafka topic that enable parallel processing and fault tolerance. Each message in a Kafka topic has a partition and an offset, which are used to identify and order the messages.
+
+This setting takes effect only if the endpoint is used as a destination where the dataflow is a producer.
+
+By default, a dataflow assigns messages to partitions using a round-robin algorithm. However, you can use different strategies to assign messages to partitions based on criteria such as the MQTT topic name or an MQTT message property. These strategies can help you achieve better load balancing, data locality, or message ordering.
+
+| Value | Description |
+| -- | -- |
+| `Default` | Assigns messages to partitions using a round-robin algorithm. This is the default value if no strategy is specified. |
+| `Static` | Assigns messages to a fixed partition number that is derived from the instance ID of the dataflow. This means that each dataflow instance sends messages to a different partition. This can help to achieve better load balancing and data locality. |
+| `Topic` | Uses the MQTT topic name from the dataflow source as the key for partitioning. This means that messages with the same MQTT topic name are sent to the same partition. This can help to achieve better message ordering and data locality. |
+| `Property` | Uses an MQTT message property from the dataflow source as the key for partitioning. Specify the name of the property in the `partitionKeyProperty` field. This means that messages with the same property value are sent to the same partition. This can help to achieve better message ordering and data locality based on a custom criterion. |
+
+An example of using partition handling strategy is:
+
+```yaml
+kafkaSettings:
+ partitionStrategy: Property
+ partitionKeyProperty: device-id
+```
+
+In this example, messages with the same `device-id` property value are sent to the same partition.
+
+#### Kafka acknowledgements
+
+Kafka acknowledgements (acks) are used to control the durability and consistency of messages sent to Kafka topics. When a producer sends a message to a Kafka topic, it can request different levels of acknowledgements from the Kafka broker to ensure that the message is successfully written to the topic and replicated across the Kafka cluster.
+
+This setting takes effect only if the endpoint is used as a destination (that is, the dataflow is a producer).
+
+| Value | Description |
+| -- | -- |
+| `None` | The dataflow doesn't wait for any acknowledgements from the Kafka broker. This is the fastest but least durable option. |
+| `All` | The dataflow waits for the message to be written to the leader partition and all follower partitions. This is the slowest but most durable option. This is also the default option. |
+| `One` | The dataflow waits for the message to be written to the leader partition and at least one follower partition. |
+| `Zero` | The dataflow waits for the message to be written to the leader partition but doesn't wait for any acknowledgements from the followers. This is faster than `One` but less durable. |
+
+An example of using Kafka acknowledgements is:
+
+```yaml
+kafkaSettings:
+ kafkaAcks: All
+```
+
+This means that the dataflow waits for the message to be written to the leader partition and all follower partitions.
+
+#### Copy MQTT properties
+
+By default, the copy MQTT properties setting is enabled. These user properties include values such as `subject`, which stores the name of the asset that sent the message.
+
+```yaml
+kafkaSettings:
+ copyMqttProperties: Enabled
+```
+
+To disable copying MQTT properties, set the value to `Disabled`.
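+
+For example:
+
+```yaml
+kafkaSettings:
+  copyMqttProperties: Disabled
+```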
+
+The following sections describe how MQTT properties are translated to Kafka user headers and vice versa when the setting is enabled.
+
+##### Kafka endpoint is a destination
+
+When a Kafka endpoint is a dataflow destination, all properties defined in the MQTT v5 specification are translated to Kafka user headers. For example, an MQTT v5 message with "Content Type" that's forwarded to Kafka translates into the Kafka **user header** `"Content Type":{specifiedValue}`. Similar rules apply to other built-in MQTT properties, as defined in the following table.
+
+| MQTT Property | Translated Behavior |
+|--|--|
+| Payload Format Indicator | Key: "Payload Format Indicator" <BR> Value: "0" (Payload is bytes) or "1" (Payload is UTF-8) |
+| Response Topic | Key: "Response Topic" <BR> Value: Copy of Response Topic from the original message. |
+| Message Expiry Interval | Key: "Message Expiry Interval" <BR> Value: UTF-8 representation of the number of seconds before the message expires. For more information, see [The Message Expiry Interval property](#the-message-expiry-interval-property). |
+| Correlation Data | Key: "Correlation Data" <BR> Value: Copy of Correlation Data from the original message. Unlike many MQTT v5 properties that are UTF-8 encoded, correlation data can be arbitrary data. |
+| Content Type | Key: "Content Type" <BR> Value: Copy of Content Type from the original message. |
+
+MQTT v5 user property key-value pairs are directly translated to Kafka user headers. If a user property in a message has the same name as a built-in MQTT property (for example, a user property named "Correlation Data"), then whether the built-in MQTT v5 property value or the user property is forwarded is undefined.
+
+Dataflows never receive these properties from an MQTT Broker. Thus, a dataflow never forwards them:
+
+* Topic Alias
+* Subscription Identifiers
+
+###### The Message Expiry Interval property
+
+The [Message Expiry Interval](https://docs.oasis-open.org/mqtt/mqtt/v5.0/os/mqtt-v5.0-os.html#_Toc3901112) specifies how long a message can remain in an MQTT broker before being discarded.
+
+When a dataflow receives an MQTT message with the Message Expiry Interval specified, it:
+
+* Records the time the message was received.
+* Before the message is emitted to the destination, the time the message has been queued is subtracted from the original expiry interval.
+  * If the message hasn't expired (the result is > 0), the message is emitted to the destination with the updated Message Expiry Interval.
+  * If the message has expired (the result is <= 0), the message isn't emitted to the destination.
+
+Examples:
+
+* A dataflow receives an MQTT message with Message Expiry Interval = 3600 seconds. The corresponding destination is temporarily disconnected but is able to reconnect. 1,000 seconds pass before this MQTT message is sent to the destination. In this case, the destination's message has its Message Expiry Interval set to 2600 (3600 - 1000) seconds.
+* The dataflow receives an MQTT message with Message Expiry Interval = 3600 seconds. The corresponding destination is temporarily disconnected but is able to reconnect. In this case, however, it takes 4,000 seconds to reconnect. The message expired, and the dataflow doesn't forward it to the destination.
+
+##### Kafka endpoint is a dataflow source
+
+> [!NOTE]
+> There's a known issue when using the Event Hubs endpoint as a dataflow source where the Kafka header gets corrupted as it's translated to MQTT. This only happens when you use Event Hubs through an Event Hubs client, which uses AMQP under the covers. For instance, for "foo"="bar", the "foo" key is translated, but the value becomes "\xa1\x03bar".
+
+When a Kafka endpoint is a dataflow source, Kafka user headers are translated to MQTT v5 properties. The following table describes how Kafka user headers are translated to MQTT v5 properties.
++
+| Kafka Header | Translated Behavior |
+|--|--|
+| Key | Key: "Key" <BR> Value: Copy of the Key from the original message. |
+| Timestamp | Key: "Timestamp" <BR> Value: UTF-8 encoding of the Kafka Timestamp, which is the number of milliseconds since the Unix epoch. |
+
+Kafka user header key/value pairs - provided they're all encoded in UTF-8 - are directly translated into MQTT user key/value properties.
+
+###### UTF-8 / Binary Mismatches
+
+MQTT v5 can only support UTF-8 based properties. If a dataflow receives a Kafka message that contains one or more non-UTF-8 headers, the dataflow will:
+
+* Remove the offending property or properties.
+* Forward the rest of the message on, following the previous rules.
+
+Applications that need to transfer binary data in Kafka source headers to MQTT destination properties must first encode the data as UTF-8, for example by using Base64.
+
+###### >=64KB property mismatches
+
+MQTT v5 properties must be smaller than 64 KB. If a dataflow receives a Kafka message that contains one or more headers that are 64 KB or larger, the dataflow will:
+
+* Remove the offending property or properties.
+* Forward the rest of the message on, following the previous rules.
+
+###### Property translation when using Event Hubs and producers that use AMQP
+
+A client that forwards messages to a Kafka dataflow source endpoint might be doing any of the following actions:
+
+- Sending messages to Event Hubs using client libraries such as *Azure.Messaging.EventHubs*
+- Using AMQP directly
+
+In these cases, there are property translation nuances to be aware of.
+
+You should do one of the following:
+
+- Avoid sending properties
+- If you must send properties, send values encoded as UTF-8.
+
+When Event Hubs translates properties from AMQP to Kafka, it includes the underlying AMQP encoded types in its message. For more information on the behavior, see [Exchanging Events Between Consumers and Producers Using Different Protocols](https://github.com/Azure/azure-event-hubs-for-kafka/tree/master/tutorials/interop).
+
+In the following code example, when the dataflow endpoint receives the value `"foo":"bar"`, it receives the property as `<0xA1 0x03 "bar">`.
+
+```csharp
+using global::Azure.Messaging.EventHubs;
+using global::Azure.Messaging.EventHubs.Producer;
+
+// The connection string and event hub name are placeholders for your Event Hubs namespace.
+await using var producerClient = new EventHubProducerClient("<EVENT-HUBS-CONNECTION-STRING>", "<EVENT-HUB-NAME>");
+using EventDataBatch eventBatch = await producerClient.CreateBatchAsync();
+
+var propertyEventBody = new BinaryData("payload");
+
+// Add a user property named "foo" with the string value "bar".
+var propertyEventData = new EventData(propertyEventBody)
+{
+    Properties =
+    {
+        {"foo", "bar"},
+    }
+};
+
+var propertyEventAdded = eventBatch.TryAdd(propertyEventData);
+await producerClient.SendAsync(eventBatch);
+```
+
+The dataflow endpoint can't forward the property value `<0xA1 0x03 "bar">` to an MQTT message because the data isn't UTF-8. However, if you specify a UTF-8 string, the dataflow endpoint translates the string before sending it on to MQTT, and the MQTT message contains `"foo":"bar"` as a user property.
+
+Only UTF-8 headers are translated. For example, given the following scenario where the property is set as a float:
+
+```csharp
+Properties =
+{
+ {"float-value", 11.9 },
+}
+```
+
+The dataflow endpoint discards packets that contain the `"float-value"` field.
+
+Not all event data properties are forwarded. For example, `propertyEventData.correlationId` isn't forwarded. For more information, see [Event User Properties](https://github.com/Azure/azure-event-hubs-for-kafka/tree/master/tutorials/interop#event-user-properties).
+
+### CloudEvents
+
+[CloudEvents](https://cloudevents.io/) are a way to describe event data in a common way. The CloudEvents settings are used to send or receive messages in the CloudEvents format. You can use CloudEvents for event-driven architectures where different services need to communicate with each other in the same or different cloud providers.
+
+The `CloudEventAttributes` options are `Propagate` or `CreateOrRemap`. For example:
+
+```yaml
+mqttSettings:
+ CloudEventAttributes: Propagate # or CreateOrRemap
+```
+
+#### Propagate setting
+
+CloudEvent properties are passed through for messages that contain the required properties. If the message doesn't contain the required properties, the message is passed through as is. If the required properties are present, a `ce-` prefix is added to the CloudEvent property name.
+
+| Name | Required | Sample value | Output name | Output value |
+| -- | -- | -- | -- | -- |
+| `specversion` | Yes | `1.0` | `ce-specversion` | Passed through as is |
+| `type` | Yes | `ms.aio.telemetry` | `ce-type` | Passed through as is |
+| `source` | Yes | `aio://mycluster/myoven` | `ce-source` | Passed through as is |
+| `id` | Yes | `A234-1234-1234` | `ce-id` | Passed through as is |
+| `subject` | No | `aio/myoven/telemetry/temperature` | `ce-subject` | Passed through as is |
+| `time` | No | `2018-04-05T17:31:00Z` | `ce-time` | Passed through as is. It's not restamped. |
+| `datacontenttype` | No | `application/json` | `ce-datacontenttype` | Changed to the output data content type after the optional transform stage. |
+| `dataschema` | No | `sr://fabrikam-schemas/123123123234234234234234#1.0.0` | `ce-dataschema` | If an output data transformation schema is given in the transformation configuration, `dataschema` is changed to the output schema. |
+
+#### CreateOrRemap setting
+
+CloudEvent properties are passed through for messages that contain the required properties. If the message doesn't contain the required properties, the properties are generated.
+
+| Name | Required | Output name | Generated value if missing |
+| -- | -- | -- | -- |
+| `specversion` | Yes | `ce-specversion` | `1.0` |
+| `type` | Yes | `ce-type` | `ms.aio-dataflow.telemetry` |
+| `source` | Yes | `ce-source` | `aio://<target-name>` |
+| `id` | Yes | `ce-id` | Generated UUID in the target client |
+| `subject` | No | `ce-subject` | The output topic where the message is sent |
+| `time` | No | `ce-time` | Generated as RFC 3339 in the target client |
+| `datacontenttype` | No | `ce-datacontenttype` | Changed to the output data content type after the optional transform stage |
+| `dataschema` | No | `ce-dataschema` | Schema defined in the schema registry |
++
iot-operations Howto Configure Local Storage Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/connect-to-cloud/howto-configure-local-storage-endpoint.md
+
+ Title: Configure local storage dataflow endpoint in Azure IoT Operations
+description: Learn how to configure a local storage dataflow endpoint in Azure IoT Operations.
++++ Last updated : 10/02/2024
+ai-usage: ai-assisted
+
+#CustomerIntent: As an operator, I want to understand how to configure a local storage dataflow endpoint so that I can create a dataflow.
++
+# Configure dataflow endpoints for local storage
++
+To send data to local storage in Azure IoT Operations Preview, you can configure a dataflow endpoint. This configuration allows you to specify the endpoint, authentication, table, and other settings.
+
+## Prerequisites
+
+- An instance of [Azure IoT Operations Preview](../deploy-iot-ops/howto-deploy-iot-operations.md)
+- A [configured dataflow profile](howto-configure-dataflow-profile.md)
+- A [PersistentVolumeClaim (PVC)](https://kubernetes.io/docs/concepts/storage/persistent-volumes/)
+
+## Create a local storage dataflow endpoint
+
+Use the local storage option to send data to a locally available persistent volume, through which you can upload data via Edge Storage Accelerator edge volumes.
+
+# [Kubernetes](#tab/kubernetes)
+
+```yaml
+apiVersion: connectivity.iotoperations.azure.com/v1beta1
+kind: DataflowEndpoint
+metadata:
+ name: esa
+ namespace: azure-iot-operations
+spec:
+ endpointType: localStorage
+ localStorageSettings:
+ persistentVolumeClaimRef: <PVC-NAME>
+```
+
+The PersistentVolumeClaim (PVC) must be in the same namespace as the *DataflowEndpoint*.
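+
+For reference, the following is a minimal sketch of a PersistentVolumeClaim in the `azure-iot-operations` namespace. The claim name, storage size, and storage class are placeholders; use the values that your storage provider requires.
+
+```yaml
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: <PVC-NAME>
+  namespace: azure-iot-operations
+spec:
+  accessModes:
+    - ReadWriteOnce
+  resources:
+    requests:
+      storage: 10Gi
+  # storageClassName: <STORAGE-CLASS-NAME> # placeholder for your storage class
+```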
+
+# [Bicep](#tab/bicep)
+
+This Bicep template file from [Bicep File for local storage dataflow Tutorial](https://gist.github.com/david-emakenemi/52377e32af1abd0efe41a5da27190a10) deploys the necessary resources for dataflows to local storage.
+
+Download the file to your local machine, and make sure to replace the values for `customLocationName`, `aioInstanceName`, `schemaRegistryName`, `opcuaSchemaName`, and `persistentVCName`.
+
+Next, deploy the resources using the [az stack group](/azure/azure-resource-manager/bicep/deployment-stacks?tabs=azure-powershell) command in your terminal:
+
+```azurecli
+az stack group create --name MyDeploymentStack --resource-group $RESOURCE_GROUP --template-file /workspaces/explore-iot-operations/<filename>.bicep --action-on-unmanage 'deleteResources' --deny-settings-mode 'none' --yes
+```
+This endpoint is the destination for the dataflow that sends messages to local storage.
+
+```bicep
+resource localStorageDataflowEndpoint 'Microsoft.IoTOperations/instances/dataflowEndpoints@2024-08-15-preview' = {
+ parent: aioInstance
+ name: 'local-storage-ep'
+ extendedLocation: {
+ name: customLocation.id
+ type: 'CustomLocation'
+ }
+ properties: {
+ endpointType: 'LocalStorage'
+ localStorageSettings: {
+ persistentVolumeClaimRef: persistentVCName
+ }
+ }
+}
+```
++
+## Configure dataflow destination
+
+Once the endpoint is created, you can use it in a dataflow by specifying the endpoint name in the dataflow's destination settings.
+
+# [Kubernetes](#tab/kubernetes)
+
+```yaml
+apiVersion: connectivity.iotoperations.azure.com/v1beta1
+kind: Dataflow
+metadata:
+ name: my-dataflow
+ namespace: azure-iot-operations
+spec:
+ profileRef: default
+ mode: Enabled
+ operations:
+ - operationType: Source
+ sourceSettings:
+ endpointRef: mq
+ dataSources:
+        - "*"
+ - operationType: Destination
+ destinationSettings:
+ endpointRef: esa
+```
+
+# [Bicep](#tab/bicep)
+
+```bicep
+{
+ operationType: 'Destination'
+ destinationSettings: {
+ endpointRef: localStorageDataflowEndpoint.name
+ dataDestination: 'sensorData'
+ }
+}
+```
++
+For more information about dataflow destination settings, see [Create a dataflow](howto-create-dataflow.md).
+
+> [!NOTE]
+> Using the local storage endpoint as a source in a dataflow isn't supported. You can use the endpoint as a destination only.
++
+## Supported serialization formats
+
+The only supported serialization format is Parquet.
iot-operations Howto Configure Mqtt Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/connect-to-cloud/howto-configure-mqtt-endpoint.md
+
+ Title: Configure MQTT dataflow endpoints in Azure IoT Operations
+description: Learn how to configure dataflow endpoints for MQTT sources and destinations.
++++ Last updated : 10/02/2024
+ai-usage: ai-assisted
+
+#CustomerIntent: As an operator, I want to understand how to configure dataflow endpoints for MQTT sources and destinations in Azure IoT Operations so that I can send data to and from MQTT brokers.
++
+# Configure MQTT dataflow endpoints
++
+MQTT dataflow endpoints are used for MQTT sources and destinations. You can configure the endpoint settings, Transport Layer Security (TLS), authentication, and other settings.
+
+## Prerequisites
+
+- An instance of [Azure IoT Operations Preview](../deploy-iot-ops/howto-deploy-iot-operations.md)
+- A [configured dataflow profile](howto-configure-dataflow-profile.md)
+
+## Azure IoT Operations Local MQTT broker
+
+Azure IoT Operations provides a built-in MQTT broker that you can use with dataflows. When you deploy Azure IoT Operations, a *default* MQTT broker dataflow endpoint is created with default settings. You can use this endpoint as a source or destination for dataflows.
+
+You can also create new local MQTT broker endpoints with custom settings. For example, you can create a new MQTT broker endpoint using a different port, authentication, or other settings.
+
+# [Portal](#tab/portal)
+
+1. In the operations experience portal, select the **Dataflow endpoints** tab.
+1. Under **Create new dataflow endpoint**, select **Azure IoT Operations Local MQTT** > **New**.
+
+ :::image type="content" source="media/howto-configure-mqtt-endpoint/local-mqtt-endpoint.png" alt-text="Screenshot using operations experience portal to create a new local MQTT dataflow endpoint.":::
+
+ Enter the following settings for the endpoint:
+
+ | Setting | Description |
+ | -- | - |
+ | Name | The name of the dataflow endpoint. |
+ | Host | The hostname and port of the MQTT broker. Use the format `<hostname>:<port>` |
+ | Authentication method | The method used for authentication. Choose *System assigned managed identity*, or *X509 certificate* |
+ | X509 client certificate | The X.509 client certificate used for authentication. Required if using *X509 certificate*. |
+ | X509 client key | The private key corresponding to the X.509 client certificate. Required if using *X509 certificate*. |
+ | X509 intermediate certificates | The intermediate certificates for the X.509 client certificate chain. Required if using *X509 certificate*. |
+
+# [Kubernetes](#tab/kubernetes)
+
+To configure an MQTT broker endpoint with default settings, you can omit the host field along with other optional fields.
+
+```yaml
+apiVersion: connectivity.iotoperations.azure.com/v1beta1
+kind: DataflowEndpoint
+metadata:
+ name: mq
+ namespace: azure-iot-operations
+spec:
+ endpointType: Mqtt
+ mqttSettings:
+ authentication:
+ method: ServiceAccountToken
+ serviceAccountTokenSettings:
+ audience: aio-internal
+```
+
+This configuration creates a connection to the default MQTT broker with the following settings:
+
+- Host: `aio-broker:18883` through the [default MQTT broker listener](../manage-mqtt-broker/howto-configure-brokerlistener.md#default-brokerlistener)
+- Authentication: service account token (SAT) through the [default BrokerAuthentication resource](../manage-mqtt-broker/howto-configure-authentication.md#default-brokerauthentication-resource)
+- TLS: Enabled
+- Trusted CA certificate: The default CA certificate `aio-ca-key-pair-test-only` from the [Default root CA](../manage-mqtt-broker/howto-configure-tls-auto.md#default-root-ca-and-issuer)
+
+> [!IMPORTANT]
+> If any of these default MQTT broker settings change, the dataflow endpoint must be updated to reflect the new settings. For example, if the default MQTT broker listener changes to use a different service name `my-mqtt-broker` and port 8885, you must update the endpoint to use the new host `host: my-mqtt-broker:8885`. The same applies to other settings like authentication and TLS.
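+
+For example, using the values from the preceding note, a sketch of the updated endpoint spec might look like the following. Only the `host` value changes from the default configuration:
+
+```yaml
+spec:
+  endpointType: Mqtt
+  mqttSettings:
+    host: my-mqtt-broker:8885
+    authentication:
+      method: ServiceAccountToken
+      serviceAccountTokenSettings:
+        audience: aio-internal
+```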
+
+# [Bicep](#tab/bicep)
+
+This Bicep template file from [Bicep File for MQTT-bridge dataflow Tutorial](https://gist.github.com/david-emakenemi/7a72df52c2e7a51d2424f36143b7da85) deploys the necessary dataflow and dataflow endpoints for MQTT broker and Azure Event Grid.
+
+Download the file to your local machine, and make sure to replace the values for `customLocationName`, `aioInstanceName`, and `eventGridHostName`.
+
+Next, deploy the resources using the [az stack group](/azure/azure-resource-manager/bicep/deployment-stacks?tabs=azure-powershell) command in your terminal:
+
+```azurecli
+az stack group create --name MyDeploymentStack --resource-group $RESOURCE_GROUP --template-file /workspaces/explore-iot-operations/mqtt-bridge.bicep --action-on-unmanage 'deleteResources' --deny-settings-mode 'none' --yes
+```
+This endpoint is the source for the dataflow that sends messages to Azure Event Grid.
+
+```bicep
+resource MqttBrokerDataflowEndpoint 'Microsoft.IoTOperations/instances/dataflowEndpoints@2024-08-15-preview' = {
+ parent: aioInstance
+ name: 'aiomq'
+ extendedLocation: {
+ name: customLocation.id
+ type: 'CustomLocation'
+ }
+ properties: {
+ endpointType: 'Mqtt'
+ mqttSettings: {
+ authentication: {
+ method: 'ServiceAccountToken'
+ serviceAccountTokenSettings: {
+ audience: 'aio-internal'
+ }
+ }
+ host: 'aio-broker:18883'
+ tls: {
+ mode: 'Enabled'
+ trustedCaCertificateConfigMapRef: 'azure-iot-operations-aio-ca-trust-bundle'
+ }
+ }
+ }
+}
+```
+
+- Host: `aio-broker:18883` through the [default MQTT broker listener](../manage-mqtt-broker/howto-configure-brokerlistener.md#default-brokerlistener)
+- Authentication: service account token (SAT) through the [default BrokerAuthentication resource](../manage-mqtt-broker/howto-configure-authentication.md#default-brokerauthentication-resource)
+- TLS: Enabled
+- Trusted CA certificate: The default CA certificate `azure-iot-operations-aio-ca-trust-bundle` from the [Default root CA](../manage-mqtt-broker/howto-configure-tls-auto.md#default-root-ca-and-issuer)
+++
+## How to configure a dataflow endpoint for MQTT brokers
+
+You can use an MQTT broker dataflow endpoint for dataflow sources and destinations.
+
+### Azure Event Grid
+
+[Azure Event Grid provides a fully managed MQTT broker](../../event-grid/mqtt-overview.md) that works with Azure IoT Operations dataflows.
+
+To configure an Azure Event Grid MQTT broker endpoint, we recommend that you use managed identity for authentication.
+
+# [Portal](#tab/portal)
+
+1. In the operations experience portal, select the **Dataflow endpoints** tab.
+1. Under **Create new dataflow endpoint**, select **Azure Event Grid MQTT** > **New**.
+
+ :::image type="content" source="media/howto-configure-mqtt-endpoint/event-grid-endpoint.png" alt-text="Screenshot using operations experience portal to create an Azure Event Grid endpoint.":::
+
+ Enter the following settings for the endpoint:
+
+ | Setting | Description |
+ | -- | - |
+ | Name | The name of the dataflow endpoint. |
+ | Host | The hostname and port of the MQTT broker. Use the format `<hostname>:<port>` |
+ | Authentication method | The method used for authentication. Choose *System assigned managed identity*, *User assigned managed identity*, or *X509 certificate* |
+ | Client ID | The client ID of the user-assigned managed identity. Required if using *User assigned managed identity*. |
+ | Tenant ID | The tenant ID of the user-assigned managed identity. Required if using *User assigned managed identity*. |
+ | X509 client certificate | The X.509 client certificate used for authentication. Required if using *X509 certificate*. |
+ | X509 client key | The private key corresponding to the X.509 client certificate. Required if using *X509 certificate*. |
+ | X509 intermediate certificates | The intermediate certificates for the X.509 client certificate chain. Required if using *X509 certificate*. |
+
+1. Select **Apply** to provision the endpoint.
+
+# [Kubernetes](#tab/kubernetes)
+
+1. Create an Event Grid namespace and enable MQTT.
+
+1. Get the managed identity of the Azure IoT Operations Arc extension.
+
+1. Assign the managed identity to the Event Grid namespace or topic space with an appropriate role like `EventGrid TopicSpaces Publisher` or `EventGrid TopicSpaces Subscriber`.
+
+1. This endpoint is the destination for the dataflow that receives messages from the default MQTT Broker.
+
+```yaml
+apiVersion: connectivity.iotoperations.azure.com/v1beta1
+kind: DataflowEndpoint
+metadata:
+ name: eventgrid
+ namespace: azure-iot-operations
+spec:
+ endpointType: Mqtt
+ mqttSettings:
+ host: <NAMESPACE>.<REGION>-1.ts.eventgrid.azure.net:8883
+ authentication:
+ method: SystemAssignedManagedIdentity
+ systemAssignedManagedIdentitySettings:
+ {}
+ tls:
+ mode: Enabled
+```
+
+# [Bicep](#tab/bicep)
+
+1. Create an Event Grid namespace and enable MQTT.
+
+1. Get the managed identity of the Azure IoT Operations Arc extension.
+
+1. Assign the managed identity to the Event Grid namespace or topic space with an appropriate role like `EventGrid TopicSpaces Publisher` or `EventGrid TopicSpaces Subscriber`.
+
+1. This endpoint is the destination for the dataflow that receives messages from the default MQTT broker.
+
+```bicep
+resource remoteMqttBrokerDataflowEndpoint 'Microsoft.IoTOperations/instances/dataflowEndpoints@2024-08-15-preview' = {
+ parent: aioInstance
+ name: 'eventgrid'
+ extendedLocation: {
+ name: customLocation.id
+ type: 'CustomLocation'
+ }
+ properties: {
+ endpointType: 'Mqtt'
+ mqttSettings: {
+ authentication: {
+ method: 'SystemAssignedManagedIdentity'
+ systemAssignedManagedIdentitySettings: {}
+ }
+ host: eventGridHostName
+ tls: {
+ mode: 'Enabled'
+ }
+ }
+ }
+}
+```
+++
+Once the endpoint is created, you can use it in a dataflow to connect to the Event Grid MQTT broker as a source or destination. The MQTT topics are configured in the dataflow.
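+
+For example, a minimal sketch of a dataflow destination operation that references this endpoint might look like the following. The `eventgrid` endpoint name comes from the earlier example, and the topic value is a placeholder:
+
+```yaml
+- operationType: Destination
+  destinationSettings:
+    endpointRef: eventgrid
+    dataDestination: telemetry/aggregated
+```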
+
+#### Use X.509 certificate authentication with Event Grid
+
+We recommend using managed identity for authentication. You can also use X.509 certificate authentication with the Event Grid MQTT broker.
+
+When you use X.509 authentication with an Event Grid MQTT broker, go to the Event Grid namespace > **Configuration** and check these settings:
+
+- **Enable MQTT**: Select the checkbox.
+- **Enable alternative client authentication name sources**: Select the checkbox.
+- **Certificate Subject Name**: Select this option in the dropdown list.
+- **Maximum client sessions per authentication name**: Set to **3** or more.
+
+The alternative client authentication and maximum client sessions options allow dataflows to use client certificate subject name for authentication instead of `MQTT CONNECT Username`. This capability is important so that dataflows can spawn multiple instances and still be able to connect. To learn more, see [Event Grid MQTT client certificate authentication](../../event-grid/mqtt-client-certificate-authentication.md) and [Multi-session support](../../event-grid/mqtt-establishing-multiple-sessions-per-client.md).
+
+#### Event Grid shared subscription limitation
+
+Azure Event Grid MQTT broker doesn't support shared subscriptions, which means that you can't set the `instanceCount` to more than `1` in the dataflow profile. If you set `instanceCount` greater than `1`, the dataflow fails to start.
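+
+As a sketch, and assuming that the dataflow profile resource uses the same API group as the other dataflow resources and exposes the instance count under `spec`, keeping the instance count at `1` might look like this:
+
+```yaml
+apiVersion: connectivity.iotoperations.azure.com/v1beta1
+kind: DataflowProfile
+metadata:
+  name: default
+  namespace: azure-iot-operations
+spec:
+  instanceCount: 1
+```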
+
+### Other MQTT brokers
+
+For other MQTT brokers, you can configure the endpoint, TLS, authentication, and other settings as needed.
+
+# [Portal](#tab/portal)
+
+1. In the operations experience portal, select the **Dataflow endpoints** tab.
+1. Under **Create new dataflow endpoint**, select **Custom MQTT Broker** > **New**.
+
+ :::image type="content" source="media/howto-configure-mqtt-endpoint/custom-mqtt-broker.png" alt-text="Screenshot using operations experience portal to create a custom MQTT broker endpoint.":::
+
+1. Enter the following settings for the endpoint:
+
+ | Setting | Description |
+    | -- | -- |
+ | Name | The name of the dataflow endpoint |
+    | Host | The hostname and port of the MQTT broker endpoint in the format `<hostname>:<port>`. |
+ | Authentication method | The method used for authentication. Choose *System assigned managed identity*, *User assigned managed identity*, or *Service account token*. |
+ | Service audience | The audience for the service account token. Required if using service account token. |
+ | Client ID | The client ID of the user-assigned managed identity. Required if using *User assigned managed identity*. |
+ | Tenant ID | The tenant ID of the user-assigned managed identity. Required if using *User assigned managed identity*. |
+ | Access token secret name | The name of the Kubernetes secret containing the SAS token. Required if using *Access token*. |
+
+1. Select **Apply** to provision the endpoint.
+
+# [Kubernetes](#tab/kubernetes)
+
+```yaml
+spec:
+ endpointType: Mqtt
+ mqttSettings:
+ host: <HOST>:<PORT>
+ authentication:
+ ...
+ tls:
+ mode: Enabled
+ trustedCaCertificateConfigMapRef: <YOUR-CA-CERTIFICATE-CONFIG-MAP>
+```
+
+# [Bicep](#tab/bicep)
+
+```bicep
+properties: {
+ endpointType: 'Mqtt'
+ mqttSettings: {
+ authentication: {
+ ...
+ }
+    host: '<MQTT-BROKER-HOST>:8883'
+ tls: {
+ mode: 'Enabled'
+ trustedCaCertificateConfigMapRef: '<YOUR CA CERTIFICATE CONFIG MAP>'
+ }
+ }
+ }
+```
+++
+## Use the endpoint in a dataflow source or destination
+
+Once you've configured the endpoint, you can use it in a dataflow as either a source or a destination. The MQTT topics are configured in the dataflow source or destination settings, which allows you to reuse the same *DataflowEndpoint* resource with multiple dataflows and different MQTT topics.
+
+# [Portal](#tab/portal)
+
+1. In the Azure IoT Operations Preview portal, create a new dataflow or edit an existing dataflow by selecting the **Dataflows** tab. If creating a new dataflow, select **Create dataflow** and replace `<new-dataflow>` with a name for the dataflow.
+1. In the editor, select **MQTT** as the source dataflow endpoint.
+
+ Enter the following settings for the source endpoint:
+
+ | Setting | Description |
+ | - | - |
+ | MQTT topic | The topic to which the dataflow subscribes (if source) or publishes (if destination). |
+ | Message schema| The schema that defines the structure of the messages being received (if source) or sent (if destination). You can select an existing schema or upload a new schema to the schema registry. |
+
+1. Select the dataflow endpoint for the destination. Choose an existing MQTT dataflow endpoint. For example, the default MQTT Broker endpoint or a custom MQTT broker endpoint.
+1. Select **Proceed** to configure the destination settings.
+1. Enter the MQTT topic to which the dataflow publishes messages.
+1. Select **Apply** to provision the dataflow.
+
+ :::image type="content" source="media/howto-configure-mqtt-endpoint/create-dataflow-mq-mq.png" alt-text="Screenshot using operations experience portal to create a dataflow with an MQTT source and destination.":::
+
+# [Kubernetes](#tab/kubernetes)
+
+```yaml
+apiVersion: connectivity.iotoperations.azure.com/v1beta1
+kind: Dataflow
+metadata:
+ name: my-dataflow
+ namespace: azure-iot-operations
+spec:
+ profileRef: default
+ mode: Enabled
+ operations:
+ - operationType: Source
+ sourceSettings:
+ endpointRef: mqsource
+ dataSources:
+ - thermostats/+/telemetry/temperature/#
+ - operationType: Destination
+ destinationSettings:
+ endpointRef: mqdestination
+ dataDestination:
+ - sensors/thermostats/temperature
+```
+
+# [Bicep](#tab/bicep)
+
+```bicep
+resource dataflow 'Microsoft.IoTOperations/instances/dataflowProfiles/dataflows@2024-08-15-preview' = {
+ parent: defaultDataflowProfile
+ name: 'my-dataflow'
+ extendedLocation: {
+ name: customLocation.id
+ type: 'CustomLocation'
+ }
+ properties: {
+ mode: 'Enabled'
+ operations: [
+ {
+ operationType: 'Source'
+ sourceSettings: {
+ endpointRef: 'mqsource'
+ dataSources: array('thermostats/+/telemetry/temperature/#')
+ }
+ }
+ {
+ operationType: 'Destination'
+ destinationSettings: {
+ endpointRef: 'mqdestination'
+ dataDestination: 'sensors/thermostats/temperature'
+ }
+ }
+ ]
+ }
+}
+```
+++
+For more information about dataflow destination settings, see [Create a dataflow](howto-create-dataflow.md).
+
+To customize the MQTT endpoint settings, see the following sections for more information.
+
+### Available authentication methods
+
+The following authentication methods are available for MQTT broker dataflow endpoints. For more information about enabling secure settings by configuring an Azure Key Vault and enabling workload identities, see [Enable secure settings in Azure IoT Operations Preview deployment](../deploy-iot-ops/howto-enable-secure-settings.md).
+
+#### X.509 certificate
+
+Many MQTT brokers, like Event Grid, support X.509 authentication. Dataflows can present a client X.509 certificate and negotiate the TLS communication.
+
+To use X.509 certificate authentication, you need to create a secret with the certificate and private key. Use the Kubernetes TLS secret containing the public certificate and private key. For example:
+
+```bash
+kubectl create secret tls my-tls-secret -n azure-iot-operations \
+ --cert=path/to/cert/file \
+ --key=path/to/key/file
+```
+
+# [Portal](#tab/portal)
+
+In the operations experience portal dataflow endpoint settings page, select the **Basic** tab then choose **Authentication method** > **X509 certificate**.
+
+Enter the following settings for the endpoint:
+
+| Setting | Description |
+| -- | -- |
+| X509 client certificate | The X.509 client certificate used for authentication. |
+| X509 intermediate certificates | The intermediate certificates for the X.509 client certificate chain. |
+| X509 client key | The private key corresponding to the X.509 client certificate. |
+
+# [Kubernetes](#tab/kubernetes)
+
+```yaml
+mqttSettings:
+ authentication:
+ method: X509Certificate
+ x509CertificateSettings:
+ secretRef: <YOUR-X509-SECRET-NAME>
+```
+
+# [Bicep](#tab/bicep)
+
+```bicep
+mqttSettings: {
+ authentication: {
+ method: 'X509Certificate'
+    x509CertificateSettings: {
+      secretRef: '<YOUR-X509-SECRET-NAME>'
+    }
+ }
+}
+```
+++
+#### System-assigned managed identity
+
+To use system-assigned managed identity for authentication, you don't need to create a secret. The system-assigned managed identity is used to authenticate with the MQTT broker.
+
+Before you configure the endpoint, make sure that the Azure IoT Operations managed identity has the necessary permissions to connect to the MQTT broker. For example, with Azure Event Grid MQTT broker, assign the managed identity to the Event Grid namespace or topic space with [an appropriate role](../../event-grid/mqtt-client-microsoft-entra-token-and-rbac.md#authorization-to-grant-access-permissions).
+
+Then, configure the endpoint with system-assigned managed identity settings. In most cases, when you use the endpoint with Event Grid, you can leave the settings empty as shown in the following example. This configuration sets the managed identity audience to the Event Grid common audience, `https://eventgrid.azure.net`.
+
+# [Portal](#tab/portal)
+
+In the operations experience portal dataflow endpoint settings page, select the **Basic** tab then choose **Authentication method** > **System assigned managed identity**.
+
+# [Kubernetes](#tab/kubernetes)
+
+```yaml
+mqttSettings:
+ authentication:
+ method: SystemAssignedManagedIdentity
+ systemAssignedManagedIdentitySettings:
+ {}
+```
+
+If you need to set a different audience, you can specify it in the settings.
+
+```yaml
+mqttSettings:
+ authentication:
+ method: SystemAssignedManagedIdentity
+ systemAssignedManagedIdentitySettings:
+ audience: https://<AUDIENCE>
+```
+
+# [Bicep](#tab/bicep)
++
+```bicep
+mqttSettings: {
+ authentication: {
+ method: 'SystemAssignedManagedIdentity'
+ systemAssignedManagedIdentitySettings: {}
+ }
+}
+```
+++
+#### User-assigned managed identity
+
+# [Portal](#tab/portal)
+
+In the operations experience portal dataflow endpoint settings page, select the **Basic** tab then choose **Authentication method** > **User assigned managed identity**.
+
+Enter the user assigned managed identity client ID and tenant ID in the appropriate fields.
+
+# [Kubernetes](#tab/kubernetes)
+
+To use a user-assigned managed identity, specify the `UserAssignedManagedIdentity` authentication method and provide the `clientId` and `tenantId` of the managed identity.
+
+```yaml
+mqttSettings:
+ authentication:
+ method: UserAssignedManagedIdentity
+ userAssignedManagedIdentitySettings:
+ clientId: <ID>
+ tenantId: <ID>
+```
+
+# [Bicep](#tab/bicep)
+
+```bicep
+mqttSettings: {
+ authentication: {
+ method: 'UserAssignedManagedIdentity'
+ userAssignedManagedIdentitySettings: {
+      clientId: '<ID>'
+ tenantId: '<ID>'
+ }
+ }
+}
+```
+++
+#### Kubernetes service account token (SAT)
+
+To use Kubernetes service account token (SAT) for authentication, you don't need to create a secret. The SAT is used to authenticate with the MQTT broker.
+
+# [Portal](#tab/portal)
+
+In the operations experience portal dataflow endpoint settings page, select the **Basic** tab then choose **Authentication method** > **Service account token**.
+
+Enter the service audience.
+
+# [Kubernetes](#tab/kubernetes)
+
+```yaml
+mqttSettings:
+ authentication:
+ method: ServiceAccountToken
+ serviceAccountTokenSettings:
+ audience: <YOUR-SERVICE-ACCOUNT-AUDIENCE>
+```
+
+# [Bicep](#tab/bicep)
+
+```bicep
+mqttSettings: {
+ authentication: {
+ method: 'ServiceAccountToken'
+ serviceAccountTokenSettings: {
+ audience: '<YOUR-SERVICE-ACCOUNT-AUDIENCE>'
+ }
+ }
+}
+```
+++
+If the audience isn't specified, the default audience for the Azure IoT Operations MQTT broker is used.
+
+#### Anonymous
+
+To use anonymous authentication, set the authentication method to `Anonymous`.
+
+```yaml
+mqttSettings:
+ authentication:
+ method: Anonymous
+ anonymousSettings:
+ {}
+```
+++
+## Advanced settings
+
+You can set advanced settings for the MQTT broker dataflow endpoint such as TLS, trusted CA certificate, MQTT messaging settings, and CloudEvents. You can set these settings in the dataflow endpoint **Advanced** portal tab or within the dataflow endpoint custom resource.
+
+# [Portal](#tab/portal)
+
+In the operations experience portal, select the **Advanced** tab for the dataflow endpoint.
++
+| Setting | Description |
+| -- | -- |
+| Quality of service (QoS) | Defines the level of guarantee for message delivery. Values are 0 (at most once), 1 (at least once). Default is 1. |
+| Keep alive | The keep alive interval (in seconds) is the maximum time that the dataflow client can be idle before sending a PINGREQ message to the broker. The default is 60 seconds. |
+| Maximum in-flight messages | You can set the maximum number of inflight messages that the dataflow MQTT client can have. The default is 100. |
+| Protocol | By default, WebSockets isn't enabled. To use MQTT over WebSockets, set the protocol to WebSockets. |
+| Retain | Specify if the dataflow should keep the retain flag on MQTT messages. The default is Keep.|
+| Session expiry | The session expiry interval (in seconds) is the maximum time that an MQTT session is maintained if the dataflow client disconnects. The default is 3600 seconds. |
+| TLS mode enabled | Indicates whether TLS is enabled for secure communication with the MQTT broker. |
+| Client ID prefix | The client ID is generated by appending the dataflow instance name to the prefix. |
+| Cloud event attributes | For *Propagate*, CloudEvent properties are passed through for messages that contain the required properties. If the message doesn't contain the required properties, the message is passed through as is. For *Create or re-map*, CloudEvent properties are passed through for messages that contain the required properties. If the message doesn't contain the required properties, the properties are generated. |
+
+# [Kubernetes](#tab/kubernetes)
+
+### TLS
+
+Under `mqttSettings.tls`, you can configure the TLS settings for the MQTT broker. To enable or disable TLS, set the `mode` field to `Enabled` or `Disabled`.
+
+```yaml
+mqttSettings:
+ tls:
+ mode: Enabled
+```
+
+### Trusted CA certificate
+
+To use a trusted CA certificate for the MQTT broker, you can create a Kubernetes ConfigMap with the CA certificate and reference it in the DataflowEndpoint resource.
+
+```yaml
+mqttSettings:
+ tls:
+ mode: Enabled
+ trustedCaCertificateConfigMapRef: <your CA certificate config map>
+```
+
+This setting is useful when the MQTT broker uses a self-signed certificate or a certificate that isn't trusted by default. The CA certificate is used to verify the MQTT broker's certificate. For Event Grid, its CA certificate is already widely trusted, so you can omit this setting.
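+
+For example, a minimal sketch of a ConfigMap that holds the CA certificate might look like the following. The data key name `ca.crt` is an assumption; check the expected key name for your deployment.
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: <your CA certificate config map>
+  namespace: azure-iot-operations
+data:
+  ca.crt: |
+    -----BEGIN CERTIFICATE-----
+    <certificate contents placeholder>
+    -----END CERTIFICATE-----
+```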
+
+### MQTT messaging settings
+
+Under `mqttSettings`, you can configure the MQTT messaging settings for the dataflow MQTT client used with the endpoint.
+
+#### Client ID prefix
+
+You can set a client ID prefix for the MQTT client. The client ID is generated by appending the dataflow instance name to the prefix.
+
+```yaml
+mqttSettings:
+ clientIdPrefix: dataflow
+```
+
+#### QoS
+
+You can set the Quality of Service (QoS) level for the MQTT messages to either 1 or 0. The default is 1.
+
+```yaml
+mqttSettings:
+ qos: 1
+```
+
+#### Retain
+
+Use the `retain` setting to specify whether the dataflow should keep the retain flag on MQTT messages. The default is `Keep`.
+
+```yaml
+mqttSettings:
+ retain: Keep
+```
+
+This setting is useful to ensure that the remote broker has the same message as the local broker, which can be important for Unified Namespace scenarios.
+
+If set to `Never`, the retain flag is removed from the MQTT messages. This can be useful when you don't want the remote broker to retain any messages or if the remote broker doesn't support retain.
+
+The *retain* setting takes effect only if the dataflow uses an MQTT endpoint as both the source and destination, for example, in an MQTT bridge scenario.
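+
+For example, to remove the retain flag:
+
+```yaml
+mqttSettings:
+  retain: Never
+```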
+
+#### Session expiry
+
+You can set the session expiry interval for the dataflow MQTT client. The session expiry interval is the maximum time that an MQTT session is maintained if the dataflow client disconnects. The default is 3600 seconds.
+
+```yaml
+mqttSettings:
+ sessionExpirySeconds: 3600
+```
+
+#### MQTT or WebSockets protocol
+
+By default, WebSockets isn't enabled. To use MQTT over WebSockets, set the `protocol` field to `WebSockets`.
+
+```yaml
+mqttSettings:
+ protocol: WebSockets
+```
+
+#### Max inflight messages
+
+You can set the maximum number of inflight messages that the dataflow MQTT client can have. The default is 100.
+
+```yaml
+mqttSettings:
+ maxInflightMessages: 100
+```
+
+When the MQTT endpoint is used as a source (subscribe), this setting is the receive maximum. When the MQTT endpoint is used as a destination (publish), it's the maximum number of messages to send before waiting for an acknowledgment.
+
+#### Keep alive
+
+You can set the keep alive interval for the dataflow MQTT client. The keep alive interval is the maximum time that the dataflow client can be idle before sending a PINGREQ message to the broker. The default is 60 seconds.
+
+```yaml
+mqttSettings:
+ keepAliveSeconds: 60
+```
+
+#### CloudEvents
+
+[CloudEvents](https://cloudevents.io/) are a way to describe event data in a common way. The CloudEvents settings are used to send or receive messages in the CloudEvents format. You can use CloudEvents for event-driven architectures where different services need to communicate with each other in the same or different cloud providers.
+
+The `CloudEventAttributes` options are `Propagate` or `CreateOrRemap`.
+
+```yaml
+mqttSettings:
+ CloudEventAttributes: Propagate # or CreateOrRemap
+```
+
+##### Propagate setting
+
+CloudEvent properties are passed through for messages that contain the required properties. If the message doesn't contain the required properties, the message is passed through as is.
+
+| Name | Required | Sample value | Output value |
+| -- | -- | -- | -- |
+| `specversion` | Yes | `1.0` | Passed through as is |
+| `type` | Yes | `ms.aio.telemetry` | Passed through as is |
+| `source` | Yes | `aio://mycluster/myoven` | Passed through as is |
+| `id` | Yes | `A234-1234-1234` | Passed through as is |
+| `subject` | No | `aio/myoven/telemetry/temperature` | Passed through as is |
+| `time` | No | `2018-04-05T17:31:00Z` | Passed through as is. It's not restamped. |
+| `datacontenttype` | No | `application/json` | Changed to the output data content type after the optional transform stage. |
+| `dataschema` | No | `sr://fabrikam-schemas/123123123234234234234234#1.0.0` | If an output data transformation schema is given in the transformation configuration, `dataschema` is changed to the output schema. |
+
+##### CreateOrRemap setting
+
+CloudEvent properties are passed through for messages that contain the required properties. If the message doesn't contain the required properties, the properties are generated.
+
+| Name | Required | Generated value if missing |
+| -- | -- | -- |
+| `specversion` | Yes | `1.0` |
+| `type` | Yes | `ms.aio-dataflow.telemetry` |
+| `source` | Yes | `aio://<target-name>` |
+| `id` | Yes | Generated UUID in the target client |
+| `subject` | No | The output topic where the message is sent |
+| `time` | No | Generated as RFC 3339 in the target client |
+| `datacontenttype` | No | Changed to the output data content type after the optional transform stage |
+| `dataschema` | No | Schema defined in the schema registry |
++
+# [Bicep](#tab/bicep)
+
+The following snippet is a sketch of how the advanced MQTT settings might look in Bicep; the property names mirror the Kubernetes settings shown in the Kubernetes tab. Check the Bicep resource reference for your API version for the exact supported properties.
+
+```bicep
+mqttSettings: {
+  qos: 1
+  retain: 'Keep'
+  sessionExpirySeconds: 3600
+  keepAliveSeconds: 60
+  maxInflightMessages: 100
+  clientIdPrefix: 'dataflow'
+}
+```
++
iot-operations Howto Create Dataflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/connect-to-cloud/howto-create-dataflow.md
Previously updated : 08/13/2024 Last updated : 10/02/2024
+ai-usage: ai-assisted
#CustomerIntent: As an operator, I want to understand how to create a dataflow to connect data sources.
-# Create a dataflow
+# Configure dataflows in Azure IoT Operations
[!INCLUDE [public-preview-note](../includes/public-preview-note.md)]
-A dataflow is the path that data takes from the source to the destination with optional transformations. You can configure the dataflow by using the Azure IoT Operations portal or by creating a *Dataflow* custom resource. Before you create a dataflow, you must [configure dataflow endpoints for the data sources and destinations](howto-configure-dataflow-endpoint.md).
+A dataflow is the path that data takes from the source to the destination with optional transformations. You can configure the dataflow by creating a *Dataflow* custom resource or using the Azure IoT Operations Studio portal. A dataflow is made up of three parts: the **source**, the **transformation**, and the **destination**.
+
+<!--
+```mermaid
+flowchart LR
+ subgraph Source
+ A[DataflowEndpoint]
+ end
+ subgraph BuiltInTransformation
+ direction LR
+ Datasets - -> Filter
+ Filter - -> Map
+ end
+ subgraph Destination
+ B[DataflowEndpoint]
+ end
+ Source - -> BuiltInTransformation
+ BuiltInTransformation - -> Destination
+```
+-->
++
+To define the source and destination, you need to configure the dataflow endpoints. The transformation is optional and can include operations like enriching the data, filtering the data, and mapping the data to another field.
+
+This article shows you how to create a dataflow with an example, including the source, transformation, and destination.
+
+## Prerequisites
+
+- An instance of [Azure IoT Operations Preview](../deploy-iot-ops/howto-deploy-iot-operations.md)
+- A [configured dataflow profile](howto-configure-dataflow-profile.md)
+- [Dataflow endpoints](howto-configure-dataflow-endpoint.md). For example, create a dataflow endpoint for the [local MQTT broker](./howto-configure-mqtt-endpoint.md#azure-iot-operations-local-mqtt-broker). You can use this endpoint for both the source and destination. Or, you can try other endpoints like Kafka, Event Hubs, or Azure Data Lake Storage. To learn how to configure each type of dataflow endpoint, see [Configure dataflow endpoints](howto-configure-dataflow-endpoint.md).
+
+## Create dataflow
-The following example is a dataflow configuration with an MQTT source endpoint, transformations, and a Kafka destination endpoint:
+Once you have dataflow endpoints, you can use them to create a dataflow. Recall that a dataflow is made up of three parts: the source, the transformation, and the destination.
+
+# [Portal](#tab/portal)
+
+To create a dataflow in the operations experience portal, select **Dataflow** > **Create dataflow**.
++
+# [Kubernetes](#tab/kubernetes)
+
+The overall structure of a dataflow configuration is as follows:
```yaml apiVersion: connectivity.iotoperations.azure.com/v1beta1 kind: Dataflow metadata: name: my-dataflow
+ namespace: azure-iot-operations
spec:
- profileRef: my-dataflow-profile
- mode: enabled
+ profileRef: default
+ mode: Enabled
operations:
- - operationType: source
- name: my-source
+ - operationType: Source
sourceSettings:
- endpointRef: mq
- dataSources:
- - thermostats/+/telemetry/temperature/#
- - humidifiers/+/telemetry/humidity/#
- serializationFormat: json
- - operationType: builtInTransformation
- name: my-transformation
+ # See source configuration section
+ - operationType: BuiltInTransformation
builtInTransformationSettings:
- filter:
- - inputs:
- - 'temperature.Value'
- - '"Tag 10".Value'
- expression: "$1*$2<100000"
- map:
- - inputs:
- - '*'
- output: '*'
- - inputs:
- - temperature.Value
- output: TemperatureF
- expression: cToF($1)
- - inputs:
- - '"Tag 10".Value'
- output: 'Tag 10'
- serializationFormat: json
- - operationType: destination
- name: my-destination
+ # See transformation configuration section
+ - operationType: Destination
destinationSettings:
- endpointRef: kafka
- dataDestination: factory
+ # See destination configuration section
```
-| Name | Description |
-|--|-|
-| `profileRef` | Reference to the [dataflow profile](howto-configure-dataflow-profile.md). |
-| `mode` | Mode of the dataflow: `enabled` or `disabled`. |
-| `operations[]` | Operations performed by the dataflow. |
-| `operationType` | Type of operation: `source`, `destination`, or `builtInTransformation`. |
++
Review the following sections to learn how to configure the operation types of the dataflow.
-## Configure source
+## Configure a source with a dataflow endpoint to get data
+
+To configure a source for the dataflow, specify the endpoint reference and data source. You can specify a list of data sources for the endpoint.
+
+# [Portal](#tab/portal)
+
+### Use Asset as a source
+
+You can use an [asset](../discover-manage-assets/overview-manage-assets.md) as the source for the dataflow. This is only available in the operations experience portal.
+
+1. Under **Source details**, select **Asset**.
+1. Select the asset you want to use as the source endpoint.
+1. Select **Proceed**.
+
+ A list of datapoints for the selected asset is displayed.
+
+ :::image type="content" source="media/howto-create-dataflow/dataflow-source-asset.png" alt-text="Screenshot using operations experience portal to select an asset as the source endpoint.":::
+
+1. Select **Apply** to use the asset as the source endpoint.
+
+# [Kubernetes](#tab/kubernetes)
+
+Configuring an asset as a source is only available in the operations experience portal.
++
-To configure a source for the dataflow, specify the endpoint reference and data source. You can specify a list of data sources for the endpoint. For example, MQTT or Kafka topics. The following definition is an example of a dataflow configuration with a source endpoint and data source:
+### Use MQTT as a source
+
+# [Portal](#tab/portal)
+
+1. Under **Source details**, select **MQTT**.
+1. Enter the **MQTT Topic** that you want to listen to for incoming messages.
+1. Choose a **Message schema** from the dropdown list or upload a new schema. If the source data has optional fields or fields with different types, specify a deserialization schema to ensure consistency. For example, the data might have fields that aren't present in all messages. Without the schema, the transformation can't handle these fields as they would have empty values. With the schema, you can specify default values or ignore the fields.
+
+ :::image type="content" source="media/howto-create-dataflow/dataflow-source-mqtt.png" alt-text="Screenshot using operations experience portal to select MQTT as the source endpoint.":::
+
+1. Select **Apply**.
+
+# [Kubernetes](#tab/kubernetes)
+
+For example, to configure a source using an MQTT endpoint and two MQTT topic filters, use the following configuration:
```yaml
-apiVersion: connectivity.iotoperations.azure.com/v1beta1
-kind: Dataflow
-metadata:
- name: mq-to-kafka
- namespace: azure-iot-operations
-spec:
- profileRef: example-dataflow
- operations:
- - operationType: source
- sourceSettings:
- endpointRef: mq-source
- dataSources:
- - azure-iot-operations/data/thermostat
+sourceSettings:
+ endpointRef: mq
+ dataSources:
+ - thermostats/+/telemetry/temperature/#
+ - humidifiers/+/telemetry/humidity/#
```
-| Name | Description |
-|--||
-| `operationType` | `source` |
-| `sourceSettings` | Settings for the `source` operation. |
-| `sourceSettings.endpointRef` | Reference to the `source` endpoint. |
-| `sourceSettings.dataSources` | Data sources for the `source` operation. Wildcards ( `#` and `+` ) are supported. |
+Because `dataSources` allows you to specify MQTT or Kafka topics without modifying the endpoint configuration, you can reuse the endpoint for multiple dataflows even if the topics are different. To learn more, see [Reuse dataflow endpoints](./howto-configure-dataflow-endpoint.md#reuse-endpoints).
+
-## Configure transformation
+#### Specify schema to deserialize data
-The transformation operation is where you can transform the data from the source before you send it to the destination. Transformations are optional. If you don't need to make changes to the data, don't include the transformation operation in the dataflow configuration. Multiple transformations are chained together in stages regardless of the order in which they're specified in the configuration.
+If the source data has optional fields or fields with different types, specify a deserialization schema to ensure consistency. For example, the data might have fields that aren't present in all messages. Without the schema, the transformation can't handle these fields as they would have empty values. With the schema, you can specify default values or ignore the fields.
```yaml
spec:
  operations:
- - operationType: builtInTransformation
- name: transform1
- builtInTransformationSettings:
- datasets:
- # ...
- filter:
- # ...
- map:
- # ...
+ - operationType: Source
+ sourceSettings:
+ serializationFormat: Json
+ schemaRef: aio-sr://exampleNamespace/exampleAvroSchema:1.0.0
+```
+
+To specify the schema, create the file and store it in the schema registry.
+
+```json
+{
+ "type": "record",
+ "name": "Temperature",
+ "fields": [
+ {"name": "deviceId", "type": "string"},
+ {"name": "temperature", "type": "float"}
+ ]
+}
+```
+
+> [!NOTE]
+> The only supported serialization format is JSON. The schema is optional.
+
+For more information about schema registry, see [Understand message schemas](concept-schema-registry.md).
+
+#### Shared subscriptions
+
+<!-- TODO: may not be final -->
+
+To use shared subscriptions with MQTT sources, you can specify the shared subscription topic in the form of `$shared/<subscription-group>/<topic>`.
+
+```yaml
+sourceSettings:
+ dataSources:
+ - $shared/myGroup/thermostats/+/telemetry/temperature/#
+```
+
+> [!NOTE]
+> If the instance count in the [dataflow profile](howto-configure-dataflow-profile.md) is greater than 1, you must use a shared subscription topic.
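+
+For example, a dataflow profile that runs more than one instance might look like the following sketch. The profile name is illustrative, and the `instanceCount` field is assumed to match the dataflow profile article, so check that article for the exact schema.
+
+```yaml
+apiVersion: connectivity.iotoperations.azure.com/v1beta1
+kind: DataflowProfile
+metadata:
+  name: example-dataflow     # illustrative name
+  namespace: azure-iot-operations
+spec:
+  instanceCount: 3           # more than 1, so MQTT sources must use $shared/... topics
+```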
+
+<!-- TODO: Details -->
+++
+## Configure transformation to process data
+
+The transformation operation is where you can transform the data from the source before you send it to the destination. Transformations are optional. If you don't need to make changes to the data, don't include the transformation operation in the dataflow configuration. Multiple transformations are chained together in stages regardless of the order in which they're specified in the configuration. The order of the stages is always:
+
+1. **Enrich**: Add additional data to the source data given a dataset and condition to match.
+1. **Filter**: Filter the data based on a condition.
+1. **Map**: Move data from one field to another with an optional conversion.
+
+# [Portal](#tab/portal)
+
+In the operations experience portal, select **Dataflow** > **Add transform (optional)**.
++
+# [Kubernetes](#tab/kubernetes)
+
+```yaml
+builtInTransformationSettings:
+ datasets:
+ # ...
+ filter:
+ # ...
+ map:
+ # ...
```
-| Name | Description |
-|--||
-| `operationType` | `builtInTransformation` |
-| `name` | Name of the transformation. |
-| `builtInTransformationSettings` | Settings for the `builtInTransformation` operation. |
-| `builtInTransformationSettings.datasets` | Add other data to the source data given a dataset and condition to match. |
-| `builtInTransformationSettings.filter` | Filter the data based on a condition. |
-| `builtInTransformationSettings.map` | Move data from one field to another with an optional conversion. |
+<!-- TODO: link to API reference -->
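+
+As a consolidated sketch, the following block combines the enrich, filter, and map settings used in the examples in the sections that follow, so you can see how the three stages fit together:
+
+```yaml
+builtInTransformationSettings:
+  datasets:
+    - key: assetDataset
+      inputs:
+      - $source.deviceId # - $1
+      - $context(assetDataset).asset # - $2
+      expression: $1 == $2
+  filter:
+    - inputs:
+      - temperature ? $last # - $1
+      expression: "$1 > 20"
+  map:
+    - inputs:
+      - temperature # - $1
+      output: temperatureCelsius
+      expression: "($1 - 32) * 5/9"
+```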
+++

### Enrich: Add reference data

To enrich the data, you can use the reference dataset in the Azure IoT Operations [distributed state store (DSS)](../create-edge-apps/concept-about-state-store-protocol.md). The dataset is used to add extra data to the source data based on a condition. The condition is specified as a field in the source data that matches a field in the dataset.
-| Name | Description |
-||-|
-| `builtInTransformationSettings.datasets.key` | Dataset used for enrichment (key in DSS). |
-| `builtInTransformationSettings.datasets.expression` | Condition for the enrichment operation. |
- Key names in the distributed state store correspond to a dataset in the dataflow configuration.
+# [Portal](#tab/portal)
+
+Currently, the enrich operation isn't available in the operations experience portal.
+
+# [Kubernetes](#tab/kubernetes)
+ For example, you could use the `deviceId` field in the source data to match the `asset` field in the dataset: ```yaml
-spec:
- operations:
- - operationType: builtInTransformation
- name: transform1
- builtInTransformationSettings:
- datasets:
- - key: assetDataset
- inputs:
- - $source.deviceId # - $1
- - $context(assetDataset).asset # - $2
- expression: $1 == $2
+builtInTransformationSettings:
+ datasets:
+ - key: assetDataset
+ inputs:
+ - $source.deviceId # - $1
+ - $context(assetDataset).asset # - $2
+ expression: $1 == $2
```
If the dataset has a record with the `asset` field, similar to:
The data from the source with the `deviceId` field matching `thermostat1` has the `location` and `manufacturer` fields available in `filter` and `map` stages.
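+
+For example, a later map stage could copy the enriched `manufacturer` field to the output, mirroring the `location` mapping shown in the map section of this article (the output field name is illustrative):
+
+```yaml
+builtInTransformationSettings:
+  map:
+    - inputs:
+      - $context(assetDataset).manufacturer
+      output: manufacturer
+```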
+<!-- TODO: link to API reference -->
+++

You can load sample data into the DSS by using the [DSS set tool sample](https://github.com/Azure-Samples/explore-iot-operations/tree/main/samples/dss_set). For more information about condition syntax, see [Enrich data by using dataflows](concept-dataflow-enrich.md) and [Convert data using dataflows](concept-dataflow-conversions.md).
For more information about condition syntax, see [Enrich data by using dataflows
To filter the data on a condition, you can use the `filter` stage. The condition is specified as a field in the source data that matches a value.
-| Name | Description |
-||-|
-| `builtInTransformationSettings.filter.inputs[]` | Inputs to evaluate a filter condition. |
-| `builtInTransformationSettings.filter.expression` | Condition for the filter evaluation. |
+# [Portal](#tab/portal)
+
+1. Under **Transform (optional)**, select **Filter** > **Add**.
+1. Choose the datapoints to include in the dataset.
+1. Add a filter condition and description.
+
+ :::image type="content" source="media/howto-create-dataflow/dataflow-filter.png" alt-text="Screenshot using operations experience portal to add a filter transform.":::
+
+1. Select **Apply**.
+
+# [Kubernetes](#tab/kubernetes)
For example, you could use the `temperature` field in the source data to filter the data: ```yaml
-spec:
- operations:
- - operationType: builtInTransformation
- name: transform1
- builtInTransformationSettings:
- filter:
- - inputs:
- - temperature ? $last # - $1
- expression: "$1 > 20"
+builtInTransformationSettings:
+ filter:
+ - inputs:
+ - temperature ? $last # - $1
+ expression: "$1 > 20"
``` If the `temperature` field is greater than 20, the data is passed to the next stage. If the `temperature` field is less than or equal to 20, the data is filtered.
+<!-- TODO: link to API reference -->
+++

### Map: Move data from one field to another

To map the data to another field with optional conversion, you can use the `map` operation. The conversion is specified as a formula that uses the fields in the source data.
-| Name | Description |
-||-|
-| `builtInTransformationSettings.map[].inputs[]` | Inputs for the map operation |
-| `builtInTransformationSettings.map[].output` | Output field for the map operation |
-| `builtInTransformationSettings.map[].expression` | Conversion formula for the map operation |
+# [Portal](#tab/portal)
+
+In the operations experience portal, mapping is currently supported using **Compute** transforms.
+
+1. Under **Transform (optional)**, select **Compute** > **Add**.
+1. Enter the required fields and expressions.
+
+ :::image type="content" source="media/howto-create-dataflow/dataflow-compute.png" alt-text="Screenshot using operations experience portal to add a compute transform.":::
+
+1. Select **Apply**.
+
+# [Kubernetes](#tab/kubernetes)
For example, you could use the `temperature` field in the source data to convert the temperature to Celsius and store it in the `temperatureCelsius` field. You could also enrich the source data with the `location` field from the contextualization dataset: ```yaml
-spec:
- operations:
- - operationType: builtInTransformation
- name: transform1
- builtInTransformationSettings:
- map:
- - inputs:
- - temperature # - $1
- output: temperatureCelsius
- expression: "($1 - 32) * 5/9"
- - inputs:
- - $context(assetDataset).location
- output: location
+builtInTransformationSettings:
+ map:
+ - inputs:
+ - temperature # - $1
+ output: temperatureCelsius
+ expression: "($1 - 32) * 5/9"
+ - inputs:
+ - $context(assetDataset).location
+ output: location
```
+<!-- TODO: link to API reference -->
+++

To learn more, see [Map data by using dataflows](concept-dataflow-mapping.md) and [Convert data by using dataflows](concept-dataflow-conversions.md).
-## Configure destination
+### Serialize data according to a schema
+
+If you want to serialize the data before sending it to the destination, you need to specify a schema and serialization format. Otherwise, the data is serialized in JSON with the types inferred. Remember that storage endpoints like Microsoft Fabric or Azure Data Lake require a schema to ensure data consistency.
-To configure a destination for the dataflow, you need to specify the endpoint and a path (topic or table) for the destination.
+# [Portal](#tab/portal)
-| Name | Description |
-|--|-|
-| `destinationSettings.endpointRef` | Reference to the `destination` endpoint |
-| `destinationSettings.dataDestination` | Destination for the data |
+Specify the **Output** schema when you add the destination dataflow endpoint.
-### Configure destination endpoint reference
+# [Kubernetes](#tab/kubernetes)
-To configure the endpoint for the destination, you need to specify the ID and endpoint reference:
```yaml
-spec:
- operations:
- - operationType: destination
- name: destination1
- destinationSettings:
- endpointRef: eventgrid
+builtInTransformationSettings:
+ serializationFormat: Parquet
+ schemaRef: aio-sr://<NAMESPACE>/<SCHEMA>:<VERSION>
+```
+
+To specify the schema, you can create a Schema custom resource with the schema definition.
+
+For more information about schema registry, see [Understand message schemas](concept-schema-registry.md).
++
+```json
+{
+ "type": "record",
+ "name": "Temperature",
+ "fields": [
+ {"name": "deviceId", "type": "string"},
+    {"name": "temperatureCelsius", "type": "float"},
+ {"name": "location", "type": "string"}
+ ]
+}
```
-### Configure destination path
++
+Supported serialization formats are JSON, Parquet, and Delta.
+
+## Configure destination with a dataflow endpoint to send data
+
+To configure a destination for the dataflow, specify the endpoint reference and data destination. You can specify a list of data destinations for the endpoint, such as MQTT or Kafka topics.
+
+# [Portal](#tab/portal)
+
+1. Select the dataflow endpoint to use as the destination.
-After you have the endpoint, you can configure the path for the destination. If the destination is an MQTT or Kafka endpoint, use the path to specify the topic:
+ :::image type="content" source="media/howto-create-dataflow/dataflow-destination.png" alt-text="Screenshot using operations experience portal to select Event Hubs destination endpoint.":::
+
+1. Select **Proceed** to configure the destination.
+1. Add the mapping details based on the type of destination.
+
+# [Kubernetes](#tab/kubernetes)
+
+For example, to configure a destination using the MQTT endpoint created earlier and a static MQTT topic, use the following configuration:
```yaml
-- operationType: destination
- destinationSettings:
- endpointRef: eventgrid
- dataDestination: factory
+destinationSettings:
+ endpointRef: mq
+ dataDestination: factory
```
-For storage endpoints like Microsoft Fabric, use the path to specify the table name:
+If you've created storage endpoints like Microsoft Fabric, use the data destination field to specify the table or container name:
```yaml
-- operationType: destination
- destinationSettings:
- endpointRef: adls
- dataDestination: telemetryTable
+destinationSettings:
+ endpointRef: adls
+ dataDestination: telemetryTable
+```
+
+## Example
+
+The following example is a dataflow configuration that uses the MQTT endpoint for the source and destination. The source reads data from the MQTT topics `thermostats/+/telemetry/temperature/#` and `humidifiers/+/telemetry/humidity/#`. The transformation converts the temperature to Fahrenheit and passes on only the data where the temperature multiplied by the `Tag 10` value is less than 100000. The destination sends the data to the MQTT topic `factory`.
+
+```yaml
+apiVersion: connectivity.iotoperations.azure.com/v1beta1
+kind: Dataflow
+metadata:
+ name: my-dataflow
+ namespace: azure-iot-operations
+spec:
+ profileRef: default
+ mode: Enabled
+ operations:
+ - operationType: Source
+ sourceSettings:
+ endpointRef: mq
+ dataSources:
+ - thermostats/+/telemetry/temperature/#
+ - humidifiers/+/telemetry/humidity/#
+ - operationType: builtInTransformation
+ builtInTransformationSettings:
+ filter:
+ - inputs:
+ - 'temperature.Value'
+ - '"Tag 10".Value'
+ expression: "$1*$2<100000"
+ map:
+ - inputs:
+ - '*'
+ output: '*'
+ - inputs:
+ - temperature.Value
+ output: TemperatureF
+ expression: cToF($1)
+ - inputs:
+ - '"Tag 10".Value'
+ output: 'Tag 10'
+ - operationType: Destination
+ destinationSettings:
+ endpointRef: mq
+ dataDestination: factory
+```
+
+<!-- TODO: add links to examples in the reference docs -->
+++
+## Verify a dataflow is working
+
+Follow [Tutorial: Bi-directional MQTT bridge to Azure Event Grid](tutorial-mqtt-bridge.md) to verify the dataflow is working.
+
+### Export dataflow configuration
+
+To export the dataflow configuration, you can use the operations experience portal or export the Dataflow custom resource.
+
+# [Portal](#tab/portal)
+
+Select the dataflow you want to export and select **Export** from the toolbar.
++
+# [Kubernetes](#tab/kubernetes)
+
+```bash
+kubectl get dataflow my-dataflow -o yaml > my-dataflow.yaml
``` +
iot-operations Overview Dataflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/connect-to-cloud/overview-dataflow.md
The configuration is specified by using Kubernetes CRDs. Based on this configura
By using dataflows, you can efficiently manage your data paths. You can ensure that data is accurately sent, transformed, and enriched to meet your operational needs.
+## Schema registry
+
+Schema registry, a feature provided by Azure Device Registry Preview, is a synchronized repository in the cloud and at the edge. The schema registry stores the definitions of messages coming from edge assets, and then exposes an API to access those schemas at the edge. Southbound connectors like the OPC UA connector can create message schemas and add them to the schema registry or customers can upload schemas to the operations experience web UI.
+
+Dataflows use message schemas at both the source and destination points. For sources, message schemas can work as filters to identify the specific messages that you want to capture for a dataflow. For destinations, message schemas help to transform the message into the format expected by the destination endpoint.
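+
+As a rough sketch, a dataflow might reference one schema to deserialize source messages and another to serialize data for a storage destination. The field names follow the dataflow configuration examples in the how-to articles, and the schema references and endpoint names here are illustrative:
+
+```yaml
+operations:
+- operationType: Source
+  sourceSettings:
+    endpointRef: mq
+    serializationFormat: Json
+    schemaRef: aio-sr://exampleNamespace/exampleInputSchema:1.0.0     # deserialize source messages
+- operationType: builtInTransformation
+  builtInTransformationSettings:
+    serializationFormat: Parquet
+    schemaRef: aio-sr://exampleNamespace/exampleOutputSchema:1.0.0    # serialize for the destination
+- operationType: Destination
+  destinationSettings:
+    endpointRef: adls
+    dataDestination: telemetryTable
+```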
+
+For more information, see [Understand message schemas](./concept-schema-registry.md).
+ ## Related content - [Quickstart: Send asset telemetry to the cloud by using a dataflow](../get-started-end-to-end-sample/quickstart-upload-telemetry-to-cloud.md)
iot-operations Tutorial Mqtt Bridge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/connect-to-cloud/tutorial-mqtt-bridge.md
+
+ Title: Bi-directional MQTT bridge to Azure Event Grid
+description: Learn how to create a bi-directional MQTT bridge to Azure Event Grid using Azure IoT Operations dataflows.
++++ Last updated : 10/01/2024+
+#CustomerIntent: As an operator, I want to understand how to create a bi-directional MQTT bridge to Azure Event Grid so that I can send and receive messages between devices and services.
++
+# Tutorial: Bi-directional MQTT bridge to Azure Event Grid
++
+In this tutorial, you set up a bi-directional MQTT bridge between an Azure IoT Operations MQTT broker and Azure Event Grid. To keep the tutorial simple, you use the default settings for the Azure IoT Operations MQTT broker and Azure Event Grid endpoints, and no transformation is applied.
+
+## Prerequisites
+
+- **Azure IoT Operations**. See [Deploy Azure IoT Operations Preview](../deploy-iot-ops/howto-deploy-iot-operations.md).
+- **Dataflow profile**. See [Configure dataflow profile](howto-configure-dataflow-profile.md).
+
+## Set environment variables
+
+Sign in with Azure CLI:
+
+```azurecli
+az login
+```
+
+Set environment variables for the rest of the setup. Replace values in `<>` with valid values or names of your choice. A new Azure Event Grid namespace and topic space are created in your Azure subscription based on the names you provide:
+
+```azurecli
+# For this tutorial, the steps assume the IoT Operations cluster and the Event Grid
+# are in the same subscription, resource group, and location.
+
+# Name of the resource group of Azure Event Grid and IoT Operations cluster
+export RESOURCE_GROUP=<RESOURCE_GROUP_NAME>
+
+# Azure region of Azure Event Grid and IoT Operations cluster
+export LOCATION=<LOCATION>
+
+# Name of the Azure Event Grid namespace
+export EVENT_GRID_NAMESPACE=<EVENT_GRID_NAMESPACE>
+
+# Name of the Arc-enabled IoT Operations cluster
+export CLUSTER_NAME=<CLUSTER_NAME>
+
+# Subscription ID of Azure Event Grid and IoT Operations cluster
+export SUBSCRIPTION_ID=<SUBSCRIPTION_ID>
+```
+
+## Create Event Grid namespace with MQTT broker enabled
+
+[Create Event Grid namespace](../../event-grid/create-view-manage-namespaces.md) with Azure CLI. The location should be the same as the one you used to deploy Azure IoT Operations.
+
+```azurecli
+az eventgrid namespace create \
+ --namespace-name $EVENT_GRID_NAMESPACE \
+ --resource-group $RESOURCE_GROUP \
+ --location $LOCATION \
+ --topic-spaces-configuration "{state:Enabled,maximumClientSessionsPerAuthenticationName:3}"
+```
+
+By setting the `topic-spaces-configuration`, this command creates a namespace with:
+
+* MQTT broker **enabled**
+* Maximum client sessions per authentication name as **3**.
+
+The max client sessions option allows Azure IoT Operations MQTT to spawn multiple instances and still connect. To learn more, see [multi-session support](../../event-grid/mqtt-establishing-multiple-sessions-per-client.md).
+
+## Create a topic space
+
+In the Event Grid namespace, create a topic space named `tutorial` with a topic template `telemetry/#`.
+
+```azurecli
+az eventgrid namespace topic-space create \
+ --resource-group $RESOURCE_GROUP \
+ --namespace-name $EVENT_GRID_NAMESPACE \
+ --name tutorial \
+ --topic-templates "telemetry/#"
+```
+
+By using the `#` wildcard in the topic template, you can publish to any topic under the `telemetry` topic space. For example, `telemetry/temperature` or `telemetry/humidity`.
+
+## Give Azure IoT Operations access to the Event Grid topic space
+
+Using Azure CLI, find the principal ID for the Azure IoT Operations Arc extension. The command stores the principal ID in a variable for later use.
+
+```azurecli
+export PRINCIPAL_ID=$(az k8s-extension list \
+ --resource-group $RESOURCE_GROUP \
+  --cluster-name $CLUSTER_NAME \
+ --cluster-type connectedClusters \
+ --query "[?extensionType=='microsoft.iotoperations'].identity.principalId | [0]" -o tsv)
+echo $PRINCIPAL_ID
+```
+
+Take note of the principal ID value that the `echo` command displays, which is a GUID with the following format:
+
+```output
+d84481ae-9181-xxxx-xxxx-xxxxxxxxxxxx
+```
+
+Then, use Azure CLI to assign publisher and subscriber roles to Azure IoT Operations MQTT for the topic space you created.
+
+Assign the publisher role:
+
+```azurecli
+az role assignment create \
+ --assignee $PRINCIPAL_ID \
+ --role "EventGrid TopicSpaces Publisher" \
+ --scope /subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.EventGrid/namespaces/$EVENT_GRID_NAMESPACE/topicSpaces/tutorial
+```
+
+Assign the subscriber role:
+
+```azurecli
+az role assignment create \
+ --assignee $PRINCIPAL_ID \
+ --role "EventGrid TopicSpaces Subscriber" \
+ --scope /subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.EventGrid/namespaces/$EVENT_GRID_NAMESPACE/topicSpaces/tutorial
+```
+
+> [!TIP]
+> The scope matches the `id` of the topic space you created with `az eventgrid namespace topic-space create` in the previous step, and you can find it in the output of the command.
+
+## Event Grid MQTT broker hostname
+
+Use Azure CLI to get the Event Grid MQTT broker hostname.
+
+```azurecli
+az eventgrid namespace show \
+ --resource-group $RESOURCE_GROUP \
+ --namespace-name $EVENT_GRID_NAMESPACE \
+ --query topicSpacesConfiguration.hostname \
+ -o tsv
+```
+
+Take note of the output value for `topicSpacesConfiguration.hostname`, which is a hostname that looks like:
+
+```output
+example.region-1.ts.eventgrid.azure.net
+```
+
+## Create an Azure IoT Operations MQTT broker dataflow endpoint
+
+# [Bicep](#tab/bicep)
+
+The dataflow and dataflow endpoints for the MQTT broker and Azure Event Grid can be deployed as standard Azure resources since they have Azure Resource Provider (RP) implementations. The Bicep template file from [Bicep File for MQTT-bridge dataflow Tutorial](https://gist.github.com/david-emakenemi/7a72df52c2e7a51d2424f36143b7da85) deploys the necessary dataflows and dataflow endpoints.
+
+Download the file to your local machine, and replace the values for `customLocationName`, `aioInstanceName`, and `eventGridHostName` with your own.
+
+Next, execute the following command in your terminal:
+
+```azurecli
+az stack group create --name MyDeploymentStack --resource-group $RESOURCE_GROUP --template-file /workspaces/explore-iot-operations/mqtt-bridge.bicep --action-on-unmanage 'deleteResources' --deny-settings-mode 'none' --yes
+```
+The template defines the following dataflow endpoint for the Azure IoT Operations MQTT broker. This endpoint is the source for the dataflow that sends messages to Azure Event Grid.
+
+```bicep
+resource MqttBrokerDataflowEndpoint 'Microsoft.IoTOperations/instances/dataflowEndpoints@2024-08-15-preview' = {
+ parent: aioInstance
+ name: 'aiomq'
+ extendedLocation: {
+ name: customLocation.id
+ type: 'CustomLocation'
+ }
+ properties: {
+ endpointType: 'Mqtt'
+ mqttSettings: {
+ authentication: {
+ method: 'ServiceAccountToken'
+ serviceAccountTokenSettings: {
+ audience: 'aio-internal'
+ }
+ }
+ host: 'aio-broker:18883'
+ tls: {
+ mode: 'Enabled'
+ trustedCaCertificateConfigMapRef: 'azure-iot-operations-aio-ca-trust-bundle'
+ }
+ }
+ }
+}
+```
+
+# [Kubernetes](#tab/kubernetes)
+
+Create a dataflow endpoint for the Azure IoT Operations built-in MQTT broker. This endpoint is the source for the dataflow that sends messages to Azure Event Grid.
+
+```yaml
+apiVersion: connectivity.iotoperations.azure.com/v1beta1
+kind: DataflowEndpoint
+metadata:
+ name: mq
+ namespace: azure-iot-operations
+spec:
+ endpointType: Mqtt
+ mqttSettings:
+ authentication:
+ method: ServiceAccountToken
+ serviceAccountTokenSettings: {}
+```
+++
+This is the default configuration for the Azure IoT Operations MQTT broker endpoint. The authentication method is set to `ServiceAccountToken` to use the built-in service account token for authentication.
+
+## Create an Azure Event Grid dataflow endpoint
+
+# [Bicep](#tab/bicep)
+
+You already deployed this endpoint as part of the template in the previous section, so no additional deployment is needed. This endpoint is the destination for the dataflow that sends messages to Azure Event Grid. In the template, replace the `host` value with the Event Grid MQTT broker hostname you got from the previous step, and include the port number `8883`.
+
+```bicep
+resource remoteMqttBrokerDataflowEndpoint 'Microsoft.IoTOperations/instances/dataflowEndpoints@2024-08-15-preview' = {
+ parent: aioInstance
+ name: 'eventgrid'
+ extendedLocation: {
+ name: customLocation.id
+ type: 'CustomLocation'
+ }
+ properties: {
+ endpointType: 'Mqtt'
+ mqttSettings: {
+ authentication: {
+ method: 'SystemAssignedManagedIdentity'
+ systemAssignedManagedIdentitySettings: {}
+ }
+ host: '<NAMESPACE>.<REGION>-1.ts.eventgrid.azure.net:8883'
+ tls: {
+ mode: 'Enabled'
+ }
+ }
+ }
+}
+```
+
+# [Kubernetes](#tab/kubernetes)
+
+Create a dataflow endpoint for Azure Event Grid. This endpoint is the destination for the dataflow that sends messages to Azure Event Grid. Replace `<EVENT-GRID-HOSTNAME>` with the hostname you got from the previous step, and include the port number `8883`.
+
+```yaml
+apiVersion: connectivity.iotoperations.azure.com/v1beta1
+kind: DataflowEndpoint
+metadata:
+ name: eventgrid
+ namespace: azure-iot-operations
+spec:
+ endpointType: Mqtt
+ mqttSettings:
+ host: <EVENT-GRID-HOSTNAME>:8883
+ authentication:
+ method: SystemAssignedManagedIdentity
+ systemAssignedManagedIdentitySettings: {}
+ tls:
+ mode: Enabled
+```
+++
+Here, the authentication method is set to `SystemAssignedManagedIdentity` to use the managed identity of the Azure IoT Operations extension to authenticate with the Event Grid MQTT broker. This setting works because the Azure IoT Operations extension has the necessary permissions to publish and subscribe to the Event Grid topic space configured through Azure RBAC roles. Notice that no secrets, like username or password, are needed in the configuration.
+
+Since the Event Grid MQTT broker requires TLS, the `tls` setting is enabled. No need to provide a trusted CA certificate, as the Event Grid MQTT broker uses a widely trusted certificate authority.
+
+## Create dataflows
+
+# [Bicep](#tab/bicep)
+
+In this example, there are two dataflows with the Azure IoT Operations MQTT broker endpoint as the source and the Azure Event Grid endpoint as the destination, and vice versa. No need to configure transformation.
+
+```bicep
+resource dataflow_1 'Microsoft.IoTOperations/instances/dataflowProfiles/dataflows@2024-08-15-preview' = {
+ parent: defaultDataflowProfile
+ name: 'local-to-remote'
+ extendedLocation: {
+ name: customLocation.id
+ type: 'CustomLocation'
+ }
+ properties: {
+ mode: 'Enabled'
+ operations: [
+ {
+ operationType: 'Source'
+ sourceSettings: {
+ endpointRef: MqttBrokerDataflowEndpoint.name
+ dataSources: array('tutorial/local')
+ }
+ }
+ {
+ operationType: 'Destination'
+ destinationSettings: {
+ endpointRef: remoteMqttBrokerDataflowEndpoint.name
+ dataDestination: 'telemetry/iot-mq'
+ }
+ }
+ ]
+ }
+}
+```
+
+```bicep
+resource dataflow_2 'Microsoft.IoTOperations/instances/dataflowProfiles/dataflows@2024-08-15-preview' = {
+ parent: defaultDataflowProfile
+ name: 'remote-to-local'
+ extendedLocation: {
+ name: customLocation.id
+ type: 'CustomLocation'
+ }
+ properties: {
+ mode: 'Enabled'
+ operations: [
+ {
+ operationType: 'Source'
+ sourceSettings: {
+ endpointRef: remoteMqttBrokerDataflowEndpoint.name
+ dataSources: array('telemetry/#')
+ }
+ }
+ {
+ operationType: 'Destination'
+ destinationSettings: {
+ endpointRef: MqttBrokerDataflowEndpoint.name
+ dataDestination: 'tutorial/cloud'
+ }
+ }
+ ]
+ }
+}
+```
+
+# [Kubernetes](#tab/kubernetes)
+
+Create two dataflows with the Azure IoT Operations MQTT broker endpoint as the source and the Azure Event Grid endpoint as the destination, and vice versa. No need to configure transformation.
+
+```yaml
+apiVersion: connectivity.iotoperations.azure.com/v1beta1
+kind: Dataflow
+metadata:
+ name: local-to-remote
+ namespace: azure-iot-operations
+spec:
+ operations:
+ - operationType: Source
+ sourceSettings:
+ endpointRef: mq
+ dataSources:
+ - tutorial/local
+ - operationType: Destination
+ destinationSettings:
+ endpointRef: eventgrid
+ dataDestination: telemetry/iot-mq
+---
+apiVersion: connectivity.iotoperations.azure.com/v1beta1
+kind: Dataflow
+metadata:
+ name: remote-to-local
+ namespace: azure-iot-operations
+spec:
+ operations:
+ - operationType: Source
+ sourceSettings:
+ endpointRef: eventgrid
+ dataSources:
+ - telemetry/#
+ - operationType: Destination
+ destinationSettings:
+ endpointRef: mq
+ dataDestination: tutorial/cloud
+```
+++
+Together, the two dataflows form an MQTT bridge, where you:
+
+* Use the Event Grid MQTT broker as the remote broker
+* Use the local Azure IoT Operations MQTT broker as the local broker
+* Use TLS for both remote and local brokers
+* Use system-assigned managed identity for authentication to the remote broker
+* Use Kubernetes service account for authentication to the local broker
+* Use the topic map to map the `tutorial/local` topic to the `telemetry/iot-mq` topic on the remote broker
+* Use the topic map to map the `telemetry/#` topic on the remote broker to the `tutorial/cloud` topic on the local broker
+
+When you publish to the `tutorial/local` topic on the local Azure IoT Operations MQTT broker, the message is bridged to the `telemetry/iot-mq` topic on the remote Event Grid MQTT broker. Then, the message is bridged back to the `tutorial/cloud` topic (because the `telemetry/#` wildcard topic captures it) on the local Azure IoT Operations MQTT broker. Similarly, when you publish to the `telemetry/iot-mq` topic on the remote Event Grid MQTT broker, the message is bridged to the `tutorial/cloud` topic on the local Azure IoT Operations MQTT broker.
+
+## Deploy MQTT client
+
+To verify the MQTT bridge is working, deploy an MQTT client to the same namespace as Azure IoT Operations. In a new file named `client.yaml`, specify the client deployment:
++
+<!-- TODO: put this in the explore-iot-operations repo? -->
+<!-- TODO: make the service account part of the YAML? -->
+
+# [Bicep](#tab/bicep)
+
+Deploying the MQTT client isn't applicable to Bicep. Use the Kubernetes tab instead.
+
+# [Kubernetes](#tab/kubernetes)
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+ name: mqtt-client
+ # Namespace must match MQTT broker BrokerListener's namespace
+ # Otherwise use the long hostname: aio-broker.azure-iot-operations.svc.cluster.local
+ namespace: azure-iot-operations
+spec:
+ # Use the "mqtt-client" service account which comes with default deployment
+ # Otherwise create it with `kubectl create serviceaccount mqtt-client -n azure-iot-operations`
+ serviceAccountName: mqtt-client
+ containers:
+ # Mosquitto and mqttui on Alpine
+ - image: alpine
+ name: mqtt-client
+ command: ["sh", "-c"]
+ args: ["apk add mosquitto-clients mqttui && sleep infinity"]
+ volumeMounts:
+ - name: mq-sat
+ mountPath: /var/run/secrets/tokens
+ - name: trust-bundle
+ mountPath: /var/run/certs
+ volumes:
+ - name: mq-sat
+ projected:
+ sources:
+ - serviceAccountToken:
+ path: mq-sat
+ audience: aio-internal # Must match audience in BrokerAuthentication
+ expirationSeconds: 86400
+ - name: trust-bundle
+ configMap:
+ name: aio-ca-trust-bundle-test-only # Default root CA cert
+```
+
+Apply the deployment file with kubectl.
+
+```bash
+kubectl apply -f client.yaml
+```
+
+```output
+pod/mqtt-client created
+```
+++
+## Start a subscriber
+
+Use `kubectl exec` to start a shell in the mosquitto client pod.
+
+```bash
+kubectl exec --stdin --tty mqtt-client -n azure-iot-operations -- sh
+```
+
+Inside the shell, start a subscriber to the Azure IoT Operations broker on the `tutorial/#` topic space with `mosquitto_sub`.
+
+```bash
+mosquitto_sub --host aio-broker --port 18883 \
+ -t "tutorial/#" \
+ --debug --cafile /var/run/certs/ca.crt \
+ -D CONNECT authentication-method 'K8S-SAT' \
+ -D CONNECT authentication-data $(cat /var/run/secrets/tokens/mq-sat)
+```
+
+Leave the command running and open a new terminal window.
+
+## Publish MQTT messages to the cloud via the bridge
+
+In a new terminal window, start another shell in the mosquitto client pod.
+
+```bash
+kubectl exec --stdin --tty mqtt-client -n azure-iot-operations -- sh
+```
+
+Inside the shell, use mosquitto to publish five messages to the `tutorial/local` topic.
+
+```bash
+mosquitto_pub -h aio-broker -p 18883 \
+ -m "This message goes all the way to the cloud and back!" \
+ -t "tutorial/local" \
+ --repeat 5 --repeat-delay 1 -d \
+ --debug --cafile /var/run/certs/ca.crt \
+ -D CONNECT authentication-method 'K8S-SAT' \
+ -D CONNECT authentication-data $(cat /var/run/secrets/tokens/mq-sat)
+```
+
+## View the messages in the subscriber
+
+In the subscriber shell, you see the messages you published.
+
+<!-- TODO: add actual mosquitto output -->
+
+Here, you see that the messages are published to the `tutorial/local` topic on the local Azure IoT Operations broker, bridged to the Event Grid MQTT broker, and then bridged back to the local Azure IoT Operations broker on the `tutorial/cloud` topic. The messages are then delivered to the subscriber. In this example, the round trip time is about 80 ms.
+
+## Check Event Grid metrics to verify message delivery
+
+You can also check the Event Grid metrics to verify the messages are delivered to the Event Grid MQTT broker. In the Azure portal, go to the Event Grid namespace you created and select **Metrics** > **MQTT: Successful Published Messages**. You should see the number of messages published and delivered increase as you publish messages to the local Azure IoT Operations broker.
++
+> [!TIP]
+> You can check the configurations of dataflows, QoS, and message routes with the [CLI extension](/cli/azure/iot/ops#az-iot-ops-check-examples) `az iot ops check --detail-level 2`.
+
+## Next steps
+
+In this tutorial, you learned how to configure Azure IoT Operations for a bi-directional MQTT bridge with the Azure Event Grid MQTT broker. As next steps, explore the following scenarios:
+
+* To use an MQTT client to publish messages directly to the Event Grid MQTT broker, see [Publish MQTT messages to Event Grid MQTT broker](../../event-grid/mqtt-publish-and-subscribe-cli.md). Give the client a [publisher permission binding](../../event-grid/mqtt-access-control.md) to the topic space you created, and you can publish messages to any topic under the `telemetry` topic space, like `telemetry/temperature` or `telemetry/humidity`. All of these messages are bridged to the `tutorial/cloud` topic on the local Azure IoT Operations broker.
+* To set up routing rules for the Event Grid MQTT broker, see [Configure routing rules for Event Grid MQTT broker](../../event-grid/mqtt-routing.md). You can use routing rules to route messages to different topics based on the topic name, or to filter messages based on the message content.
+
+## Related content
+
+* About [BrokerListener resource](../manage-mqtt-broker/howto-configure-brokerlistener.md)
+* [Configure authorization for a BrokerListener](../manage-mqtt-broker/howto-configure-authorization.md)
+* [Configure authentication for a BrokerListener](../manage-mqtt-broker/howto-configure-authentication.md)
+* [Configure TLS with automatic certificate management](../manage-mqtt-broker/howto-configure-tls-auto.md)
iot-operations Concept About State Store Protocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/create-edge-apps/concept-about-state-store-protocol.md
To communicate with the state store, clients must meet the following requirement
- Use QoS 1 (Quality of Service level 1). QoS 1 is described in the [MQTT 5 specification](https://docs.oasis-open.org/mqtt/mqtt/v5.0/os/mqtt-v5.0-os.html#_Toc3901236). - Have a clock that is within one minute of the MQTT broker's clock.
-To communicate with the state store, clients must `PUBLISH` requests to the system topic `statestore/FA9AE35F-2F64-47CD-9BFF-08E2B32A0FE8/command/invoke`. Because the state store is part of Azure IoT Operations, it does an implicit `SUBSCRIBE` to this topic on startup.
+To communicate with the state store, clients must `PUBLISH` requests to the system topic `statestore/v1/FA9AE35F-2F64-47CD-9BFF-08E2B32A0FE8/command/invoke`. Because the state store is part of Azure IoT Operations, it does an implicit `SUBSCRIBE` to this topic on startup.
To build a request, the following MQTT5 properties are required. If these properties aren't present or the request isn't of type QoS 1, the request fails. -- [Response Topic](https://docs.oasis-open.org/mqtt/mqtt/v5.0/os/mqtt-v5.0-os.html#_Request_/_Response). The state store responds to the initial request using this value. As a best practice, format the response topic as `clients/{clientId}/services/statestore/_any_/command/invoke/response`. Setting the response topic as `statestore/FA9AE35F-2F64-47CD-9BFF-08E2B32A0FE8/command/invoke` or as one that begins with `clients/statestore/FA9AE35F-2F64-47CD-9BFF-08E2B32A0FE8` is not permitted on a state store request. The state store disconnects MQTT clients that use an invalid response topic.
+- [Response Topic](https://docs.oasis-open.org/mqtt/mqtt/v5.0/os/mqtt-v5.0-os.html#_Request_/_Response). The state store responds to the initial request using this value. As a best practice, format the response topic as `clients/{clientId}/services/statestore/_any_/command/invoke/response`. Setting the response topic as `statestore/v1/FA9AE35F-2F64-47CD-9BFF-08E2B32A0FE8/command/invoke` or as one that begins with `clients/statestore/v1/FA9AE35F-2F64-47CD-9BFF-08E2B32A0FE8` is not permitted on a state store request. The state store disconnects MQTT clients that use an invalid response topic.
- [Correlation Data](https://docs.oasis-open.org/mqtt/mqtt/v5.0/os/mqtt-v5.0-os.html#_Correlation_Data). When the state store sends a response, it includes the correlation data of the initial request. The following diagram shows an expanded view of the request and response:
The following diagram shows an expanded view of the request and response:
<!-- sequenceDiagram
- Client->>+State Store:Request<BR>PUBLISH statestore/FA9AE35F-2F64-47CD-9BFF-08E2B32A0FE8/command/invoke<BR>Response Topic:client-defined-response-topic<BR>Correlation Data:1234<BR>Payload(RESP3)
+ Client->>+State Store:Request<BR>PUBLISH statestore/v1/FA9AE35F-2F64-47CD-9BFF-08E2B32A0FE8/command/invoke<BR>Response Topic:client-defined-response-topic<BR>Correlation Data:1234<BR>Payload(RESP3)
Note over State Store,Client: State Store Processes Request State Store->>+Client: Response<BR>PUBLISH client-defined-response-topic<br>Correlation Data:1234<BR>Payload(RESP3) -->
When a `SET` request succeeds, the state store returns the following payload:
+OK<CR><LF> ```
+If a `SET` request fails because a condition check specified in the NX or NEX set options means the key can't be set, the state store returns the following payload:
+
+```console
+-1<CR><LF>
+```
+ #### `GET` response When a `GET` request is made on a nonexistent key, the state store returns the following payload:
The following output is an example of a successful `DEL` command:
:1<CR><LF> ```
+If a `VDEL` request fails because the specified value doesn't match the value associated with the key, the state store returns the following payload:
+
+```console
+-1<CR><LF>
+```
+
+#### `-ERR` responses
+
+The following is the current list of error strings. Your client application should handle *unknown error* strings to support updates to the state store.
+
+| Error string returned from state store | Explanation |
+|-|-|
+| the requested timestamp is too far in the future; ensure that the client and broker system clocks are synchronized | Unexpected request timestamp because the state store and client clocks aren't in sync. |
+| a fencing token is required for this request | Error occurs if a key is marked with a fencing token, but the client doesn't specify the fencing token. |
+| the requested fencing token timestamp is too far in the future; ensure that the client and broker system clocks are synchronized | Unexpected fencing token timestamp because the state store and client clocks aren't in sync. |
+| the requested fencing token is a lower version that the fencing token protecting the resource | Incorrect requested fencing token version. For more information, see [Versioning and hybrid logical clocks](#versioning-and-hybrid-logical-clocks). |
+| the quota has been exceeded | The state store has a quota of how many keys it can store, which is based on the memory profile of the MQTT broker that's specified. |
+| syntax error | The payload sent doesn't conform to state store's definition. |
+| not authorized | Authorization error |
+| unknown command | Command isn't recognized. |
+| wrong number of arguments | Incorrect number of expected arguments. |
+| missing timestamp | When clients do a `SET`, they must set the MQTT5 user property `__ts` as an HLC representing the request's timestamp. |
+| malformed timestamp | The timestamp in the `__ts` property or the fencing token isn't legal. |
+| the key length is zero | Keys can't be zero length in state store. |
+ ## Versioning and hybrid logical clocks This section describes how the state store handles versioning.
Clients can register with the state store to receive notifications of keys being
### KEYNOTIFY request messages
-State store clients request the state store monitor a given `keyName` for changes by sending a `KEYNOTIFY` message. Just like all state store requests, clients PUBLISH a QoS1 message with this message via MQTT5 to the state store system topic `statestore/FA9AE35F-2F64-47CD-9BFF-08E2B32A0FE8/command/invoke`.
+State store clients request the state store monitor a given `keyName` for changes by sending a `KEYNOTIFY` message. Just like all state store requests, clients PUBLISH a QoS1 message with this message via MQTT5 to the state store system topic `statestore/v1/FA9AE35F-2F64-47CD-9BFF-08E2B32A0FE8/command/invoke`.
The request payload has the following form:
When a `keyName` being monitored via `KEYNOTIFY` is modified or deleted, the sta
The topic is defined in the following example. The `clientId` is an upper-case hex encoded representation of the MQTT ClientId of the client that initiated the `KEYNOTIFY` request and `keyName` is a hex encoded representation of the key that changed. ```console
-clients/statestore/FA9AE35F-2F64-47CD-9BFF-08E2B32A0FE8/{clientId}/command/notify/{keyName}
+clients/statestore/v1/FA9AE35F-2F64-47CD-9BFF-08E2B32A0FE8/{clientId}/command/notify/{keyName}
``` As an example, MQ publishes a `NOTIFY` message sent to `client-id1` with the modified key name `SOMEKEY` to the topic: ```console
-clients/statestore/FA9AE35F-2F64-47CD-9BFF-08E2B32A0FE8/636C69656E742D696431/command/notify/534F4D454B4559`
+clients/statestore/v1/FA9AE35F-2F64-47CD-9BFF-08E2B32A0FE8/636C69656E742D696431/command/notify/534F4D454B4559
``` A client using notifications should `SUBSCRIBE` to this topic and wait for the `SUBACK` to be received *before* sending any `KEYNOTIFY` requests so that no messages are lost.
The following details are included in the message:
* `SET` the value was modified. This operation can only occur as the result of a `SET` command from a state store client. * `DEL` the value was deleted. This operation can occur because of a `DEL` or `VDEL` command from a state store client. * `optionalFields`
- * `VALUE` and `{MODIFIED-VALUE}`. `VALUE` is a string literal indicating that the next field, `{MODIFIED-VALUE}`, contains the value the key was changed to. This value is only sent in response to keys being modified because of a `SET` and is only included if the `KEYNOTIFY` request included the optional `GET` flag.
+ * `VALUE` and `{MODIFIED-VALUE}`. `VALUE` is a string literal indicating that the next field, `{MODIFIED-VALUE}`, contains the value the key was changed to. This value is only sent in response to keys being modified because of a `SET`.
The following example output shows a notification message sent when the key `SOMEKEY` is modified to the value `abc`, with the `VALUE` included because the initial request specified the `GET` option:
iot-operations Howto Deploy Dapr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/create-edge-apps/howto-deploy-dapr.md
To create the yaml file, use the following component definitions:
> | `metadata:annotations:dapr.io/component-container` | Component annotations used by Dapr sidecar injector, defining the image location, volume mounts and logging configuration | > | `spec:type` | [The type of the component](https://docs.dapr.io/operations/components/pluggable-components-registration/#define-the-component), which needs to be declared exactly as shown | > | `spec:metadata:keyPrefix` | Defines the key prefix used when communicating to the statestore backend. See more information, see [Dapr documentation](https://docs.dapr.io/developing-applications/building-blocks/state-management/howto-share-state) for more information |
-> | `spec:metadata:hostname` | The MQTT broker hostname. Default is `aio-mq-dmqtt-frontend` |
-> | `spec:metadata:tcpPort` | The MQTT broker port number. Default is `8883` |
+> | `spec:metadata:hostname` | The MQTT broker hostname. Default is `aio-broker` |
+> | `spec:metadata:tcpPort` | The MQTT broker port number. Default is `18883` |
> | `spec:metadata:useTls` | Define if TLS is used by the MQTT broker. Default is `true` | > | `spec:metadata:caFile` | The certificate chain path for validating the MQTT broker. Required if `useTls` is `true`. This file must be mounted in the pod with the specified volume name | > | `spec:metadata:satAuthFile ` | The Service Account Token (SAT) file is used to authenticate the Dapr components with the MQTT broker. This file must be mounted in the pod with the specified volume name |
To create the yaml file, use the following component definitions:
"image": "ghcr.io/azure/iot-operations-dapr-components:latest", "volumeMounts": [ { "name": "mqtt-client-token", "mountPath": "/var/run/secrets/tokens" },
- { "name": "aio-ca-trust-bundle", "mountPath": "/var/run/certs/aio-mq-ca-cert" }
+ { "name": "aio-ca-trust-bundle", "mountPath": "/var/run/certs/aio-internal-ca-cert" }
], "env": [ { "name": "pubSubLogLevel", "value": "Information" },
To create the yaml file, use the following component definitions:
version: v1 metadata: - name: hostname
- value: aio-mq-dmqtt-frontend
+ value: aio-broker
- name: tcpPort
- value: 8883
+ value: 18883
- name: useTls value: true - name: caFile
- value: /var/run/certs/aio-mq-ca-cert/ca.crt
+ value: /var/run/certs/aio-internal-ca-cert/ca.crt
- name: satAuthFile value: /var/run/secrets/tokens/mqtt-client-token
To create the yaml file, use the following component definitions:
version: v1 metadata: - name: hostname
- value: aio-mq-dmqtt-frontend
+ value: aio-broker
- name: tcpPort
- value: 8883
+ value: 18883
- name: useTls value: true - name: caFile
- value: /var/run/certs/aio-mq-ca-cert/ca.crt
+ value: /var/run/certs/aio-internal-ca-cert/ca.crt
- name: satAuthFile value: /var/run/secrets/tokens/mqtt-client-token ```
iot-operations Howto Develop Dapr Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/create-edge-apps/howto-develop-dapr-apps.md
The following definition components might require customization to your specific
name: dapr-client namespace: azure-iot-operations annotations:
- aio-mq-broker-auth/group: dapr-workload
+ aio-broker-auth/group: dapr-workload
apiVersion: apps/v1 kind: Deployment
The following definition components might require customization to your specific
sources: - serviceAccountToken: path: mqtt-client-token
- audience: aio-mq
+ audience: aio-internal
expirationSeconds: 86400 # Certificate chain for Dapr to validate the MQTT broker
iot-operations Howto Develop Mqttnet Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/create-edge-apps/howto-develop-mqttnet-apps.md
spec:
sources: - serviceAccountToken: path: mqtt-client-token
- audience: aio-mq
+ audience: aio-internal
expirationSeconds: 86400 # Certificate chain for the application to validate the MQTT broker
spec:
- name: mqtt-client-token mountPath: /var/run/secrets/tokens/ - name: aio-ca-trust-bundle
- mountPath: /var/run/certs/aio-mq-ca-cert/
+ mountPath: /var/run/certs/aio-internal-ca-cert/
env: - name: hostname
- value: "aio-mq-dmqtt-frontend"
+ value: "aio-broker"
- name: tcpPort
- value: "8883"
+ value: "18883"
- name: useTls value: "true" - name: caFile
- value: "/var/run/certs/aio-mq-ca-cert/ca.crt"
+ value: "/var/run/certs/aio-internal-ca-cert/ca.crt"
- name: satAuthFile value: "/var/run/secrets/tokens/mqtt-client-token" ```
iot-operations Tutorial Event Driven With Dapr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/create-edge-apps/tutorial-event-driven-with-dapr.md
To start, create a yaml file that uses the following definitions:
| Component | Description | |-|-| | `volumes.mqtt-client-token` | The SAT used for authenticating the Dapr pluggable components with the MQTT broker and State Store |
-| `volumes.aio-mq-ca-cert-chain` | The chain of trust to validate the MQTT broker TLS cert |
+| `volumes.aio-internal-ca-cert-chain` | The chain of trust to validate the MQTT broker TLS cert |
| `containers.mq-event-driven` | The prebuilt Dapr application container. | 1. Save the following deployment yaml to a file named `app.yaml`:
To start, create a yaml file that uses the following definitions:
name: dapr-client namespace: azure-iot-operations annotations:
- aio-mq-broker-auth/group: dapr-workload
+ aio-broker-auth/group: dapr-workload
apiVersion: apps/v1 kind: Deployment
To start, create a yaml file that uses the following definitions:
sources: - serviceAccountToken: path: mqtt-client-token
- audience: aio-mq
+ audience: aio-internal
expirationSeconds: 86400 # Certificate chain for Dapr to validate the MQTT broker
To verify the MQTT bridge is working, deploy an MQTT client to the cluster.
- name: mqtt-client-token mountPath: /var/run/secrets/tokens - name: aio-ca-trust-bundle
- mountPath: /var/run/certs/aio-mq-ca-cert/
+ mountPath: /var/run/certs/aio-internal-ca-cert/
volumes: - name: mqtt-client-token projected: sources: - serviceAccountToken: path: mqtt-client-token
- audience: aio-mq
+ audience: aio-internal
expirationSeconds: 86400 - name: aio-ca-trust-bundle configMap:
To verify the MQTT bridge is working, deploy an MQTT client to the cluster.
1. Subscribe to the `sensor/window_data` topic to observe the published output from the Dapr application: ```bash
- mosquitto_sub -L mqtt://aio-mq-dmqtt-frontend/sensor/window_data
+ mosquitto_sub -L mqtt://aio-broker/sensor/window_data
``` 1. Verify the application is outputting a sliding windows calculation for the various sensors every 10 seconds:
iot-operations Concept Default Root Ca https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/deploy-iot-ops/concept-default-root-ca.md
+
+ Title: Certificate management for Azure IoT Operations Preview internal communication
+description: Azure IoT Operations Preview uses TLS to encrypt communication. Learn about the default setup and also how to bring your own CA for production.
++++ Last updated : 10/01/2024+
+#CustomerIntent: As an operator, I want to configure Azure IoT Operations components to use TLS so that I have secure communication between all components.
++
+# Certificate management for Azure IoT Operations Preview internal communication
+
+All communication within Azure IoT Operations Preview is encrypted using TLS. To help you get started, Azure IoT Operations is deployed with a default root CA and issuer for TLS server certificates. You can use the default setup for development and testing purposes. For a production deployment, we recommend using your own CA issuer and an enterprise PKI solution.
+
+## Default root CA and issuer for TLS server certificates
+
+To help you get started, Azure IoT Operations Preview is deployed with a default root CA and issuer for TLS server certificates. You can use this issuer for development and testing. Azure IoT Operations uses [cert-manager](https://cert-manager.io/docs/) to manage TLS certificates, and [trust-manager](https://cert-manager.io/docs/trust/) to distribute trust bundles to components.
+
+* The CA certificate is self-signed and not trusted by any clients outside of Azure IoT Operations. The subject of the CA certificate is `CN=Azure IoT Operations Quickstart Root CA - Not for Production`. The CA certificate is automatically rotated by cert-manager.
+
+* The root CA certificate is stored in a Kubernetes secret called `azure-iot-operations-aio-ca-certificate` under the `cert-manager` namespace.
+
+* The public portion of the root CA certificate is stored in a ConfigMap called `azure-iot-operations-aio-ca-trust-bundle` under the `azure-iot-operations` namespace. You can retrieve the CA certificate from the ConfigMap and inspect it with kubectl and openssl. The ConfigMap is kept updated by trust-manager when the CA certificate is rotated by cert-manager.
+
+ ```bash
+ kubectl get configmap azure-iot-operations-aio-ca-trust-bundle -n azure-iot-operations -o "jsonpath={.data['ca\.crt']}" | openssl x509 -text -noout
+ ```
+
+ ```Output
+ Certificate:
+ Data:
+ Version: 3 (0x2)
+ Serial Number:
+ <SERIAL-NUMBER>
+ Signature Algorithm: sha256WithRSAEncryption
+ Issuer: O=Microsoft, CN=Azure IoT Operations Quickstart Root CA - Not for Production
+ Validity
+ Not Before: Sep 18 20:42:19 2024 GMT
+ Not After : Sep 18 20:42:19 2025 GMT
+ Subject: O=Microsoft, CN=Azure IoT Operations Quickstart Root CA - Not for Production
+ Subject Public Key Info:
+ Public Key Algorithm: rsaEncryption
+ Public-Key: (2048 bit)
+ Modulus: <MODULUS>
+ Exponent: 65537 (0x10001)
+ X509v3 extensions:
+ X509v3 Key Usage: critical
+ Certificate Sign, CRL Sign
+ X509v3 Basic Constraints: critical
+ CA:TRUE
+ X509v3 Subject Key Identifier:
+ <SUBJECT-KEY-IDENTIFIER>
+ Signature Algorithm: sha256WithRSAEncryption
+ [Signature]
+ ```
+
+* By default, there's already a CA issuer configured in the `azure-iot-operations` namespace called `azure-iot-operations-aio-certificate-issuer`. It's used as the common CA issuer for all TLS server certificates for IoT Operations. The MQTT broker uses an issuer created from the same CA certificate to issue TLS server certificates for the default TLS listener on port 18883. You can inspect the issuer with the following command:
+
+ ```bash
+ kubectl get clusterissuer azure-iot-operations-aio-certificate-issuer -o yaml
+ ```
+
+ ```Output
+ apiVersion: cert-manager.io/v1
+ kind: ClusterIssuer
+ metadata:
+ creationTimestamp: "2024-09-18T20:42:17Z"
+ generation: 1
+ name: azure-iot-operations-aio-certificate-issuer
+ resourceVersion: "36665"
+ uid: 592700a6-95e0-4788-99e4-ea93934bd330
+ spec:
+ ca:
+ secretName: azure-iot-operations-aio-ca-certificate
+ status:
+ conditions:
+ - lastTransitionTime: "2024-09-18T20:42:22Z"
+ message: Signing CA verified
+ observedGeneration: 1
+ reason: KeyPairVerified
+ status: "True"
+ type: Ready
+ ```
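+
+For a production deployment, you would instead point an issuer at your own CA. The following is a minimal sketch of a cert-manager `ClusterIssuer` backed by a CA key pair stored as a Kubernetes secret; the issuer and secret names are illustrative, and your enterprise PKI solution might use a different issuer type.
+
+```yaml
+apiVersion: cert-manager.io/v1
+kind: ClusterIssuer
+metadata:
+  name: my-enterprise-ca-issuer            # illustrative name
+spec:
+  ca:
+    secretName: my-enterprise-ca-key-pair  # Kubernetes TLS secret holding your CA certificate and key
+```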
iot-operations Howto Deploy Iot Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/deploy-iot-ops/howto-deploy-iot-operations.md
Title: Deploy Azure IoT Operations to a cluster
-description: Use the Azure CLI to deploy Azure IoT Operations to an Arc-enabled Kubernetes cluster.
+description: Use the Azure CLI or Azure portal to deploy Azure IoT Operations to an Arc-enabled Kubernetes cluster.
Previously updated : 08/02/2024 Last updated : 10/02/2024 #CustomerIntent: As an OT professional, I want to deploy Azure IoT Operations to a Kubernetes cluster.
Last updated 08/02/2024
[!INCLUDE [public-preview-note](../includes/public-preview-note.md)]
-Deploy Azure IoT Operations Preview to a Kubernetes cluster using the Azure CLI. Once you have Azure IoT Operations deployed, then you can manage and deploy other workloads to your cluster.
+Learn how to deploy Azure IoT Operations Preview to a Kubernetes cluster using the Azure CLI or Azure portal.
+
+In this article, we discuss Azure IoT Operations *deployments* and *instances*, which are two different concepts:
* An Azure IoT Operations *deployment* describes all of the components and resources that enable the Azure IoT Operations scenario. These components and resources include: * An Azure IoT Operations instance * Arc extensions * Custom locations * Resource sync rules
- * Resources that you can configure in your Azure IoT Operations solution, like assets, MQTT broker, and dataflows.
+ * Resources that you can configure in your Azure IoT Operations solution, like assets and asset endpoints.
-* An Azure IoT Operations *instance* is one part of a deployment. It's the parent resource that bundles the suite of services that are defined in [What is Azure IoT Operations Preview?](../overview-iot-operations.md), like MQ, Akri, and OPC UA connector.
+* An Azure IoT Operations *instance* is the parent resource that bundles the suite of services that are defined in [What is Azure IoT Operations Preview?](../overview-iot-operations.md) like MQTT broker, dataflows, and OPC UA connector.
-In this article, when we talk about deploying Azure IoT Operations we mean the full set of components that make up a *deployment*. Once the deployment exists, you can view, manage, and update the *instance*.
+When we talk about deploying Azure IoT Operations, we mean the full set of components that make up a *deployment*. Once the deployment exists, you can view, manage, and update the *instance*.
## Prerequisites
Cloud resources:
* An Azure subscription.
-* Azure access permissions. At a minimum, have **Contributor** permissions in your Azure subscription. Depending on the deployment feature flag status you select, you might also need **Microsoft/Authorization/roleAssignments/write** permissions for the resource group that contains your Arc-enabled Kubernetes cluster. You can make a custom role in Azure role-based access control or assign a built-in role that grants this permission. For more information, see [Azure built-in roles for General](../../role-based-access-control/built-in-roles/general.md).
-
- If you *don't* have role assignment write permissions, you can still deploy Azure IoT Operations by disabling some features. This approach is discussed in more detail in the [Deploy](#deploy) section of this article.
-
- * In the Azure CLI, use the [az role assignment create](/cli/azure/role/assignment#az-role-assignment-create) command to give permissions. For example, `az role assignment create --assignee sp_name --role "Role Based Access Control Administrator" --scope subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/MyResourceGroup`
-
- * In the Azure portal, when you assign privileged admin roles to a user or principal, you can restrict access using conditions. For this scenario, select the **Allow user to assign all roles** condition in the **Add role assignment** page.
-
- :::image type="content" source="./media/howto-deploy-iot-operations/add-role-assignment-conditions.png" alt-text="Screenshot that shows assigning users highly privileged role access in the Azure portal.":::
-
-* An Azure key vault that has the **Permission model** set to **Vault access policy**. You can check this setting in the **Access configuration** section of an existing key vault. To create a new key vault, use the [az keyvault create](/cli/azure/keyvault#az-keyvault-create) command:
+* An Azure key vault. To create a new key vault, use the [az keyvault create](/cli/azure/keyvault#az-keyvault-create) command:
```azurecli
- az keyvault create --enable-rbac-authorization false --name "<KEYVAULT_NAME>" --resource-group "<RESOURCE_GROUP>"
+ az keyvault create --enable-rbac-authorization --name "<NEW_KEYVAULT_NAME>" --resource-group "<RESOURCE_GROUP>"
```
+* Azure access permissions. For more information, see [Deployment details > Required permissions](overview-deploy.md#required-permissions).
+ Development resources:
-* Azure CLI installed on your development machine. For more information, see [How to install the Azure CLI](/cli/azure/install-azure-cli). This scenario requires Azure CLI version 2.53.0 or higher. Use `az --version` to check your version and `az upgrade` to update if necessary.
+* Azure CLI installed on your development machine. This scenario requires Azure CLI version 2.64.0 or higher. Use `az --version` to check your version and `az upgrade` to update if necessary. For more information, see [How to install the Azure CLI](/cli/azure/install-azure-cli).
* The Azure IoT Operations extension for Azure CLI. Use the following command to add the extension or update it to the latest version:
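As used again in the CLI steps later in this article, that command is:

```azurecli
az extension add --upgrade --name azure-iot-ops
```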
A cluster host:
-* An Azure Arc-enabled Kubernetes cluster. If you don't have one, follow the steps in [Prepare your Azure Arc-enabled Kubernetes cluster](./howto-prepare-cluster.md?tabs=wsl-ubuntu).
+* An Azure Arc-enabled Kubernetes cluster with the custom location and workload identity features enabled. If you don't have one, follow the steps in [Prepare your Azure Arc-enabled Kubernetes cluster](./howto-prepare-cluster.md).
- If you deployed Azure IoT Operations to your cluster previously, uninstall those resources before continuing. For more information, see [Update Azure IoT Operations](#update-azure-iot-operations).
+ If you deployed Azure IoT Operations to your cluster previously, uninstall those resources before continuing. For more information, see [Update Azure IoT Operations](./howto-manage-update-uninstall.md#update).
- Azure IoT Operations should work on any CNCF-conformant kubernetes cluster. Currently, Microsoft only supports K3s on Ubuntu Linux and WSL, or AKS Edge Essentials on Windows.
-
- Use the Azure IoT Operations extension for Azure CLI to verify that your cluster host is configured correctly for deployment by using the [verify-host](/cli/azure/iot/ops#az-iot-ops-verify-host) command on the cluster host:
+* Verify that your cluster host is configured correctly for deployment by using the [verify-host](/cli/azure/iot/ops#az-iot-ops-verify-host) command on the cluster host:
    ```azurecli
    az iot ops verify-host
    ```
+* (Optional) Prepare your cluster for observability before deploying Azure IoT Operations: [Configure observability](../configure-observability-monitoring/howto-configure-observability.md).
+
## Deploy

Use the Azure portal or Azure CLI to deploy Azure IoT Operations to your Arc-enabled Kubernetes cluster. The Azure portal deployment experience is a helper tool that generates a deployment command based on your resources and configuration. The final step is to run an Azure CLI command, so you still need the Azure CLI prerequisites described in the previous section.
-### [Azure CLI](#tab/cli)
-
-1. Sign in to Azure CLI interactively with a browser even if you already signed in before. If you don't sign in interactively, you might get an error that says *Your device is required to be managed to access your resource* when you continue to the next step to deploy Azure IoT Operations.
-
- ```azurecli
- az login
- ```
-
- > [!NOTE]
- > If you're using GitHub Codespaces in a browser, `az login` returns a localhost error in the browser window after logging in. To fix, either:
- >
- > * Open the codespace in VS Code desktop, and then run `az login` in the terminal. This opens a browser window where you can log in to Azure.
- > * Or, after you get the localhost error on the browser, copy the URL from the browser and use `curl <URL>` in a new terminal tab. You should see a JSON response with the message "You have logged into Microsoft Azure!".
-
-1. Deploy Azure IoT Operations to your cluster. Use optional flags to customize the [az iot ops init](/cli/azure/iot/ops#az-iot-ops-init) command to fit your scenario.
-
- By default, the `az iot ops init` command takes the following actions, some of which require that the principal signed in to the CLI has elevated permissions:
-
- * Set up a service principal and app registration to give your cluster access to the key vault.
- * Configure TLS certificates.
- * Configure a secrets store on your cluster that connects to the key vault.
- * Deploy the Azure IoT Operations resources.
-
- ```azurecli
- az iot ops init --cluster <CLUSTER_NAME> --resource-group <RESOURCE_GROUP> --kv-id <KEYVAULT_SETTINGS_PROPERTIES_RESOURCE_ID>
- ```
-
- Use the [optional parameters](/cli/azure/iot/ops#az-iot-ops-init-optional-parameters) to customize your deployment, including:
-
- | Parameter | Value | Description |
- | | -- | -- |
- | `--add-insecure-listener` | | Add an insecure 1883 port config to the default listener. *Not for production use*. |
- | `--broker-config-file` | Path to JSON file | Provide a configuration file for the MQTT broker. For more information, see [Advanced MQTT broker config](https://github.com/Azure/azure-iot-ops-cli-extension/wiki/Advanced-Mqtt-Broker-Config) and [Configure core MQTT broker settings](../manage-mqtt-broker/howto-configure-availability-scale.md). |
- | `--disable-rsync-rules` | | Disable the resource sync rules on the deployment feature flag if you don't have **Microsoft.Authorization/roleAssignment/write** permissions in the resource group. |
- | `--name` | String | Provide a name for your Azure IoT Operations instance. Otherwise, a default name is assigned. You can view the `instanceName` parameter in the command output. |
- | `--no-progress` | | Disables the deployment progress display in the terminal. |
- | `--simulate-plc` | | Include the OPC PLC simulator that ships with the OPC UA connector. |
- | `--sp-app-id`,<br>`--sp-object-id`,<br>`--sp-secret` | Service principal app ID, service principal object ID, and service principal secret | Include all or some of these parameters to use an existing service principal, app registration, and secret instead of allowing `init` to create new ones. For more information, see [Configure service principal and Key Vault manually](howto-manage-secrets.md#configure-service-principal-and-key-vault-manually). |
- ### [Azure portal](#tab/portal) 1. In the [Azure portal](https://portal.azure.com), search for and select **Azure IoT Operations**.
The Azure portal deployment experience is a helper tool that generates a deploym
| Parameter | Value |
| -- | -- |
| **Azure IoT Operations name** | *Optional*: Replace the default name for the Azure IoT Operations instance. |
- | **MQTT broker configuration** | *Optional*: Replace the default settings for the MQTT broker. For more information, see [Configure core MQTT broker settings](../manage-mqtt-broker/howto-configure-availability-scale.md). |
+ | **MQTT broker configuration** | *Optional*: Edit the default settings for the MQTT broker. For more information, see [Configure core MQTT broker settings](../manage-mqtt-broker/howto-configure-availability-scale.md). |
+ | **Dataflow profile configuration** | *Optional*: Edit the default settings for dataflows. For more information, see [Configure dataflow profile](../connect-to-cloud/howto-configure-dataflow-profile.md). |
-1. Select **Next: Automation**.
-
-1. On the **Automation** tab, provide the following information:
+ :::image type="content" source="./media/howto-deploy-iot-operations/deploy-configuration.png" alt-text="A screenshot that shows the second tab for deploying Azure IoT Operations from the portal.":::
- | Parameter | Value |
- | | -- |
- | **Subscription** | Select the subscription that contains your Azure key vault. |
- | **Azure Key Vault** | Select your Azure key vault. Or, select **Create new**.<br><br>Ensure that your key vault has **Vault access policy** as its permission model. To check this setting, select **Manage selected vault** > **Settings** > **Access configuration**. |
+1. Select **Next: Dependency management**.
- :::image type="content" source="./media/howto-deploy-iot-operations/deploy-automation.png" alt-text="A screenshot that shows the third tab for deploying Azure IoT Operations from the portal.":::
+1. On the **Dependency management** tab, select an existing schema registry or use these steps to create one:
-1. If you didn't prepare your Azure CLI environment as described in the prerequisites, do so now in a terminal of your choice:
+ 1. Select **Create new**.
- ```azurecli
- az upgrade
- az extension add --upgrade --name azure-iot-ops
- ```
+ 1. Provide a **Schema registry name** and **Schema registry namespace**.
-1. Sign in to Azure CLI interactively with a browser even if you already signed in before. If you don't sign in interactively, you might get an error that says *Your device is required to be managed to access your resource* when you continue to the next step to deploy Azure IoT Operations.
+ 1. Select **Select Azure Storage container**.
- ```azurecli
- az login
- ```
+ 1. Choose a storage account from the list of hierarchical namespace-enabled accounts, or select **Create** to create one.
-1. Copy the [az iot ops init](/cli/azure/iot/ops#az-iot-ops-init) command from the **Automation** tab in the Azure portal and run it in your terminal.
+ Schema registry requires an Azure Storage account with hierarchical namespace and public network access enabled. When creating a new storage account, choose a **General purpose v2** storage account type and set **Hierarchical namespace** to **Enabled**.
-
+ 1. Select a container in your storage account or select **Container** to create one.
-While the deployment is in progress, you can watch the resources being applied to your cluster.
+ 1. Select **Apply** to confirm the schema registry configurations.
-* If your terminal supports it, `init` displays the deployment progress.
+1. On the **Dependency management** tab, select the **Secure settings** deployment option.
- :::image type="content" source="./media/howto-deploy-iot-operations/view-deployment-terminal.png" alt-text="A screenshot that shows the progress of an Azure IoT Operations deployment in a terminal.":::
+ :::image type="content" source="./media/howto-deploy-iot-operations/deploy-dependency-management-1.png" alt-text="A screenshot that shows selecting secure settings on the third tab for deploying Azure IoT Operations from the portal.":::
- Once the **Deploy IoT Operations** phase begins, the text in the terminal becomes a link to view the deployment progress in the Azure portal.
+1. In the **Deployment options** section, provide the following information:
- :::image type="content" source="./media/howto-deploy-iot-operations/view-deployment-portal.png" alt-text="A screenshot that shows the progress of an Azure IoT Operations deployment in the Azure portal." lightbox="./media/howto-deploy-iot-operations/view-deployment-portal.png":::
+ | Parameter | Value |
+ | -- | -- |
+ | **Subscription** | Select the subscription that contains your Azure key vault. |
+ | **Azure Key Vault** | Select an Azure key vault or select **Create new**.<br><br>Ensure that your key vault has **Vault access policy** as its permission model. To check this setting, select **Manage selected vault** > **Settings** > **Access configuration**. |
+ | **User assigned managed identity for secrets** | Select an identity or select **Create new**. |
+ | **User assigned managed identity for AIO components** | Select an identity or select **Create new**. Don't use the same managed identity as the one you selected for secrets. |
-* Otherwise, or if you choose to disable the progress interface with `--no-progress` added to the `init` command, you can use kubectl commands to view the pods on your cluster:
+ :::image type="content" source="./media/howto-deploy-iot-operations/deploy-dependency-management-2.png" alt-text="A screenshot that shows configuring secure settings on the third tab for deploying Azure IoT Operations from the portal.":::
- ```bash
- kubectl get pods -n azure-iot-operations
- ```
+1. Select **Next: Automation**.
- It can take several minutes for the deployment to complete. Rerun the `get pods` command to refresh your view.
+1. One at a time, run each Azure CLI command on the **Automation** tab in a terminal:
-After the deployment is complete, use [az iot ops check](/cli/azure/iot/ops#az-iot-ops-check) to evaluate IoT Operations service deployment for health, configuration, and usability. The *check* command can help you find problems in your deployment and configuration.
+ 1. Sign in to Azure CLI interactively with a browser even if you already signed in before. If you don't sign in interactively, you might get an error that says *Your device is required to be managed to access your resource* when you continue to the next step to deploy Azure IoT Operations.
-```azurecli
-az iot ops check
-```
+ ```azurecli
+ az login
+ ```
-You can also check the configurations of topic maps, QoS, and message routes by adding the `--detail-level 2` parameter for a verbose view.
+ 1. If you didn't prepare your Azure CLI environment as described in the prerequisites, do so now in a terminal of your choice:
-## Manage Azure IoT Operations
+ ```azurecli
+ az upgrade
+ az extension add --upgrade --name azure-iot-ops
+ ```
-After deployment, you can use the Azure CLI and Azure portal to view and manage your Azure IoT Operations instance.
+ 1. If you chose to create a new schema registry on the previous tab, copy and run the `az iot ops schema registry create` command.
-### List instances
+ 1. Prepare your cluster for Azure IoT Operations deployment by deploying dependencies and foundational services, including schema registry. Copy and run the `az iot ops init` command.
-#### [Azure CLI](#tab/cli)
+ >[!TIP]
+ >The `init` command only needs to be run once per cluster. If you're reusing a cluster that already had Azure IoT Operations version 0.7.0 deployed on it, you can skip this step.
-Use the `az iot ops list` command to see all of the Azure IoT Operations instances in your subscription or resource group.
+ This command might take several minutes to complete. You can watch the progress in the deployment progress display in the terminal.
-The basic command returns all instances in your subscription.
+ 1. Deploy Azure IoT Operations to your cluster. Copy and run the `az iot ops create` command.
-```azurecli
-az iot ops list
-```
+ This command might take several minutes to complete. You can watch the progress in the deployment progress display in the terminal.
-To filter the results by resource group, add the `--resource-group` parameter.
+ 1. Enable secret sync on your Azure IoT Operations instance. Copy and run the `az iot ops secretsync enable` command. This command:
-```azurecli
-az iot ops list --resource-group <RESOURCE_GROUP>
-```
+ * Creates a federated identity credential using the user-assigned managed identity.
+ * Adds a role assignment to the user-assigned managed identity for access to the Azure Key Vault.
+ * Adds a minimum secret provider class associated with the Azure IoT Operations instance.
-#### [Azure portal](#tab/portal)
+ 1. Assign a user-assigned managed identity to your Azure IoT Operations instance. Copy and run the `az iot ops identity assign` command.
+
+ This command also creates a federated identity credential using the OIDC issuer of the indicated connected cluster and the Azure IoT Operations service account.
-1. In the [Azure portal](https://portal.azure.com), search for and select **Azure IoT Operations**.
-1. Use the filters to view Azure IoT Operations instances based on subscription, resource group, and more.
+1. Once all of the Azure CLI commands complete successfully, you can close the **Install Azure IoT Operations** wizard.
-
+### [Azure CLI](#tab/cli)
-### View instance
+1. Sign in to Azure CLI interactively with a browser even if you already signed in before.
-#### [Azure CLI](#tab/cli)
+ ```azurecli
+ az login
+ ```
-Use the `az iot ops show` command to view the properties of an instance.
+ If at any point you get an error that says *Your device is required to be managed to access your resource*, run `az login` again and make sure that you sign in interactively with a browser.
-```azurecli
-az iot ops show --name <INSTANCE_NAME> --resource-group <RESOURCE_GROUP>
-```
+### Create a storage account and schema registry
-You can also use the `az iot ops show` command to view the resources in your Azure IoT Operations deployment in the Azure CLI. Add the `--tree` flag to show a tree view of the deployment that includes the specified Azure IoT Operations instance.
+Azure IoT Operations requires a schema registry on your cluster. Schema registry requires an Azure storage account so that it can synchronize schema information between cloud and edge.
-```azurecli
-az iot ops show --name <INSTANCE_NAME> --resource-group <RESOURCE_GROUP> --tree
-```
+1. Create a storage account with hierarchical namespace enabled.
-The tree view of a deployment looks like the following example:
-
-```bash
-MyCluster
-Γö£ΓöÇΓöÇ extensions
-Γöé Γö£ΓöÇΓöÇ akvsecretsprovider
-Γöé Γö£ΓöÇΓöÇ azure-iot-operations-ltwgs
-Γöé ΓööΓöÇΓöÇ azure-iot-operations-platform-ltwgs
-ΓööΓöÇΓöÇ customLocations
- ΓööΓöÇΓöÇ MyCluster-cl
- Γö£ΓöÇΓöÇ resourceSyncRules
- ΓööΓöÇΓöÇ resources
- Γö£ΓöÇΓöÇ MyCluster-ops-init-instance
- ΓööΓöÇΓöÇ MyCluster-observability
-```
+ ```azurecli
+ az storage account create --name <NEW_STORAGE_ACCOUNT_NAME> --resource-group <RESOURCE_GROUP> --enable-hierarchical-namespace
+ ```
-You can run `az iot ops check` on your cluster to assess health and configurations of individual Azure IoT Operations components. By default, the command checks MQ but you can [specify the service](/cli/azure/iot/ops#az-iot-ops-check-examples) with `--ops-service` parameter.
+1. Create a schema registry that connects to your storage account.
-#### [Azure portal](#tab/portal)
+ ```azurecli
+ az iot ops schema registry create --name <NEW_SCHEMA_REGISTRY_NAME> --resource-group <RESOURCE_GROUP> --registry-namespace <NEW_SCHEMA_REGISTRY_NAMESPACE> --sa-resource-id $(az storage account show --name <STORAGE_ACCOUNT_NAME> --resource-group <RESOURCE_GROUP> -o tsv --query id)
+ ```
-You can view your Azure IoT Operations instance in the Azure portal.
+ >[!NOTE]
+ >This command requires that you have role assignment write permissions because it assigns a role to give schema registry access to the storage account. By default, the role is the built-in **Storage Blob Data Contributor** role, or you can create a custom role with restricted permissions to assign instead.
-1. In the [Azure portal](https://portal.azure.com), go to the resource group that contains your Azure IoT Operations instance, or search for and select **Azure IoT Operations**.
+ Use the optional parameters to customize your schema registry, including:
-1. Select the name of your Azure IoT Operations instance.
+ | Optional parameter | Value | Description |
+ | -- | -- | -- |
+ | `--custom-role-id` | Role definition ID | Provide a custom role ID to assign to the schema registry instead of the default **Storage Blob Data Contributor** role. At a minimum, the role needs blob read and write permissions. Format: `/subscriptions/<SUBSCRIPTION_ID>/providers/Microsoft.Authorization/roleDefinitions/<ROLE_ID>`. |
+ | `--sa-container` | string | Storage account container to store schemas. If this container doesn't exist, this command creates it. The default container name is **schemas**. |
-1. On the **Overview** page of your instance, the **Arc extensions** table displays the resources that were deployed to your cluster.
+1. Copy the resource ID from the output of the schema registry create command to use in the next section.
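If you're scripting these steps, here's a sketch that adds the optional container parameter from the table above and then captures the registry's resource ID for the next section. The `schema registry show` subcommand used to look up the ID is an assumption; adjust as needed.

```azurecli
# Create the registry in a specific container (the default container name is "schemas").
az iot ops schema registry create --name <NEW_SCHEMA_REGISTRY_NAME> --resource-group <RESOURCE_GROUP> \
    --registry-namespace <NEW_SCHEMA_REGISTRY_NAMESPACE> \
    --sa-resource-id $(az storage account show --name <STORAGE_ACCOUNT_NAME> --resource-group <RESOURCE_GROUP> -o tsv --query id) \
    --sa-container schemas

# Capture the registry resource ID to pass to `az iot ops init` (assumes a `show` subcommand).
SCHEMA_REGISTRY_RESOURCE_ID=$(az iot ops schema registry show --name <NEW_SCHEMA_REGISTRY_NAME> --resource-group <RESOURCE_GROUP> -o tsv --query id)
```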
- :::image type="content" source="../get-started-end-to-end-sample/media/quickstart-deploy/view-instance.png" alt-text="Screenshot that shows the Azure IoT Operations instance on your Arc-enabled cluster." lightbox="../get-started-end-to-end-sample/media/quickstart-deploy/view-instance.png":::
+### Deploy Azure IoT Operations
-
+1. Prepare your cluster with the dependencies that Azure IoT Operations requires by running [az iot ops init](/cli/azure/iot/ops#az-iot-ops-init).
-### Update instance tags and description
+ >[!TIP]
+ >The `init` command only needs to be run once per cluster. If you're reusing a cluster that already had Azure IoT Operations version 0.7.0 deployed on it, you can skip this step.
-#### [Azure CLI](#tab/cli)
+ ```azurecli
+ az iot ops init --cluster <CLUSTER_NAME> --resource-group <RESOURCE_GROUP> --sr-resource-id <SCHEMA_REGISTRY_RESOURCE_ID>
+ ```
-Use the `az iot ops update` command to edit the tags and description parameters of your Azure IoT Operations instance. The values provided in the `update` command replace any existing tags or description
+ This command might take several minutes to complete. You can watch the progress in the deployment progress display in the terminal.
-```azurecli
-az iot ops update --name <INSTANCE_NAME> --resource-group <RESOURCE_GROUP> --desc "<INSTANCE_DESCRIPTION>" --tags <TAG_NAME>=<TAG-VALUE> <TAG_NAME>=<TAG-VALUE>
-```
+ Use the [optional parameters](/cli/azure/iot/ops#az-iot-ops-init-optional-parameters) to customize your cluster, including:
-To delete all tags on an instance, set the tags parameter to a null value. For example:
+ | Optional parameter | Value | Description |
+ | -- | -- | -- |
+ | `--no-progress` | | Disable the deployment progress display in the terminal. |
+ | `--enable-fault-tolerance` | `false`, `true` | Enable fault tolerance for Azure Arc Container Storage. At least three cluster nodes are required. |
+ | `--ops-config` | `observability.metrics.openTelemetryCollectorAddress=<FULLNAMEOVERRIDE>.azure-iot-operations.svc.cluster.local:<GRPC_ENDPOINT>` | If you followed the optional prerequisites to prepare your cluster for observability, provide the OpenTelemetry (OTel) collector address you configured in the otel-collector-values.yaml file.<br><br>The sample values used in [Configure observability](../configure-observability-monitoring/howto-configure-observability.md) are **fullnameOverride=aio-otel-collector** and **grpc.endpoint=4317**. |
+ | `--ops-config` | `observability.metrics.exportInternalSeconds=<CHECK_INTERVAL>` | If you followed the optional prerequisites to prepare your cluster for observability, provide the **check_interval** value you configured in the otel-collector-values.yaml file.<br><br>The sample value used in [Configure observability](../configure-observability-monitoring/howto-configure-observability.md) is **check_interval=60**. |
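For illustration, here's a sketch of an `init` call that passes the sample observability values from the table above. The collector address and interval are the sample values from the observability article, not required names, and repeating `--ops-config` for each setting is an assumption about how the CLI accepts multiple values.

```azurecli
az iot ops init --cluster <CLUSTER_NAME> --resource-group <RESOURCE_GROUP> --sr-resource-id <SCHEMA_REGISTRY_RESOURCE_ID> \
    --ops-config observability.metrics.openTelemetryCollectorAddress=aio-otel-collector.azure-iot-operations.svc.cluster.local:4317 \
    --ops-config observability.metrics.exportInternalSeconds=60
```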
-```azurecli
-az iot ops update --name <INSTANCE_NAME> --resource-group --tags ""
-```
+1. Deploy Azure IoT Operations to your cluster:
-#### [Azure portal](#tab/portal)
+ ```azurecli
+ az iot ops create --name <NEW_INSTANCE_NAME> --cluster <CLUSTER_NAME> --resource-group <RESOURCE_GROUP>
+ ```
-1. In the [Azure portal](https://portal.azure.com), go to the resource group that contains your Azure IoT Operations instance, or search for and select **Azure IoT Operations**.
+ This command might take several minutes to complete. You can watch the progress in the deployment progress display in the terminal.
-1. Select the name of your Azure IoT Operations instance.
+ Use the optional parameters to customize your instance, including:
-1. On the **Overview** page of your instance, select **Add tags** or **edit** to modify tags on your instance.
+ | Optional parameter | Value | Description |
+ | -- | -- | -- |
+ | `--no-progress` | | Disable the deployment progress display in the terminal. |
+ | `--enable-rsync-rules` | | Enable the resource sync rules on the instance to project resources from the edge to the cloud. |
+ | `--add-insecure-listener` | | Add an insecure 1883 port config to the default listener. *Not for production use*. |
+ | `--custom-location` | String | Provide a name for the custom location created for your cluster. The default value is **location-{hash(5)}**. |
+ | `--broker-config-file` | Path to JSON file | Provide a configuration file for the MQTT broker. For more information, see [Advanced MQTT broker config](https://github.com/Azure/azure-iot-ops-cli-extension/wiki/Advanced-Mqtt-Broker-Config) and [Configure core MQTT broker settings](../manage-mqtt-broker/howto-configure-availability-scale.md). |
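As a sketch, a `create` call that combines several of these optional parameters; the custom location name is just a placeholder:

```azurecli
az iot ops create --name <NEW_INSTANCE_NAME> --cluster <CLUSTER_NAME> --resource-group <RESOURCE_GROUP> \
    --enable-rsync-rules \
    --custom-location <CUSTOM_LOCATION_NAME> \
    --add-insecure-listener   # test environments only, not for production use
```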
-
+Once the `create` command completes successfully, you have a working Azure IoT Operations instance running on your cluster. At this point, your instance is configured for most testing and evaluation scenarios. If you want to prepare your instance for production scenarios, continue to the next section to enable secure settings.
-## Uninstall Azure IoT Operations
+### Set up secret management and user-assigned managed identity (optional)
-The Azure CLI and Azure portal offer different options for uninstalling Azure IoT Operations.
+Secret management for Azure IoT Operations uses Azure Secret Store to sync the secrets from an Azure Key Vault and store them on the edge as Kubernetes secrets.
-If you want to delete an entire Azure IoT Operations deployment, use the Azure CLI.
+Azure Secret Store requires a user-assigned managed identity with access to the Azure Key Vault where secrets are stored. Dataflows also require a user-assigned managed identity to authenticate cloud connections.
-If you want to delete an Azure IoT Operations instance but keep the related resources in the deployment, use the Azure portal.
+1. Create an Azure Key Vault if you don't have one available. Use the [az keyvault create](/cli/azure/keyvault#az-keyvault-create) command.
-### [Azure CLI](#tab/cli)
+ ```azurecli
+ az keyvault create --resource-group "<RESOURCE_GROUP>" --location "<LOCATION>" --name "<KEYVAULT_NAME>" --enable-rbac-authorization
+ ```
-Use the [az iot ops delete](/cli/azure/iot/ops#az-iot-ops-delete) command to delete the entire Azure IoT Operations deployment from a cluster. The `delete` command evaluates the Azure IoT Operations related resources on the cluster and presents a tree view of the resources to be deleted. The cluster should be online when you run this command.
+1. Create a user-assigned managed identity that will be granted access to the Azure Key Vault.
-The `delete` command removes:
+ ```azurecli
+ az identity create --name "<USER_ASSIGNED_IDENTITY_NAME>" --resource-group "<RESOURCE_GROUP>" --location "<LOCATION>" --subscription "<SUBSCRIPTION>"
+ ```
-* The Azure IoT Operations instance
-* Arc extensions
-* Custom locations
-* Resource sync rules
-* Resources that you can configure in your Azure IoT Operations solution, like assets, MQTT broker, and dataflows.
+1. Configure the Azure IoT Operations instance for secret synchronization. This command:
-```azurecli
-az iot ops delete --cluster <CLUSTER_NAME> --resource-group <RESOURCE_GROUP>
-```
+ * Creates a federated identity credential using the user-assigned managed identity.
+ * Adds a role assignment to the user-assigned managed identity for access to the Azure Key Vault.
+ * Adds a minimum secret provider class associated with the Azure IoT Operations instance.
-### [Azure portal](#tab/portal)
+ ```azurecli
+ az iot ops secretsync enable --name <INSTANCE_NAME> --resource-group <RESOURCE_GROUP> --mi-user-assigned <USER_ASSIGNED_MI_RESOURCE_ID> --kv-resource-id <KEYVAULT_RESOURCE_ID>
+ ```
-1. In the [Azure portal](https://portal.azure.com), go to the resource group that contains your Azure IoT Operations instance, or search for and select **Azure IoT Operations**.
+1. Create a user-assigned managed identity that can be used for cloud connections. Don't use the same identity as the one used to set up secrets management.
-1. Select the name of your Azure IoT Operations instance.
+ ```azurecli
+ az identity create --name "<USER_ASSIGNED_IDENTITY_NAME>" --resource-group "<RESOURCE_GROUP>" --location "<LOCATION>" --subscription "<SUBSCRIPTION>"
+ ```
-1. On the **Overview** page of your instance, select **Delete** your instance.
+ You'll need to grant this identity permissions on whichever cloud resource it will be used for.
-1. Review the list of resources that are and aren't deleted as part of this operation, then type the name of your instance and select **Delete** to confirm.
+1. Run the following command to assign the identity to the Azure IoT Operations instance. This command also creates a federated identity credential using the OIDC issuer of the indicated connected cluster and the Azure IoT Operations service account.
- :::image type="content" source="./media/howto-deploy-iot-operations/delete-instance.png" alt-text="A screenshot that shows deleting an Azure IoT Operations instance in the Azure portal.":::
+ ```azurecli
+ az iot ops identity assign --name <INSTANCE_NAME> --resource-group <RESOURCE_GROUP> --mi-user-assigned <USER_ASSIGNED_MI_RESOURCE_ID>
+ ```
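Both `secretsync enable` and `identity assign` expect full Azure resource IDs. If you don't have them handy, here's a sketch of how to look them up with the same placeholders used above:

```azurecli
# Resolve the resource IDs used by `az iot ops secretsync enable` and `az iot ops identity assign`.
USER_ASSIGNED_MI_RESOURCE_ID=$(az identity show --name "<USER_ASSIGNED_IDENTITY_NAME>" --resource-group "<RESOURCE_GROUP>" --query id --output tsv)
KEYVAULT_RESOURCE_ID=$(az keyvault show --name "<KEYVAULT_NAME>" --resource-group "<RESOURCE_GROUP>" --query id --output tsv)
```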
-## Update Azure IoT Operations
+While the deployment is in progress, you can watch the resources being applied to your cluster. If your terminal supports it, the `init` and `create` commands display the deployment progress.
+<!--
+ :::image type="content" source="./media/howto-deploy-iot-operations/view-deployment-terminal.png" alt-text="A screenshot that shows the progress of an Azure IoT Operations deployment in a terminal.":::
-Currently, there's no support for updating an existing Azure IoT Operations deployment. Instead, uninstall and redeploy a new version of Azure IoT Operations.
+ Once the **Deploy IoT Operations** phase begins, the text in the terminal becomes a link to view the deployment progress in the Azure portal.
-1. Use the [az iot ops delete](/cli/azure/iot/ops#az-iot-ops-delete) command to delete the Azure IoT Operations deployment on your cluster.
+ :::image type="content" source="./media/howto-deploy-iot-operations/view-deployment-portal.png" alt-text="A screenshot that shows the progress of an Azure IoT Operations deployment in the Azure portal." lightbox="./media/howto-deploy-iot-operations/view-deployment-portal.png"::: -->
- ```azurecli
- az iot ops delete --cluster <CLUSTER_NAME> --resource-group <RESOURCE_GROUP>
- ```
+If your terminal doesn't display the progress interface, or if you disable it by adding `--no-progress` to the commands, you can use kubectl commands to view the pods on your cluster:
-1. Update the CLI extension to get the latest Azure IoT Operations version.
+ ```bash
+ kubectl get pods -n azure-iot-operations
+ ```
- ```azurecli
- az extension update --name azure-iot-ops
- ```
+ It can take several minutes for the deployment to complete. Rerun the `get pods` command to refresh your view.
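Alternatively, kubectl can watch for changes so that you don't need to rerun the command manually:

```bash
kubectl get pods -n azure-iot-operations --watch
```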
-1. Follow the steps in this article to deploy the newest version of Azure IoT Operations to your cluster.
+After the deployment is complete, use [az iot ops check](/cli/azure/iot/ops#az-iot-ops-check) to evaluate IoT Operations service deployment for health, configuration, and usability. The *check* command can help you find problems in your deployment and configuration.
- >[!TIP]
- >Add the `--ensure-latest` flag to the `az iot ops init` command to check that the latest Azure IoT Operations CLI version is installed and raise an error if an upgrade is available.
+```azurecli
+az iot ops check
+```
+
+You can also check the configurations of topic maps, QoS, and message routes by adding the `--detail-level 2` parameter for a verbose view.
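For example, a verbose check looks like this:

```azurecli
az iot ops check --detail-level 2
```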
## Next steps
iot-operations Howto Enable Secure Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/deploy-iot-ops/howto-enable-secure-settings.md
+
+ Title: Enable secure settings
+description: Enable secure settings on your Azure IoT Operations Preview deployment by configuring an Azure Key Vault and enabling workload identities.
+++ Last updated : 09/24/2024+
+#CustomerIntent: I deployed Azure IoT Operations with test settings for the quickstart scenario, now I want to enable secure settings to use the full feature set.
++
+# Enable secure settings in Azure IoT Operations Preview deployment
++
+The secure settings for Azure IoT Operations include setting up Secrets Management and a user-assigned managed identity for cloud connections, for example, to an OPC UA server or to dataflow endpoints.
+
+This article provides instructions for enabling secure settings if you didn't do so during your initial deployment.
+
+## Prerequisites
+
+* An Azure IoT Operations instance deployed with test settings, for example, by following the instructions in [Quickstart: Run Azure IoT Operations in Codespaces](../get-started-end-to-end-sample/quickstart-deploy.md).
+
+* Azure CLI installed on your development machine. This scenario requires Azure CLI version 2.64.0 or higher. Use `az --version` to check your version and `az upgrade` to update if necessary. For more information, see [How to install the Azure CLI](/cli/azure/install-azure-cli).
+
+* The Azure IoT Operations extension for Azure CLI. Use the following command to add the extension or update it to the latest version:
+
+ ```azurecli
+ az extension add --upgrade --name azure-iot-ops
+ ```
+
+## Configure cluster for workload identity
+
+A workload identity is an identity you assign to a software workload (such as an application, service, script, or container) to authenticate and access other services and resources. The workload identity feature needs to be enabled on your cluster, so that the [Azure Key Vault Secret Store extension for Kubernetes](/azure/azure-arc/kubernetes/secret-store-extension) and Azure IoT Operations can access Microsoft Entra ID protected resources. To learn more, see [What are workload identities?](/entra/workload-id/workload-identities-overview).
+
+> [!NOTE]
+> This step only applies to Ubuntu + K3s clusters. The quickstart script for Azure Kubernetes Service (AKS) Edge Essentials used in [Prepare your Azure Arc-enabled Kubernetes cluster](./howto-prepare-cluster.md) enables workload identity by default. If you have an AKS Edge Essentials cluster, continue to the next section.
+
+If you aren't sure whether your K3s cluster already has workload identity enabled, run the [az connectedk8s show](/cli/azure/connectedk8s#az-connectedk8s-show) command to check:
+
+```azurecli
+az connectedk8s show --name <CLUSTER_NAME> --resource-group <RESOURCE_GROUP> --query "{oidcIssuerEnabled:oidcIssuerProfile.enabled, workloadIdentityEnabled: securityProfile.workloadIdentity.enabled}"
+```
+> [!NOTE]
+> You can skip this section if workload identity is already set up.
+
+Use the following steps to enable workload identity on an existing connected K3s cluster:
+
+1. Remove the existing `connectedk8s` Azure CLI extension, if one is installed:
+ ```azurecli
+ az extension remove --name connectedk8s
+ ```
+
+1. Download and install a preview version of the `connectedk8s` extension for Azure CLI.
+
+ ```azurecli
+ curl -L -o connectedk8s-1.10.0-py2.py3-none-any.whl https://github.com/AzureArcForKubernetes/azure-cli-extensions/raw/refs/heads/connectedk8s/public/cli-extensions/connectedk8s-1.10.0-py2.py3-none-any.whl
+ az extension add --upgrade --source connectedk8s-1.10.0-py2.py3-none-any.whl
+ ```
+
+1. Use the [az connectedk8s update](/cli/azure/connectedk8s#az-connectedk8s-update) command to enable the workload identity feature on the cluster.
+
+ ```azurecli
+ #!/bin/bash
+
+ # Variable block
+ RESOURCE_GROUP="<RESOURCE_GROUP>"
+ CLUSTER_NAME="<CLUSTER_NAME>"
+
+ # Enable workload identity
+ az connectedk8s update --resource-group $RESOURCE_GROUP --name $CLUSTER_NAME --enable-oidc-issuer --enable-workload-identity
+ ```
+
+1. Use the [az connectedk8s show](/cli/azure/connectedk8s#az-connectedk8s-show) command to get the cluster's issuer URL. Take note of it; you add it to the K3s config file later.
+
+ ```azurecli
+ #!/bin/bash
+
+ # Variable block
+ RESOURCE_GROUP="<RESOURCE_GROUP>"
+ CLUSTER_NAME="<CLUSTER_NAME>"
+
+ # Get the cluster's issuer url
+ SERVICE_ACCOUNT_ISSUER=$(az connectedk8s show --resource-group $RESOURCE_GROUP --name $CLUSTER_NAME --query oidcIssuerProfile.issuerUrl --output tsv)
+ echo "SERVICE_ACCOUNT_ISSUER = $SERVICE_ACCOUNT_ISSUER"
+ ```
+
+1. Create a K3s config file.
+
+ ```bash
+ sudo nano /etc/rancher/k3s/config.yaml
+ ```
+
+1. Add the following content to the config.yaml file:
+
+ ```yml
+ kube-apiserver-arg:
+ - service-account-issuer=<SERVICE_ACCOUNT_ISSUER>
+ - service-account-max-token-expiration=24h
+ ```
+
+1. Save and exit the file editor.
+
+1. Restart k3s.
+
+ ```bash
+ systemctl restart k3s
+ ```
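As an alternative to editing the file by hand in the preceding steps, here's a scripted sketch that writes the same config and restarts K3s. It assumes `SERVICE_ACCOUNT_ISSUER` is still set from the earlier step and that config.yaml doesn't already contain other settings you need to keep.

```bash
# Sketch: write the K3s API server config and restart the service.
# Assumes SERVICE_ACCOUNT_ISSUER is set and /etc/rancher/k3s/config.yaml can be overwritten.
sudo tee /etc/rancher/k3s/config.yaml > /dev/null <<EOF
kube-apiserver-arg:
- service-account-issuer=$SERVICE_ACCOUNT_ISSUER
- service-account-max-token-expiration=24h
EOF

sudo systemctl restart k3s
```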
+
+## Set up Secrets Management
+
+Secrets Management for Azure IoT Operations uses the Secret Store extension to sync secrets from an Azure Key Vault and store them on the edge as Kubernetes secrets.
+
+The Secret Store extension requires a user-assigned managed identity with access to the Azure Key Vault where secrets are stored. To learn more, see [What are managed identities for Azure resources?](/entra/identity/managed-identities-azure-resources/overview).
+
+### Create an Azure Key Vault
+
+If you already have an Azure Key Vault with `Key Vault Secrets Officer` permissions, you can skip this section.
+
+1. Use the [az keyvault create](/cli/azure/keyvault#az-keyvault-create) command to create an Azure Key Vault.
+
+ # [Bash](#tab/bash)
+
+ ```azurecli
+ # Variable block
+ KEYVAULT_NAME="<KEYVAULT_NAME>"
+ RESOURCE_GROUP="<RESOURCE_GROUP>"
+ LOCATION="<LOCATION>"
+
+ # Create the Key Vault
+ az keyvault create --name $KEYVAULT_NAME --resource-group $RESOURCE_GROUP --location $LOCATION --enable-rbac-authorization
+ ```
+
+ # [PowerShell](#tab/powershell)
+
+ ```azurecli
+ # Variable block
+ $KEYVAULT_NAME="<KEYVAULT_NAME>"
+ $RESOURCE_GROUP="<RESOURCE_GROUP>"
+ $LOCATION="<LOCATION>"
+
+ # Create the Key Vault
+ az keyvault create --name $KEYVAULT_NAME `
+ --resource-group $RESOURCE_GROUP `
+ --location $LOCATION `
+ --enable-rbac-authorization
+ ```
+
+
+
+1. Use the [az role assignment create](/cli/azure/role/assignment#az-role-assignment-create) command to give the currently logged-in user `Key Vault Secrets Officer` permissions to the key vault.
+
+ # [Bash](#tab/bash)
+
+ ```azurecli
+ # Variable block
+ SUBSCRIPTION_ID="<SUBSCRIPTION_ID>"
+ RESOURCE_GROUP="<RESOURCE_GROUP>"
+ KEYVAULT_NAME="<KEYVAULT_NAME>"
+
+ # Get the object ID of the currently logged-in user
+ ASSIGNEE_ID=$(az ad signed-in-user show --query id -o tsv)
+
+ # Assign the "Key Vault Secrets Officer" role
+ az role assignment create --role "Key Vault Secrets Officer" --assignee $ASSIGNEE_ID --scope /subscriptions/$SUBSCRIPTION_ID/resourcegroups/$RESOURCE_GROUP/providers/Microsoft.KeyVault/vaults/$KEYVAULT_NAME
+ ```
+
+ # [PowerShell](#tab/powershell)
+
+ ```azurecli
+ # Variable block
+ $SUBSCRIPTION_ID="<SUBSCRIPTION_ID>"
+ $RESOURCE_GROUP="<RESOURCE_GROUP>"
+ $KEYVAULT_NAME="<KEYVAULT_NAME>"
+
+ # Get the object ID of the currently logged-in user
+ $ASSIGNEE_ID=$(az ad signed-in-user show --query id -o tsv)
+
+ # Assign the "Key Vault Secrets Officer" role
+ az role assignment create --role "Key Vault Secrets Officer" `
+ --assignee $ASSIGNEE_ID `
+ --scope /subscriptions/$SUBSCRIPTION_ID/resourcegroups/$RESOURCE_GROUP/providers/Microsoft.KeyVault/vaults/$KEYVAULT_NAME
+ ```
+
+
+
+### Create a user-assigned managed identity for the Secret Store extension
+
+Use the [az identity create](/cli/azure/identity#az-identity-create) command to create the user-assigned managed identity.
+
+# [Bash](#tab/bash)
+
+```azurecli
+# Variable block
+USER_ASSIGNED_MI_NAME="<USER_ASSIGNED_MI_NAME>"
+RESOURCE_GROUP="<RESOURCE_GROUP>"
+LOCATION="<LOCATION>"
+
+# Create the identity
+az identity create --name $USER_ASSIGNED_MI_NAME --resource-group $RESOURCE_GROUP --location $LOCATION
+```
+
+# [PowerShell](#tab/powershell)
+
+```azurecli
+# Variable block
+$USER_ASSIGNED_MI_NAME="<USER_ASSIGNED_MI_NAME>"
+$RESOURCE_GROUP="<RESOURCE_GROUP>"
+$LOCATION="<LOCATION>"
+
+# Create the identity
+az identity create --name $USER_ASSIGNED_MI_NAME `
+ --resource-group $RESOURCE_GROUP `
+ --location $LOCATION
+```
+++
+### Enable secret synchronization
+
+Use the [az iot ops secretsync enable](/cli/azure/iot/ops) command to set up the Azure IoT Operations instance for secret synchronization. This command:
+
+* Creates a federated identity credential using the user-assigned managed identity.
+* Adds a role assignment to the user-assigned managed identity for access to the Azure Key Vault.
+* Adds a minimum secret provider class associated with the Azure IoT Operations instance.
+
+# [Bash](#tab/bash)
+
+```azurecli
+# Variable block
+INSTANCE_NAME="<INSTANCE_NAME>"
+RESOURCE_GROUP="<RESOURCE_GROUP>"
+USER_ASSIGNED_MI_NAME="<USER_ASSIGNED_MI_NAME>"
+KEYVAULT_NAME="<KEYVAULT_NAME>"
+
+#Get the resource ID of the user-assigned managed identity
+USER_ASSIGNED_MI_RESOURCE_ID=$(az identity show --name $USER_ASSIGNED_MI_NAME --resource-group $RESOURCE_GROUP --query id --output tsv)
+
+#Get the resource ID of the key vault
+KEYVAULT_RESOURCE_ID=$(az keyvault show --name $KEYVAULT_NAME --resource-group $RESOURCE_GROUP --query id --output tsv)
+
+#Enable secret synchronization
+az iot ops secretsync enable --name $INSTANCE_NAME --resource-group $RESOURCE_GROUP --mi-user-assigned $USER_ASSIGNED_MI_RESOURCE_ID --kv-resource-id $KEYVAULT_RESOURCE_ID
+```
+
+# [PowerShell](#tab/powershell)
+
+```azurecli
+# Variable block
+$INSTANCE_NAME="<INSTANCE_NAME>"
+$RESOURCE_GROUP="<RESOURCE_GROUP>"
+$USER_ASSIGNED_MI_NAME="<USER_ASSIGNED_MI_NAME>"
+$KEYVAULT_NAME="<KEYVAULT_NAME>"
+
+# Get the resource ID of the user-assigned managed identity
+$USER_ASSIGNED_MI_RESOURCE_ID=$(az identity show --name $USER_ASSIGNED_MI_NAME --resource-group $RESOURCE_GROUP --query id --output tsv)
+
+# Get the resource ID of the key vault
+$KEYVAULT_RESOURCE_ID=$(az keyvault show --name $KEYVAULT_NAME --resource-group $RESOURCE_GROUP --query id --output tsv)
+
+# Enable secret synchronization
+az iot ops secretsync enable --name $INSTANCE_NAME `
+ --resource-group $RESOURCE_GROUP `
+ --mi-user-assigned $USER_ASSIGNED_MI_RESOURCE_ID `
+ --kv-resource-id $KEYVAULT_RESOURCE_ID
+```
+++
+Now that secret synchronization setup is complete, you can refer to [Manage Secrets](./howto-manage-secrets.md) to learn how to use secrets with Azure IoT Operations.
+
+## Set up user-assigned managed identity for cloud connections
+
+Some Azure IoT Operations components, like dataflow endpoints, use a user-assigned managed identity for cloud connections. We recommend using a separate identity from the one used to set up Secrets Management.
+
+1. Create a user-assigned managed identity that can be used for cloud connections. Use the [az identity create](/cli/azure/identity#az-identity-create) command to create the user-assigned managed identity.
+
+ # [Bash](#tab/bash)
+
+ ```azurecli
+ # Variable block
+ USER_ASSIGNED_MI_NAME="<USER_ASSIGNED_MI_NAME FOR CLOUD CONNECTIONS>"
+ RESOURCE_GROUP="<RESOURCE_GROUP>"
+ LOCATION="<LOCATION>"
+
+ # Create the identity
+ az identity create --name $USER_ASSIGNED_MI_NAME --resource-group $RESOURCE_GROUP --location $LOCATION
+ ```
+
+ # [PowerShell](#tab/powershell)
+
+ ```azurecli
+ # Variable block
+ $USER_ASSIGNED_MI_NAME="<USER_ASSIGNED_MI_NAME FOR CLOUD CONNECTIONS>"
+ $RESOURCE_GROUP="<RESOURCE_GROUP>"
+ $LOCATION="<LOCATION>"
+
+ # Create the identity
+ az identity create --name $USER_ASSIGNED_MI_NAME `
+ --resource-group $RESOURCE_GROUP `
+ --location $LOCATION
+ ```
+
+
+
+ > [!NOTE]
+ > You need to grant this identity permissions on whichever cloud resource it will be used for.
+
+1. Use the [az iot ops identity assign](/cli/azure/iot/ops) command to assign the identity to the Azure IoT Operations instance. This command also creates a federated identity credential using the OIDC issuer of the indicated connected cluster and the Azure IoT Operations service account.
+
+ # [Bash](#tab/bash)
+
+ ```azurecli
+ # Variable block
+ INSTANCE_NAME="<INSTANCE_NAME>"
+ RESOURCE_GROUP="<RESOURCE_GROUP>"
+ USER_ASSIGNED_MI_NAME="<USER_ASSIGNED_MI_NAME FOR CLOUD CONNECTIONS>"
+
+ #Get the resource ID of the user-assigned managed identity
+ USER_ASSIGNED_MI_RESOURCE_ID=$(az identity show --name $USER_ASSIGNED_MI_NAME --resource-group $RESOURCE_GROUP --query id --output tsv)
+
+ #Assign the identity to the Azure IoT Operations instance
+ az iot ops identity assign --name $INSTANCE_NAME --resource-group $RESOURCE_GROUP --mi-user-assigned $USER_ASSIGNED_MI_RESOURCE_ID
+ ```
+
+ # [PowerShell](#tab/powershell)
+
+ ```azurecli
+ # Variable block
+ $INSTANCE_NAME="<INSTANCE_NAME>"
+ $RESOURCE_GROUP="<RESOURCE_GROUP>"
+ $USER_ASSIGNED_MI_NAME="<USER_ASSIGNED_MI_NAME FOR CLOUD CONNECTIONS>"
+
+ # Get the resource ID of the user-assigned managed identity
+ $USER_ASSIGNED_MI_RESOURCE_ID=$(az identity show --name $USER_ASSIGNED_MI_NAME --resource-group $RESOURCE_GROUP --query id --output tsv)
+
+
+ #Assign the identity to the Azure IoT Operations instance
+ az iot ops identity assign --name $INSTANCE_NAME `
+ --resource-group $RESOURCE_GROUP `
+ --mi-user-assigned $USER_ASSIGNED_MI_RESOURCE_ID
+ ```
+
+
+
+Now, you can use this managed identity in dataflow endpoints for cloud connections.
iot-operations Howto Manage Secrets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/deploy-iot-ops/howto-manage-secrets.md
Title: Manage secrets
-description: Create, update, and manage secrets that are required to give your Arc-connected cluster access to Azure resources.
---
+description: Create, update, and manage secrets that are required to give your Arc-enabled Kubernetes cluster access to Azure resources.
++ Previously updated : 03/21/2024- Last updated : 09/24/2024
-#CustomerIntent: As an IT professional, I want prepare an Azure-Arc enabled Kubernetes cluster with Key Vault secrets so that I can deploy Azure IoT Operations to it.
+#CustomerIntent: As an IT professional, I want to manage secrets in Azure IoT Operations by using Key Vault and Azure Secret Store to sync secrets down from the cloud and store them on the edge as Kubernetes secrets.
# Manage secrets for your Azure IoT Operations Preview deployment [!INCLUDE [public-preview-note](../includes/public-preview-note.md)]
-Secrets management in Azure IoT Operations Preview uses Azure Key Vault as the managed vault solution on the cloud and uses the secrets store CSI driver to pull secrets down from the cloud and store them on the edge.
+Azure IoT Operations uses Azure Key Vault as the managed vault solution in the cloud, and uses the [Azure Key Vault Secret Store extension for Kubernetes](/azure/azure-arc/kubernetes/secret-store-extension) to sync secrets down from the cloud and store them on the edge as Kubernetes secrets.
## Prerequisites
-* An Arc-enabled Kubernetes cluster. For more information, see [Prepare your cluster](./howto-prepare-cluster.md).
+* An Azure IoT Operations instance deployed with secure settings. If you deployed Azure IoT Operations with test settings and now want to use secrets, you need to first [enable secure settings](./howto-enable-secure-settings.md).
-## Configure a secret store on your cluster
+* Creating secrets in the key vault requires **Key Vault Secrets Officer** permissions at the resource level. For information about assigning roles to users, see [Steps to assign an Azure role](../../role-based-access-control/role-assignments-steps.md).
-Azure IoT Operations supports Key Vault for storing secrets and certificates. The `az iot ops init` Azure CLI command automates the steps to set up a service principal to give access to the key vault and configure the secrets that you need for running Azure IoT Operations.
+## Add and use secrets
-For more information, see [Deploy Azure IoT Operations Preview to an Arc-enabled Kubernetes cluster](howto-deploy-iot-operations.md?tabs=cli).
+Secrets management for Azure IoT Operations uses the Secret Store extension to sync secrets from an Azure Key Vault and store them on the edge as Kubernetes secrets. When you enabled secure settings during deployment, you selected an Azure Key Vault for secret management. All secrets used within Azure IoT Operations are stored in that key vault.
-## Configure service principal and Key Vault manually
+> [!NOTE]
+> Azure IoT Operations instances work with only one Azure Key Vault; multiple key vaults per instance aren't supported.
-If the Azure account executing the `az iot ops init` command doesn't have permissions to query the Microsoft Graph and create service principals, you can prepare these upfront and use extra arguments when running the CLI command as described in [Deploy Azure IoT Operations Preview to an Arc-enabled Kubernetes cluster](howto-deploy-iot-operations.md?tabs=cli).
+Once the secrets management setup steps are complete, you can start adding secrets to Azure Key Vault and syncing them to the edge for use in **Asset Endpoints** or **Dataflow Endpoints** through the [operations experience](https://iotoperations.azure.com) web UI.
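If you prefer to create the secret in the key vault from the command line before adding it in the operations experience, here's a sketch using the standard Key Vault CLI (the vault, secret name, and value are placeholders):

```azurecli
az keyvault secret set --vault-name "<KEYVAULT_NAME>" --name "<SECRET_NAME>" --value "<SECRET_VALUE>"
```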
-### Configure service principal for interacting with Key Vault via Microsoft Entra ID
+Secrets are used in asset endpoints and dataflow endpoints for authentication. In this section, we use asset endpoints as an example; the same approach applies to dataflow endpoints. You can either create the secret directly in Azure Key Vault and have it automatically synchronized down to the edge, or use an existing secret reference from the key vault:
-Follow these steps to create a new Application Registration for the Azure IoT Operations application to use to authenticate to Key Vault.
-First, register an application with Microsoft Entra ID:
+- **Create a new secret**: creates a secret reference in Azure Key Vault and automatically synchronizes the secret down to the edge by using the Secret Store extension. Use this option if you didn't create the secret you need in the key vault beforehand.
-1. In the Azure portal search bar, search for and select **Microsoft Entra ID**.
+- **Add from Azure Key Vault**: synchronizes an existing secret in the key vault down to the edge if it wasn't synchronized before. Selecting this option shows the list of secret references in the selected key vault. Use this option if you created the secret in the key vault beforehand.
-1. Select **App registrations** from the **Manage** section of the Microsoft Entra ID menu.
+When you add the username and password references to asset endpoints or dataflow endpoints, you need to give the synchronized secret a name. The secret references are saved on the edge under this name as a single resource. In the example in the following screenshot, the username and password references are saved to the edge as *edp1secrets*.
-1. Select **New registration**.
-1. On the **Register an application** page, provide the following information:
+## Manage synced secrets
- | Field | Value |
- | -- | -- |
- | **Name** | Provide a name for your application. |
- | **Supported account types** | Ensure that **Accounts in this organizational directory only (<YOUR_TENANT_NAME> only - Single tenant)** is selected. |
- | **Redirect URI** | Select **Web** as the platform. You can leave the web address empty. |
+You can use **Manage secrets** for asset endpoints and dataflow endpoints to manage synchronized secrets. **Manage secrets** shows the list of all secrets currently synchronized to the edge for the resource you're viewing. A synced secret represents one or more secret references, depending on the resource using it. Any operation applied to a synced secret is applied to all secret references contained within it.
-1. Select **Register**.
+You can also delete synced secrets in **Manage secrets**. Deleting a synced secret removes it from the edge only; it doesn't delete the contained secret references from the key vault.
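To confirm what's actually on the cluster, here's a hedged sketch; the namespace and resource kinds are assumptions about where the Secret Store extension places synced items:

```bash
# Sketch: inspect synced secret resources on the edge (namespace is an assumption).
kubectl get secretproviderclasses -n azure-iot-operations
kubectl get secrets -n azure-iot-operations
```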
- When your application is created, you're directed to its resource page.
-
-1. Copy the **Application (client) ID** from the app registration overview page. You'll use this value as an argument when running Azure IoT Operations deployment with the `az iot ops init` command.
-
-Next, give your application permissions for Key Vault:
-
-1. On the resource page for your app, select **API permissions** from the **Manage** section of the app menu.
-
-1. Select **Add a permission**.
-
-1. On the **Request API permissions** page, scroll down and select **Azure Key Vault**.
-
-1. Select **Delegated permissions**.
-
-1. Check the box to select **user_impersonation** permissions.
-
-1. Select **Add permissions**.
-
-Create a client secret that is added to your Kubernetes cluster to authenticate to your key vault:
-
-1. On the resource page for your app, select **Certificates & secrets** from the **Manage** section of the app menu.
-
-1. Select **New client secret**.
-
-1. Provide an optional description for the secret, then select **Add**.
-
-1. Copy the **Value** from your new secret. You'll use this value later when you run `az iot ops init`.
-
-Retrieve the service principal Object ID:
-
-1. On the **Overview** page for your app, under the **Essentials** section, select the **Application name** link under **Managed application in local directory**. This opens the Enterprise Application properties. Copy the Object ID to use when you run `az iot ops init`.
-
-### Create a key vault
-
-Create a new Azure Key Vault instance and ensure that it has the **Permission Model** set to **Vault access policy**.
-
-```bash
-az keyvault create --enable-rbac-authorization false --name "<your unique key vault name>" --resource-group "<the name of the resource group>"
-```
-If you have an existing key vault, you can change the permission model by executing the following:
-
-```bash
-az keyvault update --name "<your unique key vault name>" --resource-group "<the name of the resource group>" --enable-rbac-authorization false
-```
-You'll need the Key Vault resource ID when you run `az iot ops init`. To retrieve the resource ID, run:
-
-```bash
-az keyvault show --name "<your unique key vault name>" --resource-group "<the name of the resource group>" --query id -o tsv
-```
-
-### Set service principal access policy in Key Vault
-
-The newly created service principal needs **secret** `list` and `get` access policy for the Azure IoT Operations to work with the secret store.
-
-To manage Key Vault access policies, the principal logged in to the CLI needs sufficient Azure permissions. In the Role Based Access Control (RBAC) model, this permission is included in Key Vault contributor or higher roles.
-
->[!TIP]
->If you used the logged-in CLI principal to create the key vault, then you probably already have the right permissions. However, if you're pointing to a different or existing key vault then you should check that you have sufficient permissions to set access policies.
-
-Run the following to assign **secret** `list` and `get` permissions to the service principal.
-
-```bash
-az keyvault set-policy --name "<your unique key vault name>" --resource-group "<the name of the resource group>" --object-id <Object ID copied from Enterprise Application SP in Microsoft Entra ID> --secret-permissions get list
-```
-
-### Pass service principal and Key Vault arguments to Azure IoT Operations deployment
-
-When following the guide [Deploy Azure IoT Operations Preview to an Arc-enabled Kubernetes cluster](howto-deploy-iot-operations.md?tabs=cli), pass in additional flags to the `az iot ops init` command in order to use the preconfigured service principal and key vault.
-
-The following example shows how to prepare the cluster for Azure IoT Operations without fully deploying it by using `--no-deploy` flag. You can also run the command without this argument for a default Azure IoT Operations deployment.
-
-```bash
-az iot ops init --cluster "<your cluster name>" --resource-group "<the name of the resource group>" \
- --kv-id <Key Vault Resource ID> \
- --sp-app-id <Application registration App ID (client ID) from Microsoft Entra ID> \
- --sp-object-id <Object ID copied from Enterprise Application in Microsoft Entra ID> \
- --sp-secret "<Client Secret from App registration in Microsoft Entra ID>" \
- --no-deploy
-```
-
-One step that the `init` command takes is to ensure all Secret Provider Classes (SPCs) required by Azure IoT Operations have a default secret configured in key vault. If a value for the default secret does not exist `init` will create one. This step requires that the principal logged in to the CLI has secret `set` permissions. If you want to use an existing secret as the default SPC secret, you can specify it with the `--kv-sat-secret-name` parameter, in which case the logged in principal only needs secret `get` permissions.
-
-## Add a secret to an Azure IoT Operations component
-
-Once you have the secret store set up on your cluster, you can create and add Key Vault secrets.
-
-1. Create your secret in Key Vault with whatever name and value you need. You can create a secret by using the [Azure portal](https://portal.azure.com) or the [az keyvault secret set](/cli/azure/keyvault/secret#az-keyvault-secret-set) command.
-
-1. On your cluster, identify the secret provider class (SPC) for the component that you want to add the secret to. For example, `aio-default-spc`. Use the following command to list all SPCs on your cluster:
-
- ```bash
- kubectl get secretproviderclasses -A
- ```
-
-1. Open the file in your preferred text editor. If you use k9s, type `e` to edit.
-
-1. Add the secret object to the list under `spec.parameters.objects.array`. For example:
-
- ```yml
- spec:
- parameters:
- keyvaultName: my-key-vault
- objects: |
- array:
- - |
- objectName: PlaceholderSecret
- objectType: secret
- objectVersion: ""
- ```
-
-1. Save your changes and apply them to your cluster. If you use k9s, your changes are automatically applied.
-
-The CSI driver updates secrets by using a polling interval, therefore the new secret isn't available to the pod until the next polling interval. To update a component immediately, restart the pods for the component. For example, to restart the Data Processor component, run the following commands:
-
-```console
-kubectl delete pod aio-dp-reader-worker-0 -n azure-iot-operations
-kubectl delete pod aio-dp-runner-worker-0 -n azure-iot-operations
-```
+> [!NOTE]
+> Before deleting a synced secret, make sure that all references to the secret from Azure IoT Operations components are removed.
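+
+As an optional, generic check before you delete a synced secret, you can list the Kubernetes secrets that currently exist on the cluster. This is a minimal sketch using standard kubectl commands, assuming the default Azure IoT Operations namespace:
+
+```bash
+# List the secrets currently synced to the edge in the Azure IoT Operations namespace
+kubectl get secrets -n azure-iot-operations
+```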
iot-operations Howto Manage Update Uninstall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/deploy-iot-ops/howto-manage-update-uninstall.md
+
+ Title: Manage, update, or uninstall
+description: Use the Azure CLI or Azure portal to manage your Azure IoT Operations instances, including updating and uninstalling.
++++ Last updated : 09/23/2024+
+#CustomerIntent: As an OT professional, I want to manage Azure IoT Operations instances.
++
+# Manage the lifecycle of an Azure IoT Operations instance
++
+Use the Azure CLI and Azure portal to manage, uninstall, or update Azure IoT Operations instances.
+
+## Prerequisites
+
+* An Azure IoT Operations instance deployed to a cluster. For more information, see [Deploy Azure IoT Operations](./howto-deploy-iot-operations.md).
+
+* Azure CLI installed on your development machine. This scenario requires Azure CLI version 2.64.0 or higher. Use `az --version` to check your version and `az upgrade` to update if necessary. For more information, see [How to install the Azure CLI](/cli/azure/install-azure-cli).
+
+* The Azure IoT Operations extension for Azure CLI. Use the following command to add the extension or update it to the latest version:
+
+ ```azurecli
+ az extension add --upgrade --name azure-iot-ops
+ ```
+
+## Manage
+
+After deployment, you can use the Azure CLI and Azure portal to view and manage your Azure IoT Operations instance.
+
+### List instances
+
+#### [Azure CLI](#tab/cli)
+
+Use the `az iot ops list` command to see all of the Azure IoT Operations instances in your subscription or resource group.
+
+The basic command returns all instances in your subscription.
+
+```azurecli
+az iot ops list
+```
+
+To filter the results by resource group, add the `--resource-group` parameter.
+
+```azurecli
+az iot ops list --resource-group <RESOURCE_GROUP>
+```
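+
+For easier scanning, you can combine the command with the Azure CLI's standard output formatting options, which aren't specific to this extension:
+
+```azurecli
+# Render the list of instances as a table instead of JSON
+az iot ops list --resource-group <RESOURCE_GROUP> -o table
+```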
+
+#### [Azure portal](#tab/portal)
+
+1. In the [Azure portal](https://portal.azure.com), search for and select **Azure IoT Operations**.
+1. Use the filters to view Azure IoT Operations instances based on subscription, resource group, and more.
+++
+### View instance
+
+#### [Azure CLI](#tab/cli)
+
+Use the `az iot ops show` command to view the properties of an instance.
+
+```azurecli
+az iot ops show --name <INSTANCE_NAME> --resource-group <RESOURCE_GROUP>
+```
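+
+If you only need a few fields, you can add the Azure CLI's generic `--query` option. The property path in this sketch is an assumption about the shape of the returned payload; adjust it to match the actual output:
+
+```azurecli
+# Project selected fields; properties.provisioningState is assumed to exist in the response
+az iot ops show --name <INSTANCE_NAME> --resource-group <RESOURCE_GROUP> --query "{name:name, location:location, state:properties.provisioningState}" -o table
+```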
+
+You can also use the `az iot ops show` command to view the resources in your Azure IoT Operations deployment in the Azure CLI. Add the `--tree` flag to show a tree view of the deployment that includes the specified Azure IoT Operations instance.
+
+```azurecli
+az iot ops show --name <INSTANCE_NAME> --resource-group <RESOURCE_GROUP> --tree
+```
+
+The tree view of a deployment looks like the following example:
+
+```bash
+MyCluster
+├── extensions
+│   ├── akvsecretsprovider
+│   ├── azure-iot-operations-ltwgs
+│   └── azure-iot-operations-platform-ltwgs
+└── customLocations
+    └── MyCluster-cl
+        ├── resourceSyncRules
+        └── resources
+            ├── MyCluster-ops-init-instance
+            └── MyCluster-observability
+```
+
+You can run `az iot ops check` on your cluster to assess the health and configuration of individual Azure IoT Operations components. By default, the command checks MQ, but you can [specify the service](/cli/azure/iot/ops#az-iot-ops-check-examples) with the `--ops-service` parameter.
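+
+For example, the following commands run the default check and then target a specific service. The `--ops-service` value shown here is an assumption; see the linked examples for the values that your extension version supports:
+
+```azurecli
+# Run the default health check
+az iot ops check
+
+# Check a specific service (the value is an assumption; consult the linked examples)
+az iot ops check --ops-service opcua
+```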
+
+#### [Azure portal](#tab/portal)
+
+You can view your Azure IoT Operations instance in the Azure portal.
+
+1. In the [Azure portal](https://portal.azure.com), go to the resource group that contains your Azure IoT Operations instance, or search for and select **Azure IoT Operations**.
+
+1. Select the name of your Azure IoT Operations instance.
+
+1. On the **Overview** page of your instance, the **Arc extensions** table displays the resources that were deployed to your cluster.
+
+ :::image type="content" source="../get-started-end-to-end-sample/media/quickstart-deploy/view-instance.png" alt-text="Screenshot that shows the Azure IoT Operations instance on your Arc-enabled cluster." lightbox="../get-started-end-to-end-sample/media/quickstart-deploy/view-instance.png":::
+++
+### Update instance tags and description
+
+#### [Azure CLI](#tab/cli)
+
+Use the `az iot ops update` command to edit the tags and description parameters of your Azure IoT Operations instance. The values provided in the `update` command replace any existing tags or description.
+
+```azurecli
+az iot ops update --name <INSTANCE_NAME> --resource-group <RESOURCE_GROUP> --desc "<INSTANCE_DESCRIPTION>" --tags <TAG_NAME>=<TAG-VALUE> <TAG_NAME>=<TAG-VALUE>
+```
+
+To delete all tags on an instance, set the tags parameter to a null value. For example:
+
+```azurecli
+az iot ops update --name <INSTANCE_NAME> --resource-group <RESOURCE_GROUP> --tags ""
+```
+
+#### [Azure portal](#tab/portal)
+
+1. In the [Azure portal](https://portal.azure.com), go to the resource group that contains your Azure IoT Operations instance, or search for and select **Azure IoT Operations**.
+
+1. Select the name of your Azure IoT Operations instance.
+
+1. On the **Overview** page of your instance, select **Add tags** or **edit** to modify tags on your instance.
+++
+## Uninstall
+
+The Azure CLI and Azure portal offer different options for uninstalling Azure IoT Operations.
+
+The Azure portal steps can delete an Azure IoT Operations instance, but they don't remove the related resources in the deployment. If you want to delete the entire deployment, use the Azure CLI.
+
+### [Azure CLI](#tab/cli)
+
+Use the [az iot ops delete](/cli/azure/iot/ops#az-iot-ops-delete) command to delete the entire Azure IoT Operations deployment from a cluster. The `delete` command evaluates the Azure IoT Operations related resources on the cluster and presents a tree view of the resources to be deleted. The cluster should be online when you run this command.
+
+The `delete` command streamlines the redeployment of Azure IoT Operations to the same cluster. It undoes the `create` command so that you can run `create`, `delete`, `create` again and so on without having to rerun `init`.
+
+The `delete` command removes:
+
+* The Azure IoT Operations instance
+* Arc extensions
+* Custom locations
+* Resource sync rules
+* Resources that you can configure in your Azure IoT Operations solution, like assets, MQTT broker, and dataflows.
+
+```azurecli
+az iot ops delete --name <INSTANCE_NAME> --resource-group <RESOURCE_GROUP>
+```
+
+To delete the instance and also remove the Azure IoT Operations dependencies (the output of `init`), add the flag `--include-deps`.
+
+```azurecli
+az iot ops delete --name <INSTANCE_NAME> --resource-group <RESOURCE_GROUP> --include-deps
+```
+
+### [Azure portal](#tab/portal)
+
+1. In the [Azure portal](https://portal.azure.com), go to the resource group that contains your Azure IoT Operations instance, or search for and select **Azure IoT Operations**.
+
+1. Select the name of your Azure IoT Operations instance.
+
+1. On the **Overview** page of your instance, select **Delete** your instance.
+
+1. Review the list of resources that are and aren't deleted as part of this operation, then type the name of your instance and select **Delete** to confirm.
+
+ :::image type="content" source="./media/howto-deploy-iot-operations/delete-instance.png" alt-text="A screenshot that shows deleting an Azure IoT Operations instance in the Azure portal.":::
+++
+## Update
+
+Currently, there's no support for updating an existing Azure IoT Operations deployment. Instead, uninstall and redeploy a new version of Azure IoT Operations.
+
+1. Use the [az iot ops delete](/cli/azure/iot/ops#az-iot-ops-delete) command to delete the Azure IoT Operations deployment on your cluster.
+
+ ```azurecli
+ az iot ops delete --cluster <CLUSTER_NAME> --resource-group <RESOURCE_GROUP>
+ ```
+
+1. Update the CLI extension to get the latest Azure IoT Operations version.
+
+ ```azurecli
+ az extension update --name azure-iot-ops
+ ```
+
+1. Follow the steps in this article to deploy the newest version of Azure IoT Operations to your cluster.
+
+ >[!TIP]
+ >Add the `--ensure-latest` flag to the `az iot ops init` command to check that the latest Azure IoT Operations CLI version is installed and raise an error if an upgrade is available.
iot-operations Howto Prepare Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/deploy-iot-ops/howto-prepare-cluster.md
Title: Prepare your Kubernetes cluster description: Prepare an Azure Arc-enabled Kubernetes cluster before you deploy Azure IoT Operations. This article includes guidance for both Ubuntu and Windows machines.--++ Previously updated : 07/22/2024 Last updated : 10/02/2024 #CustomerIntent: As an IT professional, I want prepare an Azure-Arc enabled Kubernetes cluster so that I can deploy Azure IoT Operations to it.
Last updated 07/22/2024
[!INCLUDE [public-preview-note](../includes/public-preview-note.md)]
-An Azure Arc-enabled Kubernetes cluster is a prerequisite for deploying Azure IoT Operations Preview. This article describes how to prepare an Azure Arc-enabled Kubernetes cluster before you [Deploy Azure IoT Operations Preview to an Arc-enabled Kubernetes cluster](howto-deploy-iot-operations.md) to run your own workloads. This article includes guidance for both Ubuntu, Windows, and cloud environments.
+An Azure Arc-enabled Kubernetes cluster is a prerequisite for deploying Azure IoT Operations Preview. This article describes how to prepare a cluster before you [Deploy Azure IoT Operations Preview to an Arc-enabled Kubernetes cluster](howto-deploy-iot-operations.md). This article includes guidance for both Ubuntu and Windows.
> [!TIP]
-> If you want to deploy Azure IoT Operations and run a sample workload, see the [Quickstart: Run Azure IoT Operations Preview in Github Codespaces with K3s](../get-started-end-to-end-sample/quickstart-deploy.md).
-
+> The steps in this article prepare your cluster for a secure settings deployment, which is a longer but production-ready process. If you want to deploy Azure IoT Operations quickly and run a sample workload with only test settings, see the [Quickstart: Run Azure IoT Operations Preview in Github Codespaces with K3s](../get-started-end-to-end-sample/quickstart-deploy.md) instead.
+>
+> For more information about test settings and secure settings, see [Deployment details > Choose your features](./overview-deploy.md#choose-your-features).
## Prerequisites
-To prepare your Azure Arc-enabled Kubernetes cluster, you need:
+Azure IoT Operations should work on any Arc-enabled Kubernetes cluster that meets the [Azure Arc-enabled Kubernetes system requirements](/azure/azure-arc/kubernetes/system-requirements). Currently Azure IoT Operations doesn't support Arm64 architectures.
+
+Microsoft supports Azure Kubernetes Service (AKS) Edge Essentials for deployments on Windows and K3s for deployments on Ubuntu. For a list of specific hardware and software combinations that are tested and validated, see [Validated environments](../overview-iot-operations.md#validated-environments).
-* Hardware that meets the [system requirements](/azure/azure-arc/kubernetes/system-requirements).
+If you want to deploy Azure IoT Operations to a multi-node solution, use K3s on Ubuntu.
+
+To prepare your Azure Arc-enabled Kubernetes cluster, you need:
### [AKS Edge Essentials](#tab/aks-edge-essentials) * An Azure subscription. If you don't have an Azure subscription, [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-* Azure CLI version 2.46.0 or newer installed on your development machine. Use `az --version` to check your version and `az upgrade` to update if necessary. For more information, see [How to install the Azure CLI](/cli/azure/install-azure-cli).
+* Azure CLI version 2.64.0 or newer installed on your development machine. Use `az --version` to check your version and `az upgrade` to update if necessary. For more information, see [How to install the Azure CLI](/cli/azure/install-azure-cli).
-* The Azure IoT Operations extension for Azure CLI. Use the following command to add the extension or update it to the latest version:
+* The latest version of the Azure IoT Operations extension for Azure CLI. Use the following command to add the extension or update it to the latest version:
```bash az extension add --upgrade --name azure-iot-ops
To prepare your Azure Arc-enabled Kubernetes cluster, you need:
* Hardware that meets the system requirements:
- * Ensure that your machine has a minimum of 10-GB RAM, 4 vCPUs, and 40-GB free disk space.
- * Review the [AKS Edge Essentials requirements and support matrix](/azure/aks/hybrid/aks-edge-system-requirements).
- * Review the [AKS Edge Essentials networking guidance](/azure/aks/hybrid/aks-edge-concept-networking).
+ * Ensure that your machine has a minimum of 16-GB available RAM, 8 available vCPUs, and 52-GB free disk space reserved for Azure IoT Operations.
+ * [Azure Arc-enabled Kubernetes system requirements](/azure/azure-arc/kubernetes/system-requirements).
+ * [AKS Edge Essentials requirements and support matrix](/azure/aks/hybrid/aks-edge-system-requirements).
+ * [AKS Edge Essentials networking guidance](/azure/aks/hybrid/aks-edge-concept-networking).
+
+* If you're going to deploy Azure IoT Operations to a multi-node cluster with fault tolerance enabled, review the hardware and storage requirements in [Prepare Linux for Edge Volumes](/azure/azure-arc/container-storage/prepare-linux-edge-volumes).
### [Ubuntu](#tab/ubuntu) * An Azure subscription. If you don't have an Azure subscription, [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-* Azure CLI version 2.46.0 or newer installed on your development machine. Use `az --version` to check your version and `az upgrade` to update if necessary. For more information, see [How to install the Azure CLI](/cli/azure/install-azure-cli).
+* Azure CLI version 2.64.0 or newer installed on your development machine. Use `az --version` to check your version and `az upgrade` to update if necessary. For more information, see [How to install the Azure CLI](/cli/azure/install-azure-cli).
-* The Azure IoT Operations extension for Azure CLI. Use the following command to add the extension or update it to the latest version:
+* The latest version of the Azure IoT Operations extension for Azure CLI. Use the following command to add the extension or update it to the latest version:
```bash az extension add --upgrade --name azure-iot-ops ```
-* Review the [K3s requirements](https://docs.k3s.io/installation/requirements).
-
-Azure IoT Operations also works on Ubuntu in Windows Subsystem for Linux (WSL) on your Windows machine. Use WSL for testing and development purposes only.
-
-To set up your WSL Ubuntu environment:
-
-1. [Install Linux on Windows with WSL](/windows/wsl/install).
-
-1. Enable `systemd`:
-
- ```bash
- sudo -e /etc/wsl.conf
- ```
-
- Add the following to _wsl.conf_ and then save the file:
-
- ```text
- [boot]
- systemd=true
- ```
-
-1. After you enable `systemd`, [re-enable running windows executables from WSL](https://github.com/microsoft/WSL/issues/8843):
-
- ```bash
- sudo sh -c 'echo :WSLInterop:M::MZ::/init:PF > /usr/lib/binfmt.d/WSLInterop.conf'
- sudo systemctl unmask systemd-binfmt.service
- sudo systemctl restart systemd-binfmt
- sudo systemctl mask systemd-binfmt.service
- ```
-
-### [Codespaces](#tab/codespaces)
-
-* An Azure subscription. If you don't have an Azure subscription, [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-
-* A [GitHub](https://github.com) account.
+* Hardware that meets the system requirements:
-* Visual Studio Code installed on your development machine. For more information, see [Download Visual Studio Code](https://code.visualstudio.com/download).
+ * Ensure that your machine has a minimum of 16-GB available RAM and 8 available vCPUs reserved for Azure IoT Operations.
+ * [Azure Arc-enabled Kubernetes system requirements](/azure/azure-arc/kubernetes/system-requirements).
+ * [K3s requirements](https://docs.k3s.io/installation/requirements).
## Create a cluster
-This section provides steps to prepare and Arc-enable clusters in validated environments on Linux and Windows as well as GitHub Codespaces in the cloud.
+This section provides steps to create clusters in validated environments on Linux and Windows.
### [AKS Edge Essentials](#tab/aks-edge-essentials)
-[Azure Kubernetes Service Edge Essentials](/azure/aks/hybrid/aks-edge-overview) is an on-premises Kubernetes implementation of Azure Kubernetes Service (AKS) that automates running containerized applications at scale. AKS Edge Essentials includes a Microsoft-supported Kubernetes platform that includes a lightweight Kubernetes distribution with a small footprint and simple installation experience, making it easy for you to deploy Kubernetes on PC-class or "light" edge hardware.
+[Azure Kubernetes Service Edge Essentials](/azure/aks/hybrid/aks-edge-overview) is an on-premises Kubernetes implementation of Azure Kubernetes Service (AKS) that automates running containerized applications at scale. AKS Edge Essentials includes a Microsoft-supported Kubernetes platform that includes a lightweight Kubernetes distribution with a small footprint and simple installation experience that supports PC-class or "light" edge hardware.
-The [AksEdgeQuickStartForAio.ps1](https://github.com/Azure/AKS-Edge/blob/main/tools/scripts/AksEdgeQuickStart/AksEdgeQuickStartForAio.ps1) script automates the the process of creating and connecting a cluster, and is the recommended path for deploying Azure IoT Operations on AKS Edge Essentials.
+The [AksEdgeQuickStartForAio.ps1](https://github.com/Azure/AKS-Edge/blob/main/tools/scripts/AksEdgeQuickStart/AksEdgeQuickStartForAio.ps1) script automates the process of creating and connecting a cluster, and is the recommended path for deploying Azure IoT Operations on AKS Edge Essentials.
1. Open an elevated PowerShell window and change the directory to a working folder.
+1. Get the `objectId` of the Microsoft Entra ID application that the Azure Arc service uses in your tenant.
+
+ ```azurecli
+ az ad sp show --id bc313c14-388c-4e7d-a58e-70017303ee3b --query id -o tsv
+ ```
+ 1. Run the following commands, replacing the placeholder values with your information: | Placeholder | Value |
The [AksEdgeQuickStartForAio.ps1](https://github.com/Azure/AKS-Edge/blob/main/to
| RESOURCE_GROUP_NAME | The name of an existing resource group or a name for a new resource group to be created. | | LOCATION | An Azure region close to you. For the list of currently supported Azure regions, see [Supported regions](../overview-iot-operations.md#supported-regions). | | CLUSTER_NAME | A name for the new cluster to be created. |
+ | ARC_APP_OBJECT_ID | The object ID value that you retrieved in the previous step. |
```powershell $url = "https://raw.githubusercontent.com/Azure/AKS-Edge/main/tools/scripts/AksEdgeQuickStart/AksEdgeQuickStartForAio.ps1" Invoke-WebRequest -Uri $url -OutFile .\AksEdgeQuickStartForAio.ps1 Unblock-File .\AksEdgeQuickStartForAio.ps1 Set-ExecutionPolicy -ExecutionPolicy Bypass -Scope Process -Force
- .\AksEdgeQuickStartForAio.ps1 -SubscriptionId "<SUBSCRIPTION_ID>" -TenantId "<TENANT_ID>" -ResourceGroupName "<RESOURCE_GROUP_NAME>" -Location "<LOCATION>" -ClusterName "<CLUSTER_NAME>"
+ .\AksEdgeQuickStartForAio.ps1 -SubscriptionId "<SUBSCRIPTION_ID>" -TenantId "<TENANT_ID>" -ResourceGroupName "<RESOURCE_GROUP_NAME>" -Location "<LOCATION>" -ClusterName "<CLUSTER_NAME>" -CustomLocationOid "<ARC_APP_OBJECT_ID>"
``` If there are any issues during deployment, including if your machine reboots as part of this process, run the whole set of commands again.
The [AksEdgeQuickStartForAio.ps1](https://github.com/Azure/AKS-Edge/blob/main/to
### [Ubuntu](#tab/ubuntu)
-Azure IoT Operations should work on any CNCF-conformant kubernetes cluster. For Ubuntu Linux, Microsoft currently supports K3s clusters.
-
-> [!IMPORTANT]
-> If you're using Ubuntu in Windows Subsystem for Linux (WSL), run all of these steps in your WSL environment, including the Azure CLI steps for configuring your cluster.
- To prepare a K3s Kubernetes cluster on Ubuntu:
-1. Run the K3s installation script:
+1. Install K3s following the instructions in the [K3s quick-start guide](https://docs.k3s.io/quick-start).
+
+1. Check to see that kubectl was installed as part of K3s. If not, follow the instructions to [Install kubectl on Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/).
```bash
- curl -sfL https://get.k3s.io | sh -
+ kubectl version --client
```
- For full installation information, see the [K3s quick-start guide](https://docs.k3s.io/quick-start).
+1. Follow the instructions to [Install Helm](https://helm.sh/docs/intro/install/).
1. Create a K3s configuration yaml file in `.kube/config`:
To prepare a K3s Kubernetes cluster on Ubuntu:
export KUBECONFIG=~/.kube/config #switch to k3s context kubectl config use-context default
+ sudo chmod 644 /etc/rancher/k3s/k3s.yaml
``` 1. Run the following command to increase the [user watch/instance limits](https://www.suse.com/support/kb/doc/?id=000020048).
To prepare a K3s Kubernetes cluster on Ubuntu:
sudo sysctl -p ```
-### [Codespaces](#tab/codespaces)
+### Configure multi-node clusters for Azure Container Storage
+
+On multi-node clusters with at least three nodes, you have the option of enabling fault tolerance for storage with [Azure Container Storage enabled by Azure Arc](/azure/azure-arc/container-storage/overview) when you deploy Azure IoT Operations. If you want to enable that option, prepare your multi-node cluster with the following steps (an optional verification sketch follows them):
+
+1. Install the required NVME over TCP module for your kernel using the following command:
+
+ ```bash
+ sudo apt install linux-modules-extra-`uname -r`
+ ```
+
+ > [!NOTE]
+ > The minimum supported Linux kernel version is 5.1. At this time, there are known issues with 6.4 and 6.2. For the latest information, refer to [Azure Container Storage release notes](/azure/azure-arc/edge-storage-accelerator/release-notes)
-> [!IMPORTANT]
-> Codespaces are easy to set up quickly and tear down later, but they're not suitable for performance evaluation or scale testing. Use GitHub Codespaces for exploration only.
+1. On each node in your cluster, set the number of **HugePages** to 512 using the following command:
+ ```bash
+ HUGEPAGES_NR=512
+ echo $HUGEPAGES_NR | sudo tee /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
+ echo "vm.nr_hugepages=$HUGEPAGES_NR" | sudo tee /etc/sysctl.d/99-hugepages.conf
+ ```
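+
+After you complete these steps, you can optionally confirm that the module is available and that the HugePages reservation took effect. This verification sketch uses standard Linux commands and isn't part of the original procedure:
+
+```bash
+# Load the NVMe over TCP module and confirm it's present (module name assumed to be nvme_tcp)
+sudo modprobe nvme_tcp
+lsmod | grep nvme_tcp
+
+# Confirm the HugePages reservation
+grep HugePages_Total /proc/meminfo
+```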
The **AksEdgeQuickStartForAio.ps1** script that you ran in the previous section
To connect your cluster to Azure Arc:
-1. On the machine where you deployed the Kubernetes cluster, or in your WSL environment, sign in with Azure CLI:
+1. On the machine where you deployed the Kubernetes cluster, sign in with Azure CLI:
```azurecli az login ```
-1. Set environment variables for your Azure subscription, location, a new resource group, and the cluster name as it will show up in your resource group.
+ If at any point you get an error that says *Your device is required to be managed to access your resource*, run `az login` again and make sure that you sign in interactively with a browser.
+
+1. Set environment variables for your Azure subscription, location, a new resource group, and the cluster name as you want it to show up in your resource group.
For the list of currently supported Azure regions, see [Supported regions](../overview-iot-operations.md#supported-regions).
To connect your cluster to Azure Arc:
export CLUSTER_NAME=<NEW_CLUSTER_NAME> ```
-1. Set the Azure subscription context for all commands:
-
- ```azurecli
- az account set -s $SUBSCRIPTION_ID
- ```
-
-1. Register the required resource providers in your subscription:
-
- >[!NOTE]
- >This step only needs to be run once per subscription. To register resource providers, you need permission to do the `/register/action` operation, which is included in subscription Contributor and Owner roles. For more information, see [Azure resource providers and types](../../azure-resource-manager/management/resource-providers-and-types.md).
-
- ```azurecli
- az provider register -n "Microsoft.ExtendedLocation"
- az provider register -n "Microsoft.Kubernetes"
- az provider register -n "Microsoft.KubernetesConfiguration"
- az provider register -n "Microsoft.IoTOperationsOrchestrator"
- az provider register -n "Microsoft.IoTOperations"
- az provider register -n "Microsoft.DeviceRegistry"
- ```
-
-1. Use the [az group create](/cli/azure/group#az-group-create) command to create a resource group in your Azure subscription to store all the resources:
-
- ```azurecli
- az group create --location $LOCATION --resource-group $RESOURCE_GROUP --subscription $SUBSCRIPTION_ID
- ```
-
-1. Use the [az connectedk8s connect](/cli/azure/connectedk8s#az-connectedk8s-connect) command to Arc-enable your Kubernetes cluster and manage it as part of your Azure resource group:
-
- ```azurecli
- az connectedk8s connect -n $CLUSTER_NAME -l $LOCATION -g $RESOURCE_GROUP --subscription $SUBSCRIPTION_ID
- ```
-
-1. Get the `objectId` of the Microsoft Entra ID application that the Azure Arc service uses and save it as an environment variable.
-
- ```azurecli
- export OBJECT_ID=$(az ad sp show --id bc313c14-388c-4e7d-a58e-70017303ee3b --query id -o tsv)
- ```
-
-1. Use the [az connectedk8s enable-features](/cli/azure/connectedk8s#az-connectedk8s-enable-features) command to enable custom location support on your cluster. This command uses the `objectId` of the Microsoft Entra ID application that the Azure Arc service uses. Run this command on the machine where you deployed the Kubernetes cluster:
-
- ```azurecli
- az connectedk8s enable-features -n $CLUSTER_NAME -g $RESOURCE_GROUP --custom-locations-oid $OBJECT_ID --features cluster-connect custom-locations
- ```
-
-### [Codespaces](#tab/codespaces)
-
To verify that your cluster is ready for Azure IoT Operations deployment, you ca
az iot ops verify-host ```
-To verify that your Kubernetes cluster is now Azure Arc-enabled, run the following command:
+To verify that your Kubernetes cluster is Azure Arc-enabled, run the following command:
```console kubectl get deployments,pods -n azure-arc
pod/resource-sync-agent-769bb66b79-z9n46 2/2 Running 0
pod/metrics-agent-6588f97dc-455j8 2/2 Running 0 10m ```
-## Create sites
-
-A _site_ is a collection of Azure IoT Operations instances. Sites typically group instances by physical location and make it easier for OT users to locate and manage assets. An IT administrator creates sites and assigns Azure IoT Operations instances to them. To learn more, see [What is Azure Arc site manager (preview)?](/azure/azure-arc/site-manager/overview).
- ## Next steps Now that you have an Azure Arc-enabled Kubernetes cluster, you can [deploy Azure IoT Operations](howto-deploy-iot-operations.md).
iot-operations Overview Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/deploy-iot-ops/overview-deploy.md
+
+ Title: Deployment overview
+description: Learn about the components that are included in an Azure IoT Operations deployment and the different deployment options to consider for your scenario.
++++ Last updated : 10/02/2024+
+#CustomerIntent: As an IT professional, I want to understand the components and deployment details before I start using Azure IoT Operations.
++
+# Deployment details
++
+## Supported environments
+
+Azure IoT Operations should work on any Arc-enabled Kubernetes cluster that meets the [Azure Arc-enabled Kubernetes system requirements](/azure/azure-arc/kubernetes/system-requirements). Currently Azure IoT Operations doesn't support Arm64 architectures.
+
+Microsoft supports Azure Kubernetes Service (AKS) Edge Essentials for deployments on Windows and K3s for deployments on Ubuntu. For a list of specific hardware and software combinations that are tested and validated, see [Validated environments](../overview-iot-operations.md#validated-environments).
+
+## Choose your features
+
+Azure IoT Operations offers two deployment modes. You can choose to deploy with *test settings*, a basic subset of features that are simpler to get started with for evaluation scenarios. Or, you can choose to deploy with *secure settings*, the full feature set.
+
+### Test settings deployment
+
+A deployment with only test settings enabled:
+
+* Doesn't configure secrets or user-assigned managed identity capabilities.
+* Is meant to enable the end-to-end quickstart sample for evaluation purposes, so it supports the OPC PLC simulator and connecting to cloud resources by using a system-assigned managed identity.
+* Can be upgraded to use secure settings.
+
+To deploy Azure IoT Operations with test settings, you can use the steps in [Quickstart: Run Azure IoT Operations Preview in GitHub Codespaces](../get-started-end-to-end-sample/quickstart-deploy.md). Or, to deploy with test settings on AKS Edge Essentials or K3s on Ubuntu, follow the secure settings deployment articles and stop at the optional secure settings steps.
+
+If you want to upgrade your Azure IoT Operations instance to use secure settings, follow the steps in [Enable secure settings](./howto-enable-secure-settings.md).
+
+### Secure settings deployment
+
+A deployment with secure settings enabled:
+
+* Includes the steps to enable secrets and user-assigned managed identity, which are important capabilities for developing a production-ready scenario. Secrets are used whenever Azure IoT Operations components connect to a resource outside of the cluster; for example, an OPC UA server or a dataflow endpoint.
+
+To deploy Azure IoT Operations with secure settings, follow these articles:
+
+1. Start with [Prepare your Azure Arc-enabled Kubernetes cluster](./howto-prepare-cluster.md) to configure and Arc-enable your cluster.
+1. Then, [Deploy Azure IoT Operations Preview](./howto-deploy-iot-operations.md).
+
+## Required permissions
+
+The following table describes Azure IoT Operations deployment and management tasks that require elevated permissions. For information about assigning roles to users, see [Steps to assign an Azure role](../../role-based-access-control/role-assignments-steps.md).
+
+| Task | Required permission | Comments |
+| - | - | -- |
+| Deploy Azure IoT Operations | **Contributor** role at the subscription level. | |
+| Register resource providers | **Contributor** role at the subscription level. | Only required once per subscription. |
+| Create a schema registry | **Microsoft/Authorization/roleAssignments/write** permissions at the resource group level. | |
+| Create secrets in Key Vault | **Key Vault Secrets Officer** role at the resource level. | Only required for secure settings deployment. |
+| Enable resource sync rules on an Azure IoT Operations instance | **Microsoft/Authorization/roleAssignments/write** permissions at the resource group level. | Resource sync rules are disabled by default, but can be enabled during instance creation. |
+
+If you use the Azure CLI to assign roles, use the [az role assignment create](/cli/azure/role/assignment#az-role-assignment-create) command to give permissions. For example, `az role assignment create --assignee sp_name --role "Role Based Access Control Administrator" --scope /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/MyResourceGroup`.
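+
+For example, the following sketch grants the **Key Vault Secrets Officer** role on a specific key vault. The resource names are placeholders, not values from this article:
+
+```azurecli
+az role assignment create \
+  --assignee "<USER_OR_PRINCIPAL_ID>" \
+  --role "Key Vault Secrets Officer" \
+  --scope "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP>/providers/Microsoft.KeyVault/vaults/<KEY_VAULT_NAME>"
+```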
+
+If you use the Azure portal to assign privileged admin roles to a user or principal, you're prompted to restrict access using conditions. For this scenario, select the **Allow user to assign all roles** condition in the **Add role assignment** page.
++
+## Included components
+
+Azure IoT Operations is a suite of data services that run on Azure Arc-enabled edge Kubernetes clusters. It also depends on a set of supporting services that are installed as part of the deployment.
+
+* Azure IoT Operations core services
+ * Dataflows
+ * MQTT Broker
+ * Connector for OPC UA
+ * Akri
+
+* Installed dependencies
+ * [Azure Device Registry](../discover-manage-assets/overview-manage-assets.md#store-assets-as-azure-resources-in-a-centralized-registry)
+ * [Azure Container Storage enabled by Azure Arc](/azure/azure-arc/container-storage/overview)
+ * Secret Sync Controller
+
+## Organize instances by using sites
+
+Azure IoT Operations supports Azure Arc sites for organizing instances. A _site_ is a cluster resource in Azure, like a resource group, but sites typically group instances by physical location and make it easier for OT users to locate and manage assets. An IT administrator creates sites and scopes them to a subscription or resource group. Then, any Azure IoT Operations instance deployed to an Arc-enabled cluster is automatically collected in the site associated with its subscription or resource group.
+
+For more information, see [What is Azure Arc site manager (preview)?](/azure/azure-arc/site-manager/overview)
+
+## Domain allowlist for Azure IoT Operations
+
+If you use enterprise firewalls or proxies to manage outbound traffic, add the following endpoints to your domain allowlist before deploying Azure IoT Operations Preview.
+
+Additionally, allow the Arc-enabled Kubernetes endpoints in [Azure Arc network requirements](/azure/azure-arc/network-requirements-consolidated).
+
+```text
+nw-umwatson.events.data.microsoft.com
+dc.services.visualstudio.com
+github.com
+self.events.data.microsoft.com
+mirror.enzu.com
+ppa.launchpadcontent.net
+msit-onelake.pbidedicated.windows.net
+gcr.io
+adhs.events.data.microsoft.com
+gbl.his.arc.azure.cn
+onegetcdn.azureedge.net
+graph.windows.net
+pas.windows.net
+agentserviceapi.guestconfiguration.azure.com
+aka.ms
+api.segment.io
+download.microsoft.com
+raw.githubusercontent.com
+go.microsoft.com
+global.metrics.azure.eaglex.ic.gov
+gbl.his.arc.azure.us
+packages.microsoft.com
+global.metrics.azure.microsoft.scloud
+www.powershellgallery.com
+k8s.io
+guestconfiguration.azure.com
+ods.opinsights.azure.com
+vault.azure.net
+googleapis.com
+quay.io
+handler.control.monitor.azure.com
+pkg.dev
+docker.io
+prod.hot.ingestion.msftcloudes.com
+docker.com
+prod.microsoftmetrics.com
+oms.opinsights.azure.com
+azureedge.net
+monitoring.azure.com
+blob.core.windows.net
+azurecr.io
+```
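+
+As an optional spot-check (not part of the original guidance), you can probe a few of these endpoints from a node behind the firewall or proxy to confirm outbound access:
+
+```bash
+# Report the HTTP status code returned by a handful of the allowlisted hosts
+for host in packages.microsoft.com download.microsoft.com raw.githubusercontent.com; do
+  curl -sS -o /dev/null -w "%{http_code} $host\n" "https://$host" || echo "failed $host"
+done
+```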
+
+## Next steps
+
+[Prepare your Azure Arc-enabled Kubernetes cluster](./howto-prepare-cluster.md) to configure and Arc-enable a cluster for Azure IoT Operations.
iot-operations Concept Akri Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/discover-manage-assets/concept-akri-architecture.md
- Title: Akri services architecture
-description: Understand the key components in the Akri services architecture and how they relate to each other. Includes some information about the CNCF version of Akri
-----
- - ignite-2023
Previously updated : 05/13/2024-
-# CustomerIntent: As an industrial edge IT or operations user, I want to understand the key components in the Akri services architecture so that I understand how it works to enable device and asset discovery for my edge solution.
--
-# Akri services architecture
--
-This article helps you understand the architecture of the Akri services. After you learn about the core components of the Akri services, you can use them to detect devices and assets, and add them to your Kubernetes cluster.
-
-The Akri services are a Microsoft-managed commercial version of [Akri](https://docs.akri.sh/), an open-source Cloud Native Computing Foundation (CNCF) project.
-
-## Core components
-
-The Akri services consist of the following five components:
--- **Akri configuration** is a custom resource where you name a device. This configuration tells the Akri services what kind of devices to look for.-- **Akri instance** is a custom resource that tracks the availability and usage of a device. Each Akri instance represents a leaf device.-- **Akri discovery handlers** look for the configured device and inform the agent about discovered devices.-- **Akri agent** creates the Akri instance custom resource.-- **Akri controller** helps you to use a configured device. The controller sees each Akri instance and deploys a broker pod that knows how to connect to and use the resource.--
-## Custom resource definitions
-
-A custom resource definition (CRD) is a Kubernetes API extension that lets you define new object types. There are two Akri services CRDs:
--- Configuration-- Instance-
-### Akri configuration CRD
-
-The configuration CRD configures the Akri services. You create configurations that describe the resources to discover and the pod to deploy on a node that discovers a resource. To learn more, see [Akri configuration CRD](https://github.com/project-akri/akri/blob/main/deployment/helm/crds/akri-configuration-crd.yaml). The CRD schema specifies the settings all configurations must have, including the following settings:
--- The discovery protocol for finding resources. For example, ONVIF or udev.-- `spec.capacity` that defines the maximum number of nodes that can schedule workloads on this resource.-- `spec.brokerPodSpec` that defines the broker pod to schedule for each of these reported resources.-- `spec.instanceServiceSpec` that defines the service that provides a single stable endpoint to access each individual resource's set of broker pods.-- `spec.configurationServiceSpec` that defines the service that provides a single stable endpoint to access the set of all brokers for all resources associated with the configuration.-
-### Akri instance CRD
-
-Each Akri instance represents an individual resource that's visible to the cluster. For example, if there are five IP cameras visible to the cluster, there are five instances. The instance CRD enables Akri services coordination and resource sharing. These instances store internal state and aren't intended for you to edit. To learn more, see [Resource sharing in-depth](https://docs.akri.sh/architecture/resource-sharing-in-depth).
-
-## Agent
-
-The Akri agent implements [Kubernetes Device-Plugins](https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/) for discovered resources. The Akri Agent performs the following tasks:
--- It watches for configuration changes to determine the resources to search for.-- It monitors resource availability to determine what resources to advertise. In an edge environment, resource availability changes often.-- It informs Kubernetes of any changes to resource health and availability.-
-These tasks, combined with the state stored in the instance, enable multiple nodes to share a resource while respecting the limits defined by the `spec.capacity` setting.
-
-To learn more, see [Agent in-depth](https://docs.akri.sh/architecture/agent-in-depth).
-
-## Discovery handlers
-
-A discovery handler finds devices. Examples of device include:
--- USB sensors connected to nodes.-- GPUs embedded in nodes.-- IP cameras on the network.-
-The discovery handler reports all discovered devices to the agent. There are often protocol implementations for discovering a set of devices, whether a network protocol like OPC UA or a proprietary protocol. Discovery handlers implement the `DiscoveryHandler` service defined in [`discovery.proto`](https://github.com/project-akri/akri/blob/main/discovery-utils/proto/discovery.proto). A discovery handler is required to register with the agent, which hosts the `Registration` service defined in [`discovery.proto`](https://github.com/project-akri/akri/blob/main/discovery-utils/proto/discovery.proto).
-
-To learn more, see [Custom Discovery Handlers](https://docs.akri.sh/development/handler-development).
-
-## Controller
-
-The goals of the Akri controller are to:
--- Create or delete the pods and services that enable resource availability.-- Ensure that instances are aligned to the cluster state at any given moment.-
-To achieve these goals, the controller:
--- Watches out for instance changes to determine what pods and services should exist.-- Watches for nodes that are contained in instances that no longer exist.-
-These tasks enable the Akri controller to ensure that protocol brokers and Kubernetes services are running on all nodes and exposing the desired resources, while respecting the limits defined by the `spec.capacity` setting.
-
-For more information, see the documentation for [Controller In-depth](https://docs.akri.sh/architecture/controller-in-depth).
iot-operations Concept Assets Asset Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/discover-manage-assets/concept-assets-asset-endpoints.md
+
+ Title: Understand assets and asset endpoint profiles
+description: Understand the Azure Device Registry resources that define assets and asset endpoint profiles.
++
+#
+ Last updated : 08/27/2024+
+# CustomerIntent: As an industrial edge IT or operations user, I want to understand the types of Azure resources that are created by Azure Device Registry to manage assets.
++
+# Define assets and asset endpoints
++
+Azure IoT Operations Preview uses Azure resources called assets and asset endpoints to connect and manage components of your industrial edge environment.
+
+Historically, in industrial edge environments the term *asset* refers to any item of value that you want to manage, monitor, and collect data from. An asset can be a machine, a software component, an entire system, or a physical object of value such as a field of crops or a building. These assets are examples that exist in manufacturing, retail, energy, healthcare, and other sectors.
+
+In Azure IoT Operations, you can create an *asset* in the cloud to represent an asset in your industrial edge environment. An Azure IoT Operations asset can emit data. Southbound connectors in your IoT Operations instance collect this data and publish it to an MQTT topic, where dataflows can pick it up and route it.
+
+## Cloud and edge resources
+
+Azure Device Registry Preview registers assets and asset endpoints as Azure resources, enabled by Azure Arc. Device registry also syncs these cloud resources to the edge as [Kubernetes custom resources](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/).
+
+You can create, edit, and delete asset endpoints and assets by using the Azure IoT Operations CLI extension or the operations experience web UI. For more information, see [Manage asset configurations remotely](./howto-manage-assets-remotely.md).
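+
+To see the synced copies on a cluster, you can also query the custom resources directly with kubectl. The resource names below are assumptions about how the Device Registry custom resource definitions are exposed; adjust them to match your cluster:
+
+```bash
+# List the asset and asset endpoint profile custom resources synced to the cluster (names assumed)
+kubectl get assets,assetendpointprofiles -n azure-iot-operations
+```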
+
+## Asset endpoints
+
+Before you can create an asset, you need to define an asset endpoint profile. An *asset endpoint* is a profile that describes southbound edge connectivity information for one or more assets.
+
+Currently, the only southbound connector available in Azure IoT Operations is the connector for OPC UA. Asset endpoints are configurations for the connector for OPC UA that tell it how to connect to an OPC UA server. For more information, see [What is the connector for OPC UA?](./overview-opcua-broker.md)
+
+The following table highlights some important properties that are included in an asset endpoint definition.
+
+| Property | Description |
+| -- | -- |
+| **Cluster** or **Location** | The custom location or cluster name for the Azure IoT Operations instance where the asset endpoint custom resource will be created. In the operations experience, this property is set by choosing the instance before you create the asset endpoint. |
+| **Target address** | The local IP address of the OPC UA server. |
+| **User authentication** | Can be anonymous authentication or username/password authentication. For username/password authentication, provide pointers to where both values are stored as secrets in Azure Key Vault. |
+
+## Assets
+
+An *asset* is a logical entity that represents a device or component in the cloud as an Azure Resource Manager resource and at the edge as a Kubernetes custom resource. When you create an asset, you can define its metadata and the datapoints (also called tags) and events that it emits.
+
+Currently, an asset in Azure IoT Operations can be anything that connects to an OPC UA server.
+
+When you define an asset using either the operations experience or Azure IoT Operations CLI, you can configure *tags* and *events* for each asset.
+
+The following table highlights some important properties that are included in an asset definition.
+
+| Property | Description |
+| -- | -- |
+| **Cluster** or **Location** | The custom location or cluster name for the Azure IoT Operations instance where the asset custom resource will be created. In the operations experience, this property is set by choosing the instance before you create the asset endpoint. |
+| **Asset endpoint** | The name of the asset endpoint that this asset connects to. |
+| **Custom attributes** | Metadata about the asset that you can provide using any key=value pairs that make sense for your environment. |
+| **Tag** or **Data** | A set of key=value pairs that define a data point from the asset. |
+
+### Asset tags
+
+A *tag* is a description of a data point that can be collected from an asset. OPC UA tags provide real-time or historical data about an asset. Tags include the following properties:
+
+| Property | Description |
+| -- | -- |
+| **Node Id** | The [OPC UA node ID](https://opclabs.doc-that.com/files/onlinedocs/QuickOpc/Latest/User%27s%20Guide%20and%20Reference-QuickOPC/OPC%20UA%20Node%20IDs.html) that represents a location on the OPC UA server where the asset emits this data point. |
+| **Name** | A friendly name for the tag. |
+| **Queue size** | How much sampling data to collect before publishing it. Default: `1`. |
+| **Observability mode** | Accepted values: `none`, `gauge`, `counter`, `histogram`, `log`. |
+| **Sampling interval** | The rate in milliseconds that the OPC UA server should sample the data source for changes. Default: `500`. |
+
+### Asset events
+
+An *event* is a notification from an OPC UA server that can inform you about state changes to your asset. Events include the following properties:
+
+| Property | Description |
+| -- | -- |
+| **Event notifier** | The [OPC UA node ID](https://opclabs.doc-that.com/files/onlinedocs/QuickOpc/Latest/User%27s%20Guide%20and%20Reference-QuickOPC/OPC%20UA%20Node%20IDs.html) that represents a location on the OPC UA server where the server emits this event. |
+| **Name** | A friendly name for the event. |
+| **Observability mode** | Accepted values: `none`, `log`. |
+| **Queue size** | How much event data to collect before publishing it. Default: `1`. |
iot-operations Concept Opcua Message Format https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/discover-manage-assets/concept-opcua-message-format.md
The connector for OPC UA publishes messages from OPC UA servers to the MQTT brok
The payload of an OPC UA message is a JSON object that contains the telemetry data from the OPC UA server. The following example shows the payload of a message from the sample thermostat asset used in the quickstarts. Use the following command to subscribe to messages in the `azure-iot-operations/data` topic: ```console
-mosquitto_sub --host aio-mq-dmqtt-frontend --port 8883 --topic "azure-iot-operations/data/#" -v --debug --cafile /var/run/certs/ca.crt -D CONNECT authentication-method 'K8S-SAT' -D CONNECT authentication-data $(cat /var/run/secrets/tokens/mq-sat)
+mosquitto_sub --host aio-broker --port 18883 --topic "azure-iot-operations/data/#" -v --debug --cafile /var/run/certs/ca.crt -D CONNECT authentication-method 'K8S-SAT' -D CONNECT authentication-data $(cat /var/run/secrets/tokens/mq-sat)
``` The output from the previous command looks like the following example:
Client $server-generated/05a22b94-c5a2-4666-9c62-837431ca6f7e received PUBLISH (
The headers in the messages published by the connector for OPC UA are based on the [CloudEvents specification for OPC UA](https://github.com/cloudevents/spec/blob/main/cloudevents/extensions/opcua.md). The headers from an OPC UA message become user properties in a message published to the MQTT broker. The following example shows the user properties of a message from the sample thermostat asset used in the quickstarts. Use the following command to subscribe to messages in the `azure-iot-operations/data` topic: ```console
-mosquitto_sub --host aio-mq-dmqtt-frontend --port 8883 --topic "azure-iot-operations/data/#" -V mqttv5 -F %P --cafile /var/run/certs/ca.crt -D CONNECT authentication-method 'K8S-SAT' -D CONNECT authentication-data $(cat /var/run/secrets/tokens/mq-sat)
+mosquitto_sub --host aio-broker --port 18883 --topic "azure-iot-operations/data/#" -V mqttv5 -F %P --cafile /var/run/certs/ca.crt -D CONNECT authentication-method 'K8S-SAT' -D CONNECT authentication-data $(cat /var/run/secrets/tokens/mq-sat)
``` The output from the previous command looks like the following example:
iot-operations Howto Autodetect Opcua Assets Using Akri https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/discover-manage-assets/howto-autodetect-opcua-assets-using-akri.md
- Title: Discover OPC UA data sources using the Akri services
-description: How to discover and configure OPC UA data sources at the edge automatically by using the Akri services
---- Previously updated : 05/15/2024-
-# CustomerIntent: As an industrial edge IT or operations user, I want to discover and create OPC UA data sources in my industrial edge environment so that I can reduce manual configuration overhead.
--
-# Discover OPC UA data sources using the Akri services
--
-In this article, you learn how to discover OPC UA data sources automatically. After you deploy Azure IoT Operations Preview, you configure the Akri services to discover OPC UA data sources at the edge. The Akri services create custom resources in your Kubernetes cluster that represent the data sources it discovers. The ability to discover OPC UA data sources removes the need to [manually configure them by using the operations experience web UI](howto-manage-assets-remotely.md).
-
-> [!IMPORTANT]
-> Currently, you can't use Azure Device Registry to manage the assets that the Akri services discover and create.
-
-The Akri services enable you to detect and create assets in the address space of an OPC UA server. The OPC UA asset detection generates `AssetType` and `Asset` custom resources for [OPC UA Device Integration (DI) specification](https://reference.opcfoundation.org/DI/v104/docs/) compliant assets.
-
-## Prerequisites
--- Install Azure IoT Operations Preview. To install Azure IoT Operations for demonstration and exploration purposes, see [Quickstart: Run Azure IoT Operations Preview in Github Codespaces with K3s](../get-started-end-to-end-sample/quickstart-deploy.md).-- Verify that the Akri services pods are properly configured by running the following command:-
- ```bash
- kubectl get pods -n azure-iot-operations
- ```
-
- The output includes a line that shows the Akri agent and discovery pods are running:
-
- ```output
- NAME READY STATUS RESTARTS AGE
- aio-akri-agent-daemonset-hwpc7 1/1 Running 0 17mk0s
- aio-opc-asset-discovery-wzlnj 1/1 Running 0 8m28s
- ```
-
-## Configure the OPC UA discovery handler
-
-To configure the OPC UA discovery handler for asset detection, create a YAML configuration file that contains the values described in this section:
-
-| Name | Mandatory | Datatype | Default | Comment |
-| - | | -- | - | - |
-| `EndpointUrl` | true | String | null | The OPC UA endpoint URL to use for asset discovery |
-| `AutoAcceptUntrustedCertificates` | true ¹ | Boolean | false | Should the client autoaccept untrusted certificates? A certificate can only be autoaccepted as trusted if no nonsuppressible errors occurred during chain validation. For example, a certificate with incomplete chain isn't accepted. |
-| `UseSecurity` | true ¹ | Boolean | true | Should the client use a secure connection? |
-| `UserName` | false | String | null | The username for user authentication. ² |
-| `Password` | false | String | null | The password for user authentication. ² |
-
-¹ The current version of the discovery handler only supports `UseSecurity=false` and requires `autoAcceptUntrustedCertificates=true`.
-² A temporary implementation until the Akri services can pass Kubernetes secrets.
-
-The following example demonstrates discovery of an OPC PLC server. You can add the asset parameters for multiple OPC PLC servers.
-
-1. To create the YAML configuration file, copy and paste the following content into a new file, and save it as `opcua-configuration.yaml`:
-
- If you're using the simulated PLC server that was deployed with the Azure IoT Operations Quickstart, you don't need to change the `endpointUrl`. If you have your own OPC UA servers running or are using the simulated PLC servers deployed on Azure, add in your endpoint URL accordingly. Discovery endpoint URLs look like `opc.tcp://<FQDN>:50000/`. To find the FQDNs of your OPC PLC servers, go to your deployment in the Azure portal. For each server, copy and paste the **FQDN** value into your endpoint URLs.
-
- ```yaml
- apiVersion: akri.sh/v0
- kind: Configuration
- metadata:
- name: aio-akri-opcua-asset
- spec:
- discoveryHandler:
- name: opcua-asset
- discoveryDetails: "opcuaDiscoveryMethod:\n - asset:\n endpointUrl: \" opc.tcp://opcplc-000000:50000\"\n useSecurity: false\n autoAcceptUntrustedCertificates: true\n"
- brokerProperties: {}
- capacity: 1
- ```
-
-1. To apply the configuration, run the following command:
-
- ```bash
- kubectl apply -f opcua-configuration.yaml -n azure-iot-operations
- ```
-
-> [!TIP]
-> In a default Azure IoT Operations deployment, the OPC UA discovery handler is already configured to discover the simulated PLC server. If you want to discover assets connected to additional OPC UA servers, you can add them to the configuration file.
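For example, the following sketch applies a configuration that targets two servers. It assumes the `discoveryDetails` string accepts multiple `- asset:` entries with the same fields as the single-server example above; the second endpoint URL is a placeholder.

```bash
# Hypothetical sketch: a Configuration whose discoveryDetails string lists two OPC UA endpoints.
# Replace the second endpoint URL with the FQDN of your own server.
cat <<'EOF' | kubectl apply -n azure-iot-operations -f -
apiVersion: akri.sh/v0
kind: Configuration
metadata:
  name: aio-akri-opcua-asset
spec:
  discoveryHandler:
    name: opcua-asset
    discoveryDetails: "opcuaDiscoveryMethod:\n - asset:\n   endpointUrl: \"opc.tcp://opcplc-000000:50000\"\n   useSecurity: false\n   autoAcceptUntrustedCertificates: true\n - asset:\n   endpointUrl: \"opc.tcp://<FQDN-of-second-server>:50000\"\n   useSecurity: false\n   autoAcceptUntrustedCertificates: true\n"
  brokerProperties: {}
  capacity: 1
EOF
```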
-
-## Verify the configuration
-
-To confirm that the asset discovery container is configured and running:
-
-1. Use the following command to check the pod logs:
-
- ```bash
- kubectl logs <insert aio-opc-asset-discovery pod name> -n azure-iot-operations
- ```
-
- A log from the `aio-opc-asset-discovery` pod indicates after a few seconds that the discovery handler registered itself with the Akri agent:
-
- ```output
  2024-08-01T15:04:12.874Z aio-opc-asset-discovery-4nsgs - Akri OPC UA Asset Discovery (1.0.0-preview-20240708+702c5cafeca2ea49fec3fb4dc6645dd0d89016ee) is starting with the process id: 1
- 2024-08-01T15:04:12.948Z aio-opc-asset-discovery-4nsgs - OPC UA SDK 1.5.374.70 from 07/20/2024 07:37:16
- 2024-08-01T15:04:12.973Z aio-opc-asset-discovery-4nsgs - OPC UA SDK informational version: 1.5.374.70+1ee3beb87993019de4968597d17cb54d5a4dc3c8
- 2024-08-01T15:04:12.976Z aio-opc-asset-discovery-4nsgs - Akri agent registration enabled: True
- 2024-08-01T15:04:13.475Z aio-opc-asset-discovery-4nsgs - Hosting starting
- 2024-08-01T15:04:13.547Z aio-opc-asset-discovery-4nsgs - Overriding HTTP_PORTS '8080' and HTTPS_PORTS ''. Binding to values defined by URLS instead 'http://+:8080'.
- 2024-08-01T15:04:13.774Z aio-opc-asset-discovery-4nsgs - Now listening on: http://:8080
- 2024-08-01T15:04:13.774Z aio-opc-asset-discovery-4nsgs - Application started. Press Ctrl+C to shut down.
- 2024-08-01T15:04:13.774Z aio-opc-asset-discovery-4nsgs - Hosting environment: Production
- 2024-08-01T15:04:13.774Z aio-opc-asset-discovery-4nsgs - Content root path: /app
- 2024-08-01T15:04:13.774Z aio-opc-asset-discovery-4nsgs - Hosting started
- 2024-08-01T15:04:13.881Z aio-opc-asset-discovery-4nsgs - Registering with Agent as HTTP endpoint using own IP from the environment variable POD_IP: 10.42.0.245
- 2024-08-01T15:04:14.875Z aio-opc-asset-discovery-4nsgs - Registered with the Akri agent with name opcua-asset for http://10.42.0.245:8080 with type Network and shared True
- 2024-08-01T15:04:14.877Z aio-opc-asset-discovery-4nsgs - Successfully re-registered OPC UA Asset Discovery Handler with the Akri agent
- 2024-08-01T15:04:14.877Z aio-opc-asset-discovery-4nsgs - Press CTRL+C to exit
- ```
-
- After about a minute, the Akri services issue the first discovery request based on the configuration:
-
- ```output
- 2024-08-01T15:04:15.280Z aio-opc-asset-discovery-4nsgs [opcuabroker@311 SpanId:6d3db9751eebfadc, TraceId:e5594cbaf3993749e92b45c88c493377, ParentId:0000000000000000 ConnectionId:0HN5I7CQJPJL0 RequestPath:/v0.DiscoveryHandler/Discover RequestId:0HN5I7CQJPJL0:00000001] - Reading message.
- 2024-08-01T15:04:15.477Z aio-opc-asset-discovery-4nsgs [opcuabroker@311 SpanId:6d3db9751eebfadc, TraceId:e5594cbaf3993749e92b45c88c493377, ParentId:0000000000000000 ConnectionId:0HN5I7CQJPJL0 RequestPath:/v0.DiscoveryHandler/Discover RequestId:0HN5I7CQJPJL0:00000001] - Received discovery request from ipv6:[::ffff:10.42.0.241]:48638
- 2024-08-01T15:04:15.875Z aio-opc-asset-discovery-4nsgs [opcuabroker@311 SpanId:6d3db9751eebfadc, TraceId:e5594cbaf3993749e92b45c88c493377, ParentId:0000000000000000 ConnectionId:0HN5I7CQJPJL0 RequestPath:/v0.DiscoveryHandler/Discover RequestId:0HN5I7CQJPJL0:00000001] - Start asset discovery
- 2024-08-01T15:04:15.882Z aio-opc-asset-discovery-4nsgs [opcuabroker@311 SpanId:6d3db9751eebfadc, TraceId:e5594cbaf3993749e92b45c88c493377, ParentId:0000000000000000 ConnectionId:0HN5I7CQJPJL0 RequestPath:/v0.DiscoveryHandler/Discover RequestId:0HN5I7CQJPJL0:00000001] - Discovering OPC UA opc.tcp://opcplc-000000:50000 using Asset Discovery
- 2024-08-01T15:04:15.882Z aio-opc-asset-discovery-4nsgs [opcuabroker@311 SpanId:6d3db9751eebfadc, TraceId:e5594cbaf3993749e92b45c88c493377, ParentId:0000000000000000 ConnectionId:0HN5I7CQJPJL0 RequestPath:/v0.DiscoveryHandler/Discover RequestId:0HN5I7CQJPJL0:00000001] - Selected AutoAcceptUntrustedCertificates mode: False
- ```
-
- After the discovery is complete, the discovery handler sends the result back to the Akri services to create an Akri instance custom resource with asset information and observable variables. The discovery handler repeats the discovery every 10 minutes to detect any changes on the server.
-
-1. To view the discovered Akri instances, run the following command:
-
- ```bash
- kubectl get akrii -n azure-iot-operations
- ```
-
- The output from the previous command looks like the following example. You might need to wait for a few seconds for the Akri instance to be created:
-
- ```output
- NAME CONFIG SHARED NODES AGE
- akri-opcua-asset-dbdef0 akri-opcua-asset true ["k3d-k3s-default-server-0"] 24h
- ```
-
- The connector for OPC UA supervisor watches for new Akri instance custom resources of type `opc-ua-asset`, and generates the initial asset types and asset custom resources for them. You can modify asset custom resources by adding settings such as extended publishing for more data points, or connector for OPC UA observability settings.
-
-1. To confirm that the Akri instance properly connected to the connector for OPC UA, run the following command. Replace the placeholder with the name of the Akri instance that was included in the output of the previous command:
-
- ```bash
- kubectl get akrii <AKRI_INSTANCE_NAME> -n azure-iot-operations -o json
- ```
-
- The command output includes a section that looks like the following example. The snippet shows the Akri instance `brokerProperties` values and confirms that the connector for OPC UA is connected.
-
- ```json
- "spec": {
-
- "brokerProperties": {
- "ApplicationUri": "Boiler #2",
- "AssetEndpointProfile": "{\"spec\":{\"uuid\":\"opc-ua-broker-opcplc-000000-azure-iot-operation\"……
- ```
iot-operations Howto Configure Opc Plc Simulator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/discover-manage-assets/howto-configure-opc-plc-simulator.md
- Title: Configure an OPC PLC simulator
-description: How to configure the OPC PLC simulator to work with the connector for OPC UA. The simulator generates sample data for testing and development purposes.
---- Previously updated : 05/16/2024-
-# CustomerIntent: As a developer, I want to configure an OPC PLC simulator in my industrial edge environment to test the process of managing OPC UA assets connected to the simulator.
--
-# Configure the OPC PLC simulator to work with the connector for OPC UA
--
-In this article, you learn how to configure and connect the OPC PLC simulator. The simulator simulates an OPC UA server with multiple nodes that generate random data and anomalies. You can configure user defined nodes. The OPC UA simulator lets you test the process of managing OPC UA assets with the [operations experience](howto-manage-assets-remotely.md) web UI or [the Akri services](overview-akri.md).
-
-## Prerequisites
-
-A deployed instance of Azure IoT Operations Preview. To deploy Azure IoT Operations for demonstration and exploration purposes, see [Quickstart: Run Azure IoT Operations Preview in Github Codespaces with K3s](../get-started-end-to-end-sample/quickstart-deploy.md). If you deploy Azure IoT Operations as described, the installation includes the OPC PLC simulator.
-
-## Deploy the OPC PLC simulator
-
-This section shows how to deploy the OPC PLC simulator if you didn't include it when you first deployed Azure IoT Operations.
-
-The following step lowers the security level for the OPC PLC so that it accepts connections from the connector for OPC UA or any client without an explicit peer certificate trust operation.
-
-> [!IMPORTANT]
-> Don't use the following example in production, use it for simulation and test purposes only.
-
-Run the following code to update the connector for OPC UA deployment and apply the new settings:
-
-```bash
-az k8s-extension update \
- --version 0.3.0-preview \
- --name opc-ua-broker \
- --release-train preview \
- --cluster-name <cluster-name> \
- --resource-group <azure-resource-group> \
- --cluster-type connectedClusters \
- --auto-upgrade-minor-version false \
- --config opcPlcSimulation.deploy=true \
- --config opcPlcSimulation.autoAcceptUntrustedCertificates=true
-```
-
-The OPC PLC simulator runs as a separate pod in the `azure-iot-operations` namespace. The pod name looks like `opcplc-000000-7b6447f99c-mqwdq`.
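To confirm the simulator pod is up before you continue, a quick check such as the following is usually enough (it assumes the pod name contains `opcplc`, as in the example name above):

```bash
# List pods in the Azure IoT Operations namespace and filter for the simulator pod.
kubectl get pods -n azure-iot-operations | grep opcplc
```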
-
-## Configure mutual trust between the connector for OPC UA and the OPC PLC
-
-To learn more about mutual trust in OPC UA, see [OPC UA certificates infrastructure for the connector for OPC UA](overview-opcua-broker-certificates-management.md).
-
-The application instance certificate of the OPC PLC simulator is a self-signed certificate managed by [cert-manager](https://cert-manager.io/) and stored in the `aio-opc-ua-opcplc-default-application-cert-000000` Kubernetes secret.
-
-To configure mutual trust between the connector for OPC UA and the OPC PLC simulator:
-
-1. Get the certificate and push it to Azure Key Vault:
-
- ```bash
- kubectl -n azure-iot-operations get secret aio-opc-ua-opcplc-default-application-cert-000000 -o jsonpath='{.data.tls\.crt}' | \
- base64 -d | \
- xargs -0 -I {} \
- az keyvault secret set \
- --name "opcplc-crt" \
- --vault-name <your-azure-key-vault-name> \
- --value {} \
- --content-type application/x-pem-file
- ```
-
-1. Add the certificate to the `aio-opc-ua-broker-trust-list` custom resource in the cluster. Use a Kubernetes client such as `kubectl` to configure the `opcplc.crt` secret in the `SecretProviderClass` object array in the cluster.
-
- The following example shows a complete `SecretProviderClass` custom resource that contains the simulator certificate in a PEM encoded file with the .crt extension:
-
- ```yml
- apiVersion: secrets-store.csi.x-k8s.io/v1
- kind: SecretProviderClass
- metadata:
- name: aio-opc-ua-broker-trust-list
- namespace: azure-iot-operations
- spec:
- provider: azure
- parameters:
- usePodIdentity: 'false'
- keyvaultName: <your-azure-key-vault-name>
- tenantId: <your-azure-tenant-id>
- objects: |
- array:
- - |
- objectName: opcplc-crt
- objectType: secret
- objectAlias: opcplc.crt
- ```
-
- > [!NOTE]
- > The time it takes to project Azure Key Vault certificates into the cluster depends on the configured polling interval.
-
-The connector for OPC UA trust relationship with the OPC PLC simulator is now established and you can create an `AssetEndpointProfile` to connect to your OPC PLC simulator.
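If you want to double-check that the trust list picked up the new entry, one option is to inspect the `SecretProviderClass` object; this is a sketch, and the exact output depends on your Secrets Store CSI driver version:

```bash
# Inspect the trust list SecretProviderClass and confirm the opcplc.crt entry is listed.
kubectl get secretproviderclass aio-opc-ua-broker-trust-list -n azure-iot-operations -o yaml
```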
-
-## Optionally configure your `AssetEndpointProfile` without mutual trust established
-
-Optionally, you can configure an asset endpoint profile without establishing mutual trust between the connector for OPC UA and the OPC PLC simulator. If you understand the risks, you can turn off authentication for testing purposes.
-
-> [!CAUTION]
-> Don't configure for no authentication in production or pre-production environments. Exposing your cluster to the internet without authentication can lead to unauthorized access and even DDOS attacks.
-
-To allow your asset endpoint profile to connect to an OPC PLC server without establishing mutual trust, use the `additionalConfiguration` setting to modify the `AssetEndpointProfile` configuration.
-
-Patch the asset endpoint with `autoAcceptUntrustedServerCertificates=true`:
-
-```bash
-ENDPOINT_NAME=<name-of-your-endpoint-here>
-kubectl patch AssetEndpointProfile $ENDPOINT_NAME \
--n azure-iot-operations \
---type=merge \
--p '{"spec":{"additionalConfiguration":"{\"applicationName\":\"'"$ENDPOINT_NAME"'\",\"security\":{\"autoAcceptUntrustedServerCertificates\":true}}"}}'
-```
-
-## Related content
-
-- [OPC UA certificates infrastructure for the connector for OPC UA](overview-opcua-broker-certificates-management.md)
-- [Autodetect assets using the Akri services](howto-autodetect-opcua-assets-using-akri.md)
iot-operations Howto Configure Opcua Authentication Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/discover-manage-assets/howto-configure-opcua-authentication-options.md
Previously updated : 05/16/2024 Last updated : 09/16/2024 # CustomerIntent: As a user in IT, operations, or development, I want to configure my OPC UA industrial edge environment with custom OPC UA user authentication options to keep it secure and work with my solution.
Last updated 05/16/2024
In this article, you learn how to configure OPC UA user authentication options. These options provide more control over how the connector for OPC UA authenticates with OPC UA servers in your environment.
+Currently, the connector for OPC UA supports user authentication with a username and password. You store and manage the username and password values in Azure Key Vault. Azure IoT Operations then synchronizes these values to your Kubernetes cluster where you can use them securely.
+
To learn more, see [OPC UA applications - user authentication](https://reference.opcfoundation.org/Core/Part2/v105/docs/5.2.3).

## Prerequisites
-A deployed instance of Azure IoT Operations Preview. To deploy Azure IoT Operations for demonstration and exploration purposes, see [Quickstart: Run Azure IoT Operations Preview in Github Codespaces with K3s](../get-started-end-to-end-sample/quickstart-deploy.md).
+A deployed instance of Azure IoT Operations Preview with [Manage Synced Secrets](../deploy-iot-ops/howto-manage-secrets.md#manage-synced-secrets) enabled.
## Features supported
A deployed instance of Azure IoT Operations Preview. To deploy Azure IoT Operati
## Configure username and password authentication
-First, configure the secrets for the username and password in Azure Key Vault and project them into the connected cluster by using a `SecretProviderClass` object.
-
-1. Configure the username and password in Azure Key Vault. In the following example, use the `username` and `password` as secret references for the asset endpoint configuration in the operations experience web UI.
-
- Replace the placeholders for username and password with the credentials used to connect to the OPC UA server.
-
- To configure the username and password, run the following code:
-
- ```bash
- # Create username Secret in Azure Key Vault
- az keyvault secret set \
- --name "username" \
- --vault-name "<your-azure-key-vault-name>" \
- --value "<your-opc-ua-server-username>" \
- --content-type "text/plain"
-
- # Create password Secret in Azure Key Vault
- az keyvault secret set \
- --name "password" \
- --vault-name "<your-azure-key-vault-name>" \
- --value "<your-opc-ua-server-password>" \
- --content-type "text/plain"
- ```
-
-1. Configure the `aio-opc-ua-broker-user-authentication` custom resource in the cluster. Use a Kubernetes client such as `kubectl` to configure the `username` and `password` secrets in the `SecretProviderClass` object array in the cluster.
-
- The following example shows a complete `SecretProviderClass` custom resource after you add the secrets:
-
- ```yml
- apiVersion: secrets-store.csi.x-k8s.io/v1
- kind: SecretProviderClass
- metadata:
- name: aio-opc-ua-broker-user-authentication
- namespace: azure-iot-operations
- spec:
- provider: azure
- parameters:
- usePodIdentity: 'false'
- keyvaultName: <azure-key-vault-name>
- tenantId: <azure-tenant-id>
- objects: |
- array:
- - |
- objectName: username
- objectType: secret
- objectVersion: ""
- - |
- objectName: password
- objectType: secret
- objectVersion: ""
- ```
-
- > [!NOTE]
- > The time it takes to project Azure Key Vault certificates into the cluster depends on the configured polling interval.
-
-In the operations experience, select the **Username & password** option when you configure the Asset endpoint. Enter the names of the references that store the username and password values. In this example, the names of the references are `username` and `password`.
+To configure the secrets for the *username* and *password* values in the [operations experience](https://iotoperations.azure.com) web UI:
+
+1. Navigate to your list of asset endpoints:
+
+ :::image type="content" source="media/howto-configure-opcua-authentication-options/asset-endpoint-list.png" alt-text="Screenshot that shows the list of asset endpoints.":::
+
+1. Select **Create asset endpoint**.
+
+1. Select **Username password** as the authentication mode:
+
+ :::image type="content" source="media/howto-configure-opcua-authentication-options/authentication-mode.png" alt-text="Screenshot that shows the username and password authentication mode selected.":::
+
+1. Enter a synced secret name and then select the username and password references from the linked Azure Key Vault:
+
+ :::image type="content" source="media/howto-configure-opcua-authentication-options/select-from-key-vault.png" alt-text="Screenshot that shows the username and password references from Azure Key Vault.":::
+
+ > [!TIP]
+ > You have the option to create new secrets in Azure Key Vault if you haven't already added them.
+
+1. Select **Apply**.
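If you prefer to create the username and password secrets in Azure Key Vault ahead of time with the Azure CLI instead of from the web UI, a minimal sketch looks like the following. The secret names and values are placeholders; use whatever names you want to select later in the operations experience.

```bash
# Create placeholder username and password secrets in Azure Key Vault.
az keyvault secret set \
  --name opcua-username \
  --vault-name "<your-azure-key-vault-name>" \
  --value "<your-opc-ua-server-username>" \
  --content-type "text/plain"

az keyvault secret set \
  --name opcua-password \
  --vault-name "<your-azure-key-vault-name>" \
  --value "<your-opc-ua-server-password>" \
  --content-type "text/plain"
```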
iot-operations Howto Configure Opcua Certificates Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/discover-manage-assets/howto-configure-opcua-certificates-infrastructure.md
Previously updated : 05/15/2024 Last updated : 09/16/2024 # CustomerIntent: As an industrial edge IT or operations user, I want to to understand how to manage the OPC UA Certificates in the context of the connector for OPC UA.
To learn more, see [OPC UA certificates infrastructure for the connector for OPC
## Prerequisites
-A deployed instance of Azure IoT Operations Preview. To deploy Azure IoT Operations for demonstration and exploration purposes, see [Quickstart: Run Azure IoT Operations Preview in Github Codespaces with K3s](../get-started-end-to-end-sample/quickstart-deploy.md).
+A deployed instance of Azure IoT Operations Preview. To deploy Azure IoT Operations for demonstration and exploration purposes, see [Quickstart: Run Azure IoT Operations Preview in GitHub Codespaces with K3s](../get-started-end-to-end-sample/quickstart-deploy.md).
## Configure a self-signed application instance certificate
To connect to an asset, first you need to establish the application authenticati
1. Save the OPC UA server's application instance certificate in Azure Key Vault as a secret.
+ # [Bash](#tab/bash)
+ For a DER encoded certificate in a file such as *./my-server.der*, run the following command:
- ```azcli
- # Upload my-server.der OPC UA server's certificate as secret to Azure Key Vault
+ ```bash
+ # Upload my-server.der OPC UA server certificate as secret to Azure Key Vault
az keyvault secret set \
- --name "my-server-der" \
+ --name my-server-der \
--vault-name <your-azure-key-vault-name> \
- --file ./my-server.der \
+ --file my-server.der \
--encoding hex \ --content-type application/pkix-cert ``` For a PEM encoded certificate in a file such as *./my-server.crt*, run the following command:
- ```azcli
- # Upload my-server.crt OPC UA server's certificate as secret to Azure Key Vault
+ ```bash
+ # Upload my-server.crt OPC UA server certificate as secret to Azure Key Vault
az keyvault secret set \
- --name "my-server-crt" \
+ --name my-server-crt \
--vault-name <your-azure-key-vault-name> \
- --file ./my-server.crt \
+ --file my-server.crt \
--encoding hex \ --content-type application/x-pem-file ```
+ # [PowerShell](#tab/powershell)
+
+ For a DER encoded certificate in a file such as *./my-server.der*, run the following command:
+
+ ```powershell
+ # Upload my-server.der OPC UA server certificate as secret to Azure Key Vault
+ az keyvault secret set `
+ --name my-server-der `
+ --vault-name <your-azure-key-vault-name> `
+ --file my-server.der `
+ --encoding hex `
+ --content-type application/pkix-cert
+ ```
+
+ For a PEM encoded certificate in a file such as *./my-server.crt*, run the following command:
+
+ ```powershell
+ # Upload my-server.crt OPC UA server certificate as secret to Azure Key Vault
+ az keyvault secret set `
+ --name my-server-crt `
+ --vault-name <your-azure-key-vault-name> `
+ --file my-server.crt `
+ --encoding hex `
+ --content-type application/x-pem-file
+ ```
+
+
+ 1. Configure the `aio-opc-ua-broker-trust-list` custom resource in the cluster. Use a Kubernetes client such as `kubectl` to configure the secrets, such as `my-server-der` or `my-server-crt`, in the `SecretProviderClass` object array in the cluster. The following example shows a complete `SecretProviderClass` custom resource that contains the trusted OPC UA server certificate in a DER encoded file:
- ```yml
+ ```yaml
apiVersion: secrets-store.csi.x-k8s.io/v1 kind: SecretProviderClass metadata:
To connect to an asset, first you need to establish the application authenticati
The following example shows a complete `SecretProviderClass` custom resource that contains the trusted OPC UA server certificate in a PEM encoded file with the .crt extension:
- ```yml
+ ```yaml
apiVersion: secrets-store.csi.x-k8s.io/v1 kind: SecretProviderClass metadata:
If your OPC UA server uses a certificate issued by a certificate authority (CA),
To trust a CA, complete the following steps:
-1. Get the CA certificate public key encode in DER or PEM format. These certificates are typically stored in files with either the .der or .crt extension. Get the CA's certificate revocation list (CRL). This list is typically in a file with the .crl. Check the documentation for your OPC UA server for details.
+1. Get the CA certificate public key encode in DER or PEM format. These certificates are typically stored in files with either the .der or .crt extension. Get the CA's CRL. This list is typically in a file with the .crl. Check the documentation for your OPC UA server for details.
1. Save the CA certificate and the CRL in Azure Key Vault as secrets.
+ # [Bash](#tab/bash)
+ For a DER encoded certificate in a file such as *./my-server-ca.der*, run the following commands:
- ```azcli
+ ```bash
# Upload CA certificate as secret to Azure Key Vault az keyvault secret set \
- --name "my-server-ca-der" \
+ --name my-server-ca-der \
--vault-name <your-azure-key-vault-name> \
- --file ./my-server-ca.der \
+ --file my-server-ca.der \
--encoding hex \ --content-type application/pkix-cert # Upload the CRL as secret to Azure Key Vault az keyvault secret set \
- --name "my-server-crl" \
+ --name my-server-crl \
--vault-name <your-azure-key-vault-name> \
- --file ./my-server-ca.crl \
+ --file my-server-ca.crl \
--encoding hex \ --content-type application/pkix-crl ``` For a PEM encoded certificate in a file such as *./my-server-ca.crt*, run the following commands:
- ```azcli
+ ```bash
# Upload CA certificate as secret to Azure Key Vault az keyvault secret set \
- --name "my-server-ca-crt" \
+ --name my-server-ca-crt \
--vault-name <your-azure-key-vault-name> \
- --file ./my-server-ca.crt \
+ --file my-server-ca.crt \
--encoding hex \ --content-type application/x-pem-file # Upload the CRL as secret to Azure Key Vault az keyvault secret set \
- --name "my-server-crl" \
+ --name my-server-crl \
--vault-name <your-azure-key-vault-name> \
- --file ./my-server-ca.crl \
+ --file my-server-ca.crl \
--encoding hex \ --content-type application/pkix-crl ```
+ # [PowerShell](#tab/powershell)
+
+ For a DER encoded certificate in a file such as *./my-server-ca.der*, run the following commands:
+
+ ```powershell
+ # Upload CA certificate as secret to Azure Key Vault
+ az keyvault secret set `
+ --name my-server-ca-der `
+ --vault-name <your-azure-key-vault-name> `
+ --file my-server-ca.der `
+ --encoding hex `
+ --content-type application/pkix-cert
+
+ # Upload the CRL as secret to Azure Key Vault
+ az keyvault secret set `
+ --name my-server-crl `
+ --vault-name <your-azure-key-vault-name> `
+ --file my-server-ca.crl `
+ --encoding hex `
+ --content-type application/pkix-crl
+ ```
+
+ For a PEM encoded certificate in a file such as *./my-server-ca.crt*, run the following commands:
+
+ ```powershell
+ # Upload CA certificate as secret to Azure Key Vault
+ az keyvault secret set `
+ --name my-server-ca-crt `
+ --vault-name <your-azure-key-vault-name> `
+ --file my-server-ca.crt `
+ --encoding hex `
+ --content-type application/x-pem-file
+
+ # Upload the CRL as secret to Azure Key Vault
+ az keyvault secret set `
+ --name my-server-crl `
+ --vault-name <your-azure-key-vault-name> `
+ --file my-server-ca.crl `
+ --encoding hex `
+ --content-type application/pkix-crl
+ ```
+
+
+ 1. Configure the `aio-opc-ua-broker-trust-list` custom resource in the cluster. Use a Kubernetes client such as `kubectl` to configure the secrets, such as `my-server-ca-der` or `my-server-ca-crt`, in the `SecretProviderClass` object array in the cluster. The following example shows a complete `SecretProviderClass` custom resource that contains the trusted OPC UA server certificate in a DER encoded file:
- ```yml
+ ```yaml
apiVersion: secrets-store.csi.x-k8s.io/v1 kind: SecretProviderClass metadata:
To trust a CA, complete the following steps:
The following example shows a complete `SecretProviderClass` custom resource that contains the trusted OPC UA server certificate in a PEM encoded file with the .crt extension:
- ```yml
+ ```yaml
apiVersion: secrets-store.csi.x-k8s.io/v1 kind: SecretProviderClass metadata:
To trust a CA, complete the following steps:
## Configure the issuer certificates list
-If your OPC UA server uses a certificate issued by a certificate authority (CA), but you don't want to trust all certificates issued by the CA, complete the following steps:
+If your OPC UA server uses a certificate issued by a CA, but you don't want to trust all certificates issued by the CA, complete the following steps:
1. Trust the OPC UA server's application instance certificate by following the first three steps in the previous section.
-1. Besides the certificate itself, connector for OPC UA needs the CA certificate to properly validate the issuer chain of the OPC UA server's certificate. Add the CA certificate and its certificate revocation list (CRL) to a separate list called `aio-opc-ua-broker-issuer-list`.
+1. Besides the certificate itself, the connector for OPC UA needs the CA certificate to properly validate the issuer chain of the OPC UA server's certificate. Add the CA certificate and its certificate revocation list (CRL) to a separate list called `aio-opc-ua-broker-issuer-list`.
1. Save the CA certificate and the CRL in Azure Key Vault as secrets.
+ # [Bash](#tab/bash)
+ For a DER encoded certificate in a file such as *./my-server-ca.der*, run the following commands:
- ```azcli
+ ```bash
# Upload CA certificate as secret to Azure Key Vault az keyvault secret set \
- --name "my-server-ca-der" \
+ --name my-server-ca-der \
--vault-name <your-azure-key-vault-name> \
- --file ./my-server-ca.der \
+ --file my-server-ca.der \
--encoding hex \ --content-type application/pkix-cert # Upload the CRL as secret to Azure Key Vault az keyvault secret set \
- --name "my-server-crl" \
+ --name my-server-crl \
--vault-name <your-azure-key-vault-name> \
- --file ./my-server-ca.crl \
+ --file my-server-ca.crl \
--encoding hex \ --content-type application/pkix-crl ``` For a PEM encoded certificate in a file such as *./my-server-ca.crt*, run the following commands:
- ```azcli
+ ```bash
# Upload CA certificate as secret to Azure Key Vault az keyvault secret set \
- --name "my-server-ca-crt" \
+ --name my-server-ca-crt \
--vault-name <your-azure-key-vault-name> \
- --file ./my-server-ca.crt \
+ --file my-server-ca.crt \
--encoding hex \ --content-type application/x-pem-file # Upload the CRL as secret to Azure Key Vault az keyvault secret set \
- --name "my-server-crl" \
+ --name my-server-crl \
--vault-name <your-azure-key-vault-name> \
- --file ./my-server-ca.crl \
+ --file my-server-ca.crl \
--encoding hex \ --content-type application/pkix-crl ```
+ # [PowerShell](#tab/powershell)
+
+ For a DER encoded certificate in a file such as *./my-server-ca.der*, run the following commands:
+
+ ```powershell
+ # Upload CA certificate as secret to Azure Key Vault
+ az keyvault secret set `
+ --name my-server-ca-der `
+ --vault-name <your-azure-key-vault-name> `
+ --file my-server-ca.der `
+ --encoding hex `
+ --content-type application/pkix-cert
+
+ # Upload the CRL as secret to Azure Key Vault
+ az keyvault secret set `
+ --name my-server-crl `
+ --vault-name <your-azure-key-vault-name> `
+ --file my-server-ca.crl `
+ --encoding hex `
+ --content-type application/pkix-crl
+ ```
+
+ For a PEM encoded certificate in a file such as *./my-server-ca.crt*, run the following commands:
+
+ ```powershell
+ # Upload CA certificate as secret to Azure Key Vault
+ az keyvault secret set `
+ --name my-server-ca-crt `
+ --vault-name <your-azure-key-vault-name> `
+ --file my-server-ca.crt `
+ --encoding hex `
+ --content-type application/x-pem-file
+
+ # Upload the CRL as secret to Azure Key Vault
+ az keyvault secret set `
+ --name my-server-crl `
+ --vault-name <your-azure-key-vault-name> `
+ --file my-server-ca.crl `
+ --encoding hex `
+ --content-type application/pkix-crl
+ ```
+
+
+ 1. Configure the `aio-opc-ua-broker-issuer-list` custom resource in the cluster. Use a Kubernetes client such as `kubectl` to configure the secrets, such as `my-server-ca-der` or `my-server-ca-crt`, in the `SecretProviderClass` object array in the cluster. The following example shows a complete `SecretProviderClass` custom resource that contains the trusted OPC UA server certificate in a DER encoded file:
- ```yml
+ ```yaml
apiVersion: secrets-store.csi.x-k8s.io/v1 kind: SecretProviderClass metadata:
If your OPC UA server uses a certificate issued by a certificate authority (CA),
The following example shows a complete `SecretProviderClass` custom resource that contains the trusted OPC UA server certificate in a PEM encoded file with the .crt extension:
- ```yml
+ ```yaml
apiVersion: secrets-store.csi.x-k8s.io/v1 kind: SecretProviderClass metadata:
To complete the configuration of the application authentication mutual trust, yo
1. To extract the connector for OPC UA certificate into a `opcuabroker.crt` file, run the following command:
+ # [Bash](#tab/bash)
+ ```bash kubectl -n azure-iot-operations get secret aio-opc-opcuabroker-default-application-cert -o jsonpath='{.data.tls\.crt}' | base64 -d > opcuabroker.crt ```
- In PowerShell, you can complete the same task with the following command:
+ # [PowerShell](#tab/powershell)
```powershell kubectl -n azure-iot-operations get secret aio-opc-opcuabroker-default-application-cert -o jsonpath='{.data.tls\.crt}' | %{ [Text.Encoding]::UTF8.GetString([Convert]::FromBase64String($_)) } > opcuabroker.crt ```
+
+ 1. Many OPC UA servers only support certificates in the DER format. If necessary, use the following command to convert the _opcuabroker.crt_ certificate to _opcuabroker.der_: ```bash
To complete the configuration of the application authentication mutual trust, yo
## Configure an enterprise grade application instance certificate
-For production environments, you can configure the connector for OPC UA to use an enterprise grade application instance certificate. Typically, an enterprise certificate authority (CA) issues this certificate and you need the CA certificate to your configuration. Often, there's a hierarchy of CAs and you need to add the complete validation chain of CAs to your configuration.
+For production environments, you can configure the connector for OPC UA to use an enterprise grade application instance certificate. Typically, an enterprise CA issues this certificate and you need the CA certificate to your configuration. Often, there's a hierarchy of CAs and you need to add the complete validation chain of CAs to your configuration.
The following example references the following items:
The following example references the following items:
| `subjectName` | The subject name string embedded in the application instance certificate. | | `applicationUri` | The application instance URI embedded in the application instance. | | _enterprise-grade-ca-1.der_ | File that contains the enterprise grade CA certificate public key. |
-| _enterprise-grade-ca-1.crl_ | The CA's certificate revocation list (CRL) file. |
+| _enterprise-grade-ca-1.crl_ | The CA's CRL file. |
Like the previous examples, you use Azure Key Vault to store the certificates and CRLs. You then configure the `SecretProviderClass` custom resources in the connected cluster to project the certificates and CRLs into the connector for OPC UA pods. To configure the enterprise grade application instance certificate, complete the following steps: 1. Save the certificates and the CRL in Azure Key Vault as secrets by using the following commands:
- ```azcli
+ # [Bash](#tab/bash)
+
+ ```bash
# Upload the connector for OPC UA public key certificate as secret to Azure Key Vault az keyvault secret set \
- --name "opcuabroker-certificate-der" \
+ --name opcuabroker-certificate-der \
--vault-name <your-azure-key-vault-name> \
- --file ./opcuabroker-certificate.der \
+ --file opcuabroker-certificate.der \
--encoding hex \ --content-type application/pkix-cert # Upload connector for OPC UA private key certificate as secret to Azure Key Vault az keyvault secret set \
- --name "opcuabroker-certificate-pem" \
+ --name opcuabroker-certificate-pem \
--vault-name <your-azure-key-vault-name> \
- --file ./opcuabroker-certificate.pem \
+ --file opcuabroker-certificate.pem \
--encoding hex \ --content-type application/x-pem-file # Upload CA public key certificate as secret to Azure Key Vault az keyvault secret set \
- --name "enterprise-grade-ca-1-der" \
+ --name enterprise-grade-ca-1-der \
--vault-name <your-azure-key-vault-name> \
- --file ./enterprise-grade-ca-1.der \
+ --file enterprise-grade-ca-1.der \
--encoding hex \ --content-type application/pkix-cert # Upload CA certificate revocation list as secret to Azure Key Vault az keyvault secret set \
- --name "enterprise-grade-ca-1-crl" \
+ --name enterprise-grade-ca-1-crl \
--vault-name <your-azure-key-vault-name> \
- --file ./enterprise-grade-ca-1.crl \
+ --file enterprise-grade-ca-1.crl \
--encoding hex \ --content-type application/pkix-crl ```
+ # [PowerShell](#tab/powershell)
+
+ ```powershell
+ # Upload the connector for OPC UA public key certificate as secret to Azure Key Vault
+ az keyvault secret set `
+ --name opcuabroker-certificate-der `
+ --vault-name <your-azure-key-vault-name> `
+ --file opcuabroker-certificate.der `
+ --encoding hex `
+ --content-type application/pkix-cert
+
+ # Upload connector for OPC UA private key certificate as secret to Azure Key Vault
+ az keyvault secret set `
+ --name opcuabroker-certificate-pem `
+ --vault-name <your-azure-key-vault-name> `
+ --file opcuabroker-certificate.pem `
+ --encoding hex `
+ --content-type application/x-pem-file
+
+ # Upload CA public key certificate as secret to Azure Key Vault
+ az keyvault secret set `
+ --name enterprise-grade-ca-1-der `
+ --vault-name <your-azure-key-vault-name> `
+ --file enterprise-grade-ca-1.der `
+ --encoding hex `
+ --content-type application/pkix-cert
+
+ # Upload CA certificate revocation list as secret to Azure Key Vault
+ az keyvault secret set `
+ --name enterprise-grade-ca-1-crl `
+ --vault-name <your-azure-key-vault-name> `
+ --file enterprise-grade-ca-1.crl `
+ --encoding hex `
+ --content-type application/pkix-crl
+ ```
+
+
+ 1. Configure the `aio-opc-ua-broker-client-certificate` custom resource in the cluster. Use a Kubernetes client such as `kubectl` to configure the secrets `opcuabroker-certificate-der` and `opcuabroker-certificate-pem` in the `SecretProviderClass` object array in the cluster. The following example shows a complete `SecretProviderClass` custom resource after you add the secret configurations:
- ```yml
+ ```yaml
apiVersion: secrets-store.csi.x-k8s.io/v1 kind: SecretProviderClass metadata:
Like the previous examples, you use Azure Key Vault to store the certificates an
The following example shows a complete `SecretProviderClass` custom resource after you add the secret configurations:
- ```yml
+ ```yaml
apiVersion: secrets-store.csi.x-k8s.io/v1 kind: SecretProviderClass metadata:
Like the previous examples, you use Azure Key Vault to store the certificates an
1. Update the connector for OPC UA deployment to use the new `SecretProviderClass` source for application instance certificates by using the following command:
- ```azcli
+ # [Bash](#tab/bash)
+
+ ```bash
az k8s-extension update \ --version 0.3.0-preview \ --name opc-ua-broker \
Like the previous examples, you use Azure Key Vault to store the certificates an
--config securityPki.applicationUri=<applicationUri> ```
+ # [PowerShell](#tab/powershell)
+
+ ```powershell
+ az k8s-extension update `
+ --version 0.3.0-preview `
+ --name opc-ua-broker `
+ --release-train preview `
+ --cluster-name <cluster-name> `
+ --resource-group <azure-resource-group> `
+ --cluster-type connectedClusters `
+ --auto-upgrade-minor-version false `
+ --config securityPki.applicationCert=aio-opc-ua-broker-client-certificate `
+ --config securityPki.subjectName=<subjectName> `
+ --config securityPki.applicationUri=<applicationUri>
+ ```
+
+
+ Now that the connector for OPC UA uses the enterprise certificate, don't forget to add the new certificate's public key to the trusted certificate lists of all OPC UA servers it needs to connect to.
iot-operations Howto Manage Assets Remotely https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/discover-manage-assets/howto-manage-assets-remotely.md
This article describes how to use the operations experience web UI and the Azure
- Define the asset endpoints that connect assets to your Azure IoT Operations instance. - Add assets, and define their tags and events to enable dataflow from OPC UA servers to the MQTT broker.
-These assets, tags, and events map inbound data from OPC UA servers to friendly names that you can use in the MQTT broker and data processor pipelines.
+These assets, tags, and events map inbound data from OPC UA servers to friendly names that you can use in the MQTT broker and dataflows.
## Prerequisites

To configure an asset endpoint, you need a running instance of Azure IoT Operations.
+To sign in to the operations experience web UI, you need a Microsoft Entra ID account with at least contributor permissions for the resource group that contains your **Kubernetes - Azure Arc** instance. You can't sign in with a Microsoft account (MSA). To create a suitable Microsoft Entra ID account in your Azure tenant:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/) with the same tenant and user name that you used to deploy Azure IoT Operations.
+1. In the Azure portal, go to the **Microsoft Entra ID** section, select **Users > +New user > Create new user**. Create a new user and make a note of the password, you need it to sign in later.
+1. In the Azure portal, go to the resource group that contains your **Kubernetes - Azure Arc** instance. On the **Access control (IAM)** page, select **+Add > Add role assignment**.
+1. On the **Add role assignment page**, select **Privileged administrator roles**. Then select **Contributor** and then select **Next**.
+1. On the **Members** page, add your new user to the role.
+1. Select **Review and assign** to complete setting up the new user.
+
+You can now use the new user account to sign in to the [Azure IoT Operations](https://iotoperations.azure.com) portal.
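If you'd rather script the user creation and role assignment, a rough Azure CLI equivalent of the portal steps above looks like the following sketch. It assumes you have permission to create users in the tenant; the user name, domain, and resource group are placeholders.

```bash
# Create a Microsoft Entra ID user and grant it Contributor on the resource group
# that contains your Kubernetes - Azure Arc instance.
az ad user create \
  --display-name "AIO operator" \
  --user-principal-name "aio-operator@<your-tenant-domain>" \
  --password "<a-strong-password>"

az role assignment create \
  --assignee "aio-operator@<your-tenant-domain>" \
  --role "Contributor" \
  --resource-group "<resource-group-with-your-arc-instance>"
```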
+
## Sign in

# [Operations experience](#tab/portal)
To sign in to the operations experience, go to the [operations experience](https
## Select your site
-After you sign in, the web UI displays a list of sites. Each site is a collection of Azure IoT Operations instances where you can configure and manage your assets. A site typically represents a physical location where a you have physcial assets deployed. Sites make it easier for you to locate and manage assets. Your [IT administrator is responsible for grouping instances in to sites](/azure/azure-arc/site-manager/overview). Any Azure IoT Operations instances that aren't assigned to a site appear in the **Unassigned instances** node. Select the site that you want to use:
+After you sign in, the web UI displays a list of sites. Each site is a collection of Azure IoT Operations instances where you can configure and manage your assets. A site typically represents a physical location where you have physical assets deployed. Sites make it easier for you to locate and manage assets. Your [IT administrator is responsible for grouping instances in to sites](/azure/azure-arc/site-manager/overview). Any Azure IoT Operations instances that aren't assigned to a site appear in the **Unassigned instances** node. Select the site that you want to use:
:::image type="content" source="media/howto-manage-assets-remotely/site-list.png" alt-text="Screenshot that shows a list of sites in the operations experience.":::
After you select a site, the operations experience displays a list of the Azure
> [!TIP] > You can use the filter box to search for instances.
+After you select your instance, the operations experience displays the **Overview** page for the instance. The **Overview** page shows the status of the instance and the resources, such as assets, that are associated with it:
++ # [Azure CLI](#tab/cli) Before you use the `az iot ops asset` commands, sign in to the subscription that contains your Azure IoT Operations deployment:
To learn more, see [az iot ops asset endpoint](/cli/azure/iot/ops/asset/endpoint
This configuration deploys a new `assetendpointprofile` resource called `opc-ua-connector-0` to the cluster. After you define an asset, a connector for OPC UA pod discovers it. The pod uses the asset endpoint that you specify in the asset definition to connect to an OPC UA server.
-When the OPC PLC simulator is running, dataflows from the simulator, to the connector, to the OPC UA broker, and finally to the MQTT broker.
+When the OPC PLC simulator is running, data flows from the simulator, to the connector for OPC UA, and then to the MQTT broker.
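To verify that the asset endpoint landed on the cluster, you can list the corresponding custom resources. This is a sketch that assumes the resource kind is exposed to `kubectl` as `assetendpointprofiles`:

```bash
# List the asset endpoint profile custom resources in the Azure IoT Operations namespace.
kubectl get assetendpointprofiles -n azure-iot-operations
```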
### Configure an asset endpoint to use a username and password
Now you can define the tags associated with the asset. To add OPC UA tags:
| Node ID | Tag name | Observability mode | | - | -- | |
- | ns=3;s=FastUInt10 | temperature | None |
- | ns=3;s=FastUInt100 | Tag 10 | None |
+ | ns=3;s=FastUInt10 | Temperature | None |
+ | ns=3;s=FastUInt100 | Humidity | None |
1. Select **Manage default settings** to configure default telemetry settings for the asset. These settings apply to all the OPC UA tags that belong to the asset. You can override these settings for each tag that you add. Default telemetry settings include:
To view activity logs as the resource level, select the resource that you want t
## Related content

- [Connector for OPC UA overview](overview-opcua-broker.md)
-- [Akri services overview](overview-akri.md)
- [az iot ops asset](/cli/azure/iot/ops/asset)
- [az iot ops asset endpoint](/cli/azure/iot/ops/asset/endpoint)
iot-operations Howto Secure Assets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/discover-manage-assets/howto-secure-assets.md
You can also use the following tools to configure RBAC on your resources:
## Related content

- [Connector for OPC UA overview](overview-opcua-broker.md)
-- [Akri services overview](overview-akri.md)
- [az iot ops asset](/cli/azure/iot/ops/asset)
- [az iot ops asset endpoint](/cli/azure/iot/ops/asset/endpoint)
iot-operations Overview Akri https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/discover-manage-assets/overview-akri.md
- Title: Detect assets with the Akri services
-description: Understand how the Akri services enable you to discover devices and assets at the edge, and expose them as resources on your cluster.
-----
- - ignite-2023
Previously updated : 05/13/2024-
-# CustomerIntent: As an industrial edge IT or operations user, I want to to understand how the Akri services enable me to discover devices and assets at the edge, and expose them as resources on a Kubernetes cluster.
--
-# What are the Akri services?
--
-The Akri services host the discovery handlers that enable you to detect devices and assets at the edge, and expose them as resources on a Kubernetes cluster. Use the Akri services to simplify the process of projecting leaf devices such as OPC UA devices, cameras, IoT sensors, and peripherals into your cluster. The Akri services use the devices' own protocols to project leaf devices into your cluster. For administrators who attach or remove devices from a cluster, this capability reduces the amount of coordination and manual configuration required.
-
-The Akri services are also extensible. You can use them as shipped, or you can add custom discovery and provisioning capabilities by adding protocol handlers, brokers, and behaviors.
-
-The Akri services are a Microsoft-managed commercial version of [Akri](https://docs.akri.sh/), an open-source Cloud Native Computing Foundation (CNCF) project.
-
-## Leaf device integration challenges
-
-It's common to run Kubernetes directly on infrastructure. But to integrate non-Kubernetes IoT leaf devices into a Kubernetes cluster requires a unique solution.
-
-IoT leaf devices present the following challenges. They:
-
-- Contain hardware that's too small, too old, or too locked-down to run Kubernetes.
-- Use various protocols and different topologies.
-- Have intermittent downtime and availability.
-- Require different methods of authentication and secret storage.
-
-## Core capabilities
-
-To address the challenge of integrating non-Kubernetes IoT leaf devices, the Akri services have several core capabilities:
-
-### Device discovery
-
-Akri services deployments can include fixed-network discovery handlers. Discovery handlers enable assets from known network endpoints to find leaf devices as they appear on device interfaces or local subnets. Examples of network endpoints include OPC UA servers at a fixed IP address, and network scanning discovery handlers.
-
-### Dynamic provisioning
-
-Another capability of the Akri services is dynamic device provisioning.
-
-With the Akri services, you can dynamically provision devices such as:
-
-- USB cameras to use in your cluster.
-- IP cameras that you don't want to look up IP addresses for.
-- OPC UA server simulations running on your host machine that you use to test Kubernetes workloads.
-
-### Compatibility with Kubernetes
-
-The Akri services use standard Kubernetes primitives that let you apply your existing expertise and knowledge. Small devices connected to an Akri-configured cluster can appear as Kubernetes resources, just like memory or CPUs. The Akri services controller enables the cluster operator to start brokers, jobs, or other workloads for individual connected devices or groups of devices. These device configurations and properties remain in the cluster so that if there's node failure, other nodes can pick up any lost work.
-
-## Discover OPC UA assets
-
-The Akri services are a turnkey solution that lets you discover and create assets connected to an OPC UA server at the edge. The Akri services discover devices at the edge and map them to assets in your cluster. The assets send telemetry to upstream connectors. The Akri services let you eliminate the painstaking process of manually configuring and onboarding the assets to your cluster.
-
-## Key features
-
-The following list shows the key features of the Akri
--- **Dynamic discovery**. Protocol representations of devices can come and go, without static configurations in brokers or customer containers. To discover devices, the Akri services use the following methods:-
- - **Device network scanning**. This capability is useful for finding devices in smaller, remote locations such as a replacement camera in a store. The ONVIF and OPC UA localhost protocols currently support device network scanning discovery.
- - **Device connecting**. This capability is typically used in larger industrial scenarios such as factory environments where the network is typically static and network scanning isn't permitted. The `udev` and OPC UA local discovery server protocols currently support device connecting discovery.
- - **Device attach**. The Akri services also support custom logic for mapping or connecting devices. There are [open-source templates](https://docs.akri.sh/development/handler-development) to accelerate customization.
-
-- **Optimal scheduling**. The Akri services can schedule devices on specified nodes with minimal latency because they know where particular devices are located on the Kubernetes cluster. Optimal scheduling applies to directly connected devices, or in scenarios where only specific nodes can access the devices.
-
-- **Optimal configuration**. The Akri services use the capacity of the node to drive cardinality of the brokers for the discovered devices.
-
-- **Secure credential management**. The Akri services facilitate secure access to assets and devices by integrating with services in the cluster that enable secure distribution of credential material to brokers.
-
-### Features supported
-
-The Akri services support the following features:
-
-| [CNCF Akri Features](https://docs.akri.sh/) | Supported |
-| - | :-: |
-| Dynamic discovery of devices at the edge (supported protocols: OPC UA, ONVIF, udev) | ✅ |
-| Schedule devices with minimal latency using Akri's information on node affinity on the cluster | ✅ |
-| View Akri metrics and logs locally through Prometheus and Grafana | ✅ |
-| Secrets and credentials management | ✅ |
-| M:N device to broker ratio through configuration-level resource support | ✅ |
-| Observability on Akri deployments through Prometheus and Grafana dashboards | ✅ |
-
-| Akri services features | Supported |
-|--|::|
-| Installation through the Akri services Arc cluster extension | ✅ |
-| Deployment through the orchestration service | ✅ |
-| Onboard devices as custom resources to an edge cluster | ✅ |
-| View the Akri services metrics and logs through Azure Monitor | ❌ |
-| Akri services configuration by using the operations experience web UI | ❌ |
-| The Akri services detect and create assets that can be ingested into the Azure Device Registry | ❌ |
-| ISVs can build and sell custom protocol handlers for Azure IoT Operations solutions | ❌ |
-
-## Related content
-
-To learn more about the Akri services, see:
-
-- [Akri services architecture](concept-akri-architecture.md)
-- [Discover OPC UA data sources using the Akri services](howto-autodetect-opcua-assets-using-akri.md)
-
-To learn more about the open-source CNCF Akri, see the following resources:
-
-- [Documentation](https://docs.akri.sh/)
-- [OPC UA Sample on AKS Edge Essentials](/azure/aks/hybrid/aks-edge-how-to-akri-opc-ua)
-- [ONVIF Sample on AKS Microsoft Edge Essentials](/azure/aks/hybrid/aks-edge-how-to-akri-onvif)
iot-operations Overview Manage Assets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/discover-manage-assets/overview-manage-assets.md
description: Understand concepts and options needed to manage the assets that ar
Previously updated : 05/13/2024 Last updated : 08/27/2024 ai-usage: ai-assisted # CustomerIntent: As an industrial edge IT or operations user, I want to understand the key components in the Azure IoT Operations for managing assets, so that I can effectively manage the assets in my solution.
ai-usage: ai-assisted
In Azure IoT Operations Preview, a key task is to manage the assets that are part of your solution. This article:

-- Defines what assets are in the context Azure IoT Operations.
-- Provides an overview the services you use to manage your assets.
+- Defines what assets are in the context of Azure IoT Operations.
+- Provides an overview of the services that you use to manage your assets.
- Explains the most common use cases for the services. ## Understand assets
-Assets are a core element of an Azure IoT Operations solution.
+Assets are a core element of an Azure IoT Operations solution. In Azure IoT Operations, an *asset* is a logical entity that you create to represent a real asset. An Azure IoT Operations asset can emit telemetry and events. You use these logical asset instances to reference the real assets in your industrial edge environment.
-An *asset* in an industrial edge environment is any item of value that you want to manage, monitor, and collect data from. An asset can be a machine, a software component, an entire system, or a physical object of value such as a field of crops, or a building. These assets are examples that exist in manufacturing, retail, energy, healthcare, and other sectors.
-
-An *asset* in Azure IoT Operations is a logical entity that you create to represent a real asset. An Azure IoT Operations asset can emit telemetry and events. You use these logical asset instances to manage the real assets in your industrial edge environment.
-
-> [!TIP]
-> Assets maybe related to IoT devices. While all IoT devices are assets, not all assets are devices. An *IoT device* is a physical object connected to the internet to collect, generate, and communicate data. IoT devices typically contain embedded components to perform specific functions. They can manage or monitor other things in their environment. Examples of IoT devices include crop sensors, smart thermostats, connected security cameras, wearable devices, and monitoring devices for manufacturing machinery or vehicles.
+Assets connect to Azure IoT Operations instances through *asset endpoints*, which are the OPC UA servers that have southbound connections to one or more assets.
## Understand services for managing assets
The following diagram shows the high-level architecture of Azure IoT Operations.
:::image type="content" source="media/overview-manage-assets/azure-iot-operations-architecture.svg" alt-text="Diagram that highlights the services used to manage assets." lightbox="media/overview-manage-assets/azure-iot-operations-architecture.png":::

- The **operations experience** is a web UI that lets you create and configure assets in your solution. The web UI simplifies the task of managing assets and is the recommended service to manage assets.
-- **Azure Device Registry Preview** is a service that projects assets defined in your edge environment as Azure resources in the cloud. Device Registry lets you manage your assets in the cloud as Azure resources contained in a single unified registry.
-- **Akri services** automatically discover assets at the edge. The services can detect assets in the address space of an OPC UA server.
-- The _connector for OPC UA_ is a data ingress and protocol translation service that enables Azure IoT Operations to ingress data from your assets. The broker receives telemetry and events from your assets and publishes the data to topics in the MQTT broker. The broker is based on the widely used OPC UA standard.
-
-Each of these services is explained in greater detail in the following sections.
+- **Azure Device Registry Preview** is a backend service that enables the cloud and edge management of assets. Device Registry projects assets defined in your edge environment as Azure resources in the cloud. It provides a single unified registry so that all apps and services that interact with your assets can connect to a single source. Device Registry also manages the synchronization between assets in the cloud and assets as custom resources in Kubernetes on the edge.
+- The schema registry is a service that lets you define and manage the schema for your assets. Dataflows use schemas to deserialize and serialize messages.
+- The **connector for OPC UA** is a data ingress and protocol translation service that enables Azure IoT Operations to ingress data from your assets. The broker receives telemetry and events from your assets and publishes the data to topics in the MQTT broker. The broker is based on the widely used OPC UA standard.
## Create and manage assets remotely
The following tasks are useful for operations teams in sectors such as industry,
- Create assets remotely
- To access asset data, subscribe to OPC UA tags and events
-The operations experience web UI lets operations teams perform these tasks in a simplified web interface. The operations experience uses the other services described previously, to complete these tasks. You can also use the Azure IoT Operations CLI to manage assets by using the `az iot ops asset` command.
+The operations experience web UI lets operations teams perform these tasks in a simplified web interface. The operations experience uses the other services described previously, to complete these tasks. You can also use the Azure IoT Operations CLI to manage assets by using the [az iot ops asset](/cli/azure/iot/ops/asset) set of commands.
The operations experience uses the connector for OPC UA to exchange data with local OPC UA servers. OPC UA servers are software applications that communicate with assets. The connector for OPC UA exposes: -- OPC UA tags that represent data points. OPC UA tags provide real-time or historical data about the asset, and you can configure how frequently to sample the tag value.-- OPC UA events that represent state changes. OPC UA events provide real-time status information for your assets that let you configure alarms and notifications.
+- OPC UA *tags* that represent data points. OPC UA tags provide real-time or historical data about the asset, and you can configure how frequently to sample the tag value.
+- OPC UA *events* that represent state changes. OPC UA events provide real-time status information for your assets that lets you configure alarms and notifications.
The operations experience lets users create assets and subscribe to OPC UA tags in a user-friendly interface. Users can create custom assets by providing asset details and configurations. Users can create or import tag and event definitions, subscribe to them, and assign them to an asset.
-## Manage assets as Azure resources in a centralized registry
+## Store assets as Azure resources in a centralized registry
-In an industrial edge environment with many assets, it's useful for IT and operations teams to have a single registry for devices and assets. Azure Device Registry Preview provides this capability, and projects your industrial assets as Azure resources. Teams that use Device Registry together with the operations experience, have a consistent deployment and management experience across cloud and edge environments.
+When you create an asset in the operations experience or by using the Azure IoT Operations CLI extension, that asset is defined in Azure Device Registry Preview.
+
+Device Registry provides a single registry for devices and assets across applications running in the cloud or on the edge. In the cloud, assets are created as Azure resources, which gives you management capabilities such as organizing assets with resource groups and tags. On the edge, Device Registry creates a Kubernetes custom resource for each asset and keeps the two asset representations in sync.
Device Registry provides several capabilities that help teams to manage assets: -- **Unified registry**. The Device Registry serves as the single source of truth for your asset metadata. Having a single registry can streamline and simplify the process of managing assets. It gives you a way to access and manage this data across Azure, partner, and customer applications running in the cloud or on the edge.-- **Assets as Azure resources**. Because Device Registry projects assets as true Azure resources, you can manage assets using established Azure features and services. Enterprises can use [Azure Resource Manager](../../azure-resource-manager/management/overview.md), Azure's native deployment and management service, with industrial assets. Azure Resource Manager provides capabilities such as resource groups, tags, role-based access controls ([RBAC](../../role-based-access-control/overview.md)), policy, logging, and audit.-- **Cloud management of assets**. You use Device Registry within the operations experience to remotely manage assets in the cloud. All interactions with the asset resource are also available by using Azure APIs and using management tools such as [Azure Resource Graph](../../governance/resource-graph/overview.md). Regardless which method you use to manage assets, changes made in the cloud are synced to the edge and exposed as custom resources in the Kubernetes cluster.
+- **Unified registry**. The Device Registry serves as the single source of truth for your asset metadata. Having a single registry gives you a way to access and manage assets across Azure, partner, and customer applications running in the cloud or on the edge.
+- **Assets as Azure resources**. Because Device Registry projects assets as true Azure resources, you can manage assets using established Azure features and services. Enterprises can use Azure Resource Manager, Azure's native deployment and management service, with industrial assets. Azure Resource Manager provides capabilities such as resource groups, tags, role-based access controls (RBAC), policy, logging, and audit.
+- **Cloud management of assets**. You can manage assets by using the operations experience or by using Azure APIs and management tools such as Azure Resource Graph. Regardless of which method you use to manage assets, changes made in the cloud are synced to the edge and exposed as custom resources in the Kubernetes cluster.
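As a minimal sketch of the Azure Resource Graph approach mentioned above, you could list Device Registry assets with a query like the following. The resource type string is an assumption; adjust it if your assets report a different type:

```azurecli
# List Device Registry assets across subscriptions (requires the resource-graph CLI extension).
# The resource type string is an assumption; confirm it for your environment.
az graph query -q "Resources | where type =~ 'microsoft.deviceregistry/assets' | project name, resourceGroup, location"
```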
-The following screenshot shows an example thermostat asset in the operations experience:
+For example, the following set of screenshots shows a single asset, in this case a thermostat, viewed both in cloud management tools and on an Azure IoT Operations cluster. The first screenshot shows the thermostat asset in the operations experience:
:::image type="content" source="media/overview-manage-assets/asset-operations-portal.png" alt-text="A screenshot that shows the thermostat asset in the operations experience.":::
-The following screenshot shows the example thermostat asset in the Azure portal:
+This screenshot shows the same thermostat asset in the Azure portal:
:::image type="content" source="media/overview-manage-assets/asset-portal.png" alt-text="A screenshot that shows the thermostat asset in the Azure portal.":::
-The following screenshot shows the example thermostat asset as a Kubernetes custom resource:
+And the final screenshot shows the same thermostat asset as a Kubernetes custom resource:
:::image type="content" source="media/overview-manage-assets/asset-kubernetes.png" alt-text="A screenshot that shows the thermostat asset as a Kubernetes custom resource.":::
-The following features are supported in Azure Device Registry:
-
-| Feature | Supported |
-| -- | :-: |
-| Asset resource management by using Azure API | ✅ |
-| Asset resource management by using operations experience | ✅ |
-| Asset synchronization to Kubernetes cluster running Azure IoT Operations | ✅ |
-| Asset as Azure resource (with capabilities such as Azure resource groups and tags) | ✅ |
-
-## Discover edge assets
-
-A common task in complex edge solutions is to discover assets and automatically add them to your Kubernetes cluster. The Akri services provide this capability. For administrators who attach or remove assets from the cluster, the Akri services reduce the amount of coordination and manual configuration.
-
-The Akri services include fixed-network discovery handlers. Discovery handlers enable assets from known network endpoints to find leaf devices as they appear on device interfaces or local subnets. Examples of network endpoints include OPC UA servers at a fixed IP address, and network scanning discovery handlers.
-
-The Akri services are installed as part of Azure IoT Operations and you can configure them alongside the OPC UA simulation PLC server. The OPC UA discovery handler starts automatically and inspects the OPC UA simulation PLC server's address space. The discovery handler reports assets back to the Akri services and triggers deployment of the `AssetEndpointProfile` and `Asset` custom resources into the cluster.
- ## Use a common data exchange standard for your edge solution A key requirement in industrial environments is for a common standard or protocol for machine-to-machine and machine-to-cloud data exchange. By using a widely supported data exchange protocol, you can simplify the process to enable diverse industrial assets to exchange data with each other, with workloads running in your Kubernetes cluster, and with the cloud. [OPC UA](https://opcfoundation.org/about/opc-technologies/opc-ua/) is a specification for a platform independent service-oriented architecture that enables data exchange in industrial environments.
iot-operations Overview Opcua Broker Certificates Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/discover-manage-assets/overview-opcua-broker-certificates-management.md
Previously updated : 05/15/2024 Last updated : 09/16/2024 # CustomerIntent: As an industrial edge IT or operations user, I want to understand how the OPC UA industrial edge Kubernetes environment should be configured to enable mutual trust between the connector for OPC UA and the downstream OPC UA servers.
Mutual trust validation between the OPC UA server and the connector for OPC UA i
You need to maintain a trusted certificate list that contains the certificates of all the OPC UA servers that the connector for OPC UA trusts. To create a session with an OPC UA server: - The connector for OPC UA sends its certificate's public key.-- The OPC UA server validates against its trusted certificates list.-- A similar validation of the OPC UA server's certificate happens in the connector for OPC UA.
+- The OPC UA server validates the connector's certificate against its trusted certificates list.
+- The connector validates the OPC UA server's certificate against its trusted certificates list.
By default, the connector for OPC UA stores its trusted certificate list in Azure Key Vault and uses the [Secrets Store CSI Driver](https://secrets-store-csi-driver.sigs.k8s.io/) to project the trusted certificates into the connector for OPC UA pods. Azure Key Vault stores the certificates encoded in DER or PEM format. If the connector for OPC UA trusts a certificate authority, it automatically trusts any server that has a valid application instance certificate signed by the certificate authority.
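For example, you might add a PEM-encoded server certificate to the key vault as a secret so that it can be projected into the trusted certificate list. This is a sketch; the vault name, secret name, and file name are placeholders:

```azurecli
# Store a PEM-encoded OPC UA server certificate as a Key Vault secret (names are placeholders).
az keyvault secret set --vault-name <KEY_VAULT_NAME> --name opcua-server-certificate --file ./opcua-server-certificate.pem
```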
-To project the trusted certificates from Azure Key Vault into the Kubernetes cluster, you must configure a `SecretProviderClass` custom resource. This custom resource contains a list of all the secret references associated with the trusted certificates. The connector for OPC UA uses the custom resource to map the trusted certificates into connector for OPC UA containers and make them available for validation. The default name for the `SecretProviderClass` custom resource that handles the trusted certificates list is *aio-opc-ua-broker-trust-list*.
+To learn how to project the trusted certificates from Azure Key Vault into the Kubernetes cluster, see [Manage secrets for your Azure IoT Operations Preview deployment](../deploy-iot-ops/howto-manage-secrets.md).
-> [!NOTE]
-> The time it takes to project Azure Key Vault certificates into the cluster depends on the configured polling interval.
+The default name for the `SecretProviderClass` custom resource that handles the trusted certificates list is *aio-opc-ua-broker-trust-list*.
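To check which secrets that custom resource currently references, you can inspect it on the cluster. This sketch assumes the default `azure-iot-operations` namespace used elsewhere in these articles:

```console
# View the SecretProviderClass that maps trusted certificates into the connector pods.
kubectl get secretproviderclass aio-opc-ua-broker-trust-list -n azure-iot-operations -o yaml
```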
## The connector for OPC UA issuer certificates list
-If your OPC UA server's application instance certificate is signed by an intermediate certificate authority, but you don't want to automatically trust all the certificates issued by the certificate authority, you can use an issuer certificate list to manage the trust relationship.
+If your OPC UA server's application instance certificate is signed by an intermediate certificate authority, but you don't want to automatically trust all the certificates issued by the certificate authority, you can use an issuer certificate list to manage the trust relationship. This _issuer certificate list_ stores the certificate authority certificates that the connector for OPC UA trusts.
-An _issuer certificate list_ stores the certificate authority certificates that the connector for OPC UA trusts. If the application certificate of an OPC UA server is signed by an intermediate certificate authority, the connector for OPC UA validates the full chain of certificate authorities up to the root. The issuer certificate list should contain the certificates of all the certificate authorities in the chain to ensure that the connector for OPC UA can validate the OPC UA servers.
+If the application certificate of an OPC UA server is signed by an intermediate certificate authority, the connector for OPC UA validates the full chain of certificate authorities up to the root. The issuer certificate list should contain the certificates of all the certificate authorities in the chain to ensure that the connector for OPC UA can validate the OPC UA servers.
You manage the issuer certificate list in the same way you manage the trusted certificates list. The default name for the `SecretProviderClass` custom resource that handles the issuer certificates list is *aio-opc-ua-broker-issuer-list*.
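Before you add certificate authority certificates to the issuer list, it can help to confirm the subject and issuer of each certificate in the chain locally. This is a sketch; the file name and DER format are assumptions:

```console
# Show the subject and issuer of an intermediate CA certificate (file name and format are placeholders).
openssl x509 -in intermediate-ca.der -inform der -noout -subject -issuer
```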
The following table shows the feature support level for authentication in the cu
| Handling of OPC UA issuer certificates lists | Supported | ✅ | | Configuration of OPC UA enterprise grade application instance certificate | Supported | ✅ | | Handling of OPC UA untrusted certificates | Unsupported | ❌ |
-| Handling of OPC UA Global Discovery Service (GDS) | Unsupported | ❌ |
+| Handling of OPC UA Global Discovery Service | Unsupported | ❌ |
iot-operations Overview Opcua Broker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/discover-manage-assets/overview-opcua-broker.md
As part of Azure IoT Operations, the connector for OPC UA is a native Kubernetes
- Publishes JSON-encoded telemetry data from OPC UA servers in OPC UA PubSub format, using a JSON payload. By using this standard format for data exchange, you can reduce the risk of future compatibility issues. - Connects to Azure Arc-enabled services in the cloud.
-The connector for OPC UA includes an OPC UA simulation server that you can use to test your applications. To learn more, see [Configure an OPC PLC simulator to work with the connector for OPC UA](howto-configure-opc-plc-simulator.md).
- ### Other features The connector for OPC UA supports the following features as part of Azure IoT Operations:
The connector for OPC UA supports the following features as part of Azure IoT Op
- Automatic reconnection to OPC UA servers. - Integrated [OpenTelemetry](https://opentelemetry.io/) compatible observability. - OPC UA transport encryption.-- Anonymous authentication and authentication based on username and password.
+- Anonymous authentication and authorization based on username and password.
- `AssetEndpointProfile` and `Asset` CRs configurable by using Azure REST API and the operations experience web UI.-- Akri-supported asset detection of OPC UA assets. The assets must be [OPC UA Companion Specifications](https://opcfoundation.org/about/opc-technologies/opc-ua/ua-companion-specifications/) compliant. ## How it works
The connector for OPC UA application:
- Creates a separate subscription in the session for each 1,000 tags. - Creates a separate subscription for each event defined in the asset. - Implements retry logic to establish connections to endpoints that don't respond after a specified number of keep-alive requests. For example, there could be a nonresponsive endpoint in your environment when an OPC UA server stops responding because of a power outage.-
-The OPC UA discovery handler:
--- Uses the Akri configuration to connect to an OPC UA server. After the connection is made, the discovery handler inspects the OPC UA address space, and tries to detect assets that comply with the [OPC UA Companion Specifications](https://opcfoundation.org/about/opc-technologies/opc-ua/ua-companion-specifications/).-- Creates `Asset` and `AssetEndpointProfile` CRs in the cluster.-
-> [!NOTE]
-> Asset detection by Akri only works for OPC UA servers that don't require user or transport authentication.
-
-To learn more about Akri, see [What are the Akri services?](overview-akri.md).
iot-operations Quickstart Add Assets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/get-started-end-to-end-sample/quickstart-add-assets.md
- ignite-2023 Previously updated : 07/23/2024 Last updated : 09/17/2024 #CustomerIntent: As an OT user, I want to create assets in Azure IoT Operations so that I can subscribe to asset data points, and then process the data before I send it to the cloud.
In this quickstart, you use the operations experience web UI to create your asse
## Prerequisites
-Complete [Quickstart: Run Azure IoT Operations Preview in GitHub Codespaces with K3s](quickstart-deploy.md) before you begin this quickstart.
+Have an instance of Azure IoT Operations Preview deployed in a Kubernetes cluster. The [Quickstart: Run Azure IoT Operations Preview in GitHub Codespaces with K3s](quickstart-deploy.md) provides simple instructions to deploy an Azure IoT Operations instance that you can use for the quickstarts.
-To sign in to the operations experience, you need a work or school account in the tenant where you deployed Azure IoT Operations. If you're currently using a Microsoft account (MSA), you need to create a Microsoft Entra ID with at least contributor permissions for the resource group that contains your **Kubernetes - Azure Arc** instance. To learn more, see [Known Issues > Create Entra account](../troubleshoot/known-issues.md#known-issues-azure-iot-operations-preview).
+To sign in to the operations experience web UI, you need a Microsoft Entra ID account with at least contributor permissions for the resource group that contains your **Kubernetes - Azure Arc** instance. To learn more, see [Operations experience web UI](../discover-manage-assets/howto-manage-assets-remotely.md#prerequisites).
+
+Unless otherwise noted, you can run the console commands in this quickstart in either a Bash or PowerShell environment.
## What problem will we solve?
-The data that OPC UA servers expose can have a complex structure and can be difficult to understand. Azure IoT Operations provides a way to model OPC UA assets as tags, events, and properties. This modeling makes it easier to understand the data and to use it in downstream processes such as the MQTT broker and data processor pipelines.
+The data that OPC UA servers expose can have a complex structure and can be difficult to understand. Azure IoT Operations provides a way to model OPC UA assets as tags, events, and properties. This modeling makes it easier to understand the data and to use it in downstream processes such as the MQTT broker and dataflows.
+
+## Deploy the OPC PLC simulator
+
+This quickstart uses the OPC PLC simulator to generate sample data. To deploy the OPC PLC simulator, run the following command:
+
+<!-- TODO: Change branch to main before merging the release branch -->
+
+```console
+kubectl apply -f https://raw.githubusercontent.com/Azure-Samples/explore-iot-operations/main/samples/quickstarts/opc-plc-deployment.yaml
+```
+
+The following snippet shows the YAML file that you applied:
++
+> [!CAUTION]
+> This configuration isn't secure. Don't use this configuration in a production environment.
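To confirm that the simulator deployed before you continue, you can list its pods. This is a sketch; the `opcplc` name fragment is an assumption based on the sample deployment:

```console
# Check that the OPC PLC simulator pod is running.
kubectl get pods -n azure-iot-operations | grep opcplc
```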
## Sign into the operations experience
To create asset endpoints, assets and subscribe to OPC UA tags and events, use t
Browse to the [operations experience](https://iotoperations.azure.com) in your browser and sign in with your Microsoft Entra ID credentials. -
-> [!IMPORTANT]
-> You must use a work or school account to sign in to the operations experience. To learn more, see [Known Issues > Create Entra account](../troubleshoot/known-issues.md#known-issues-azure-iot-operations-preview).
- ## Select your site A _site_ is a collection of Azure IoT Operations instances. Sites typically group instances by physical location and make it easier for OT users to locate and manage assets. Your IT administrator creates [sites and assigns Azure IoT Operations instances to them](/azure/azure-arc/site-manager/overview). Because you're working with a new deployment, there are no sites yet. You can find the cluster you created in the previous quickstart by selecting **Unassigned instances**. In the operations experience, an instance represents a cluster where you deployed Azure IoT Operations. + ## Select your instance Select the instance where you deployed Azure IoT Operations in the previous quickstart:
To add an asset endpoint:
kubectl get assetendpointprofile -n azure-iot-operations ```
-## Configure the simulator
-
-These quickstarts use the **OPC PLC simulator** to generate sample data. To enable the quickstart scenario, you need to configure your asset endpoint to connect without mutual trust established. This configuration is not recommended for production or pre-production environments:
-
-1. To configure the asset endpoint for the quickstart scenario, run the following command:
-
- ```console
- kubectl patch AssetEndpointProfile opc-ua-connector-0 -n azure-iot-operations --type=merge -p '{"spec":{"additionalConfiguration":"{\"applicationName\":\"opc-ua-connector-0\",\"security\":{\"autoAcceptUntrustedServerCertificates\":true}}"}}'
- ```
-
- > [!CAUTION]
- > Don't use this configuration in production or pre-production environments. Exposing your cluster to the internet without proper authentication might lead to unauthorized access and even DDOS attacks.
-
- To learn more, see [Deploy the OPC PLC simulator](../discover-manage-assets/howto-configure-opc-plc-simulator.md) section.
-
-1. To enable the configuration changes to take effect immediately, first find the name of your `aio-opc-supervisor` pod by using the following command:
-
- ```console
- kubectl get pods -n azure-iot-operations
- ```
-
- The name of your pod looks like `aio-opc-supervisor-956fbb649-k9ppr`.
-
-1. Restart the `aio-opc-supervisor` pod by using a command that looks like the following example. Use the `aio-opc-supervisor` pod name from the previous step:
-
- ```console
- kubectl delete pod aio-opc-supervisor-956fbb649-k9ppr -n azure-iot-operations
- ```
-
-After you define an asset, a connector for OPC UA pod discovers it. The pod uses the asset endpoint that you specify in the asset definition to connect to an OPC UA server. You can use `kubectl` to view the discovery pod that was created when you added the asset endpoint. The pod name looks like `aio-opc-opc.tcp-1-8f96f76-kvdbt`:
-
-```console
-kubectl get pods -n azure-iot-operations
-```
-
-When the OPC PLC simulator is running, dataflows from the simulator, to the connector for OPC UA, and finally to the MQTT broker.
- ## Manage your assets After you select your instance in operations experience, you see the available list of assets on the **Assets** page. If there are no assets yet, this list is empty:
Review your asset and tag details and make any adjustments you need before you s
:::image type="content" source="media/quickstart-add-assets/review-asset.png" alt-text="Screenshot of Azure IoT Operations create asset review page.":::
+This configuration deploys a new asset called `thermostat` to the cluster. You can use `kubectl` to view the assets:
+
+```console
+kubectl get assets -n azure-iot-operations
+```
+ ## Verify data is flowing [!INCLUDE [deploy-mqttui](../includes/deploy-mqttui.md)]
Client $server-generated/05a22b94-c5a2-4666-9c62-837431ca6f7e received PUBLISH (
{"temperature":{"SourceTimestamp":"2024-07-29T15:02:21.1856798Z","Value":4562},"Tag 10":{"SourceTimestamp":"2024-07-29T15:02:21.1857211Z","Value":4562}} ```
-> [!TIP]
-> Data from an asset with a name that starts with _boiler-_ is from an asset that was automatically discovered. This is not the same asset as the thermostat asset you created.
- If there's no data flowing, restart the `aio-opc-opc.tcp-1` pod: 1. Find the name of your `aio-opc-opc.tcp-1` pod by using the following command:
The sample tags you added in the previous quickstart generate messages from your
} ```
-## Discover OPC UA data sources by using Akri services
-
-In the previous section, you saw how to add assets manually. You can also use Akri services to automatically discover OPC UA data sources and create Akri instance custom resources that represent the discovered devices. Currently, Akri services can't detect and create assets that can be ingested into the Azure Device Registry Preview. Therefore, you can't currently manage assets discovered by Akri in the Azure portal.
-
-When you deploy Azure IoT Operations, the deployment includes the Akri discovery handler pods. To verify these pods are running, run the following command:
-
-```console
-kubectl get pods -n azure-iot-operations | grep akri
-```
-
-The output from the previous command looks like the following example:
-
-```output
-aio-akri-otel-collector-5c775f745b-g97qv 1/1 Running 3 (4h15m ago) 2d23h
-aio-akri-agent-daemonset-mp6v7 1/1 Running 3 (4h15m ago) 2d23h
-```
-
-Use the following command to verify that the discovery pod is running:
-
-```console
-kubectl get pods -n azure-iot-operations | grep discovery
-```
-
-The output from the previous command looks like the following example:
-
-```output
-aio-opc-asset-discovery-wzlnj 1/1 Running 0 19m
-```
-
-To configure the Akri services to discover OPC UA data sources, create an Akri configuration that references your OPC UA source. Run the following command to create the configuration:
-
-```console
-kubectl apply -f https://raw.githubusercontent.com/Azure-Samples/explore-iot-operations/main/samples/quickstarts/akri-opcua-asset.yaml
-```
-
-The following snippet shows the YAML file that you applied:
--
-> [!IMPORTANT]
-> There's currently a known issue where the configuration for the asset endpoint contains an invalid setting. To work around this issue, you need to remove the `"securityMode":"none"` setting from the configuration for the `opc-ua-broker-opcplc-000000-50000` asset endpoint. To learn more, see [Connector for OPC UA](../troubleshoot/known-issues.md#akri-services).
-
-To verify the configuration, run the following command to view the Akri instances that represent the OPC UA data sources discovered by Akri services. You might need to wait a few minutes for the configuration to be available:
-
-```console
-kubectl get akrii -n azure-iot-operations
-```
-
-The output from the previous command looks like the following example.
-
-```output
-NAME CONFIG SHARED NODES AGE
-akri-opcua-asset-dbdef0 akri-opcua-asset true ["k3d-k3s-default-server-0"] 45s
-```
-
-Now you can use these resources in the local cluster namespace.
-
-To confirm that the Akri services are connected to the connector for OPC UA, copy and paste the name of the Akri instance from the previous step into the following command:
-
-```console
-kubectl get akrii <AKRI_INSTANCE_NAME> -n azure-iot-operations -o json
-```
-
-The command output looks like the following example. This example excerpt from the output shows the Akri instance `brokerProperties` values and confirms that it's connected the connector for OPC UA.
-
-```json
-"spec": {
-
- "brokerProperties": {
- "ApplicationUri": "Boiler #2",
- "AssetEndpointProfile": "{\"spec\":{\"uuid\":\"opc-ua-broker-opcplc-000000-azure-iot-operation\"……
-```
- ## How did we solve the problem? In this quickstart, you added an asset endpoint and then defined an asset and tags. The assets and tags model data from the OPC UA server to make the data easier to use in an MQTT broker and other downstream processes. You use the thermostat asset you defined in the next quickstart. ## Clean up resources
-If you won't use this deployment further, delete the Kubernetes cluster where you deployed Azure IoT Operations and remove the Azure resource group that contains the cluster.
+If you're continuing on to the next quickstart, keep all of your resources.
+ ## Next step
iot-operations Quickstart Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/get-started-end-to-end-sample/quickstart-deploy.md
Previously updated : 05/02/2024 Last updated : 10/02/2024 #CustomerIntent: As a < type of user >, I want < what? > so that < why? >.
Last updated 05/02/2024
In this quickstart, you deploy a suite of IoT services to an Azure Arc-enabled Kubernetes cluster so that you can remotely manage your devices and workloads. Azure IoT Operations is a digital operations suite of services. This quickstart guides you through using Orchestrator to deploy these services to a Kubernetes cluster. At the end of the quickstart, you have a cluster that you can manage from the cloud that generates sample data to use in the following quickstarts.
-The services deployed in this quickstart include:
+The rest of the quickstarts in this end-to-end series build on this one to define sample assets, data processing pipelines, and visualizations.
-* [MQTT broker](../manage-mqtt-broker/overview-iot-mq.md)
-* [Connector for OPC UA](../discover-manage-assets/overview-opcua-broker.md) with simulated thermostat asset to start generating data
-* [Akri services](../discover-manage-assets/overview-akri.md)
-* [Azure Device Registry Preview](../discover-manage-assets/overview-manage-assets.md#manage-assets-as-azure-resources-in-a-centralized-registry)
-* [Observability](../configure-observability-monitoring/howto-configure-observability.md)
-
-The following quickstarts in this series build on this one to define sample assets, data processing pipelines, and visualizations. If you want to deploy Azure IoT Operations to a cluster such as AKS Edge Essentials in order to run your own workloads, see [Prepare your Azure Arc-enabled Kubernetes cluster](../deploy-iot-ops/howto-prepare-cluster.md?tabs=aks-edge-essentials) and [Deploy Azure IoT Operations Preview to an Arc-enabled Kubernetes cluster](../deploy-iot-ops/howto-deploy-iot-operations.md).
+If you want to deploy Azure IoT Operations to a local cluster such as Azure Kubernetes Service Edge Essentials or K3s on Ubuntu, see [Deployment details](../deploy-iot-ops/overview-deploy.md).
## Before you begin
-This series of quickstarts is intended to help you get started with Azure IoT Operations as quickly as possible so that you can evaluate an end-to-end scenario. In a true development or production environment, these tasks would be performed by multiple teams working together and some tasks might require elevated permissions.
+This series of quickstarts is intended to help you get started with Azure IoT Operations as quickly as possible so that you can evaluate an end-to-end scenario. In a true development or production environment, multiple teams working together perform these tasks and some tasks might require elevated permissions.
For the best new user experience, we recommend using an [Azure free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) so that you have owner permissions over the resources in these quickstarts. We also provide steps to use GitHub Codespaces as a virtual environment in which you can quickly begin deploying resources and running commands without installing new tools on your own machines.
For the best new user experience, we recommend using an [Azure free account](htt
For this quickstart, you create a Kubernetes cluster to receive the Azure IoT Operations deployment.
-If you want to rerun this quickstart with a cluster that already has Azure IoT Operations deployed to it, refer to the steps in [Clean up resources](#clean-up-resources) to uninstall Azure IoT Operations before continuing.
+If you want to reuse a cluster that already has Azure IoT Operations deployed to it, refer to the steps in [Clean up resources](#clean-up-resources) to uninstall Azure IoT Operations before continuing.
Before you begin, prepare the following prerequisites:
Before you begin, prepare the following prerequisites:
* Visual Studio Code installed on your development machine. For more information, see [Download Visual Studio Code](https://code.visualstudio.com/download).
+* **Microsoft.Authorization/roleAssignments/write** permissions at the resource group level.
+ ## What problem will we solve? Azure IoT Operations is a suite of data services that run on Kubernetes clusters. You want these clusters to be managed remotely from the cloud, and able to securely communicate with cloud resources and endpoints. We address these concerns with the following tasks in this quickstart: 1. Create a Kubernetes cluster and connect it to Azure Arc for remote management.
-1. Create an Azure Key Vault to manage secrets for your cluster.
-1. Configure your cluster with a secrets store and service principal to communicate with cloud resources.
+1. Create a schema registry.
1. Deploy Azure IoT Operations to your cluster. ## Connect a Kubernetes cluster to Azure Arc
In this section, you create a new cluster and connect it to Azure Arc. If you wa
[!INCLUDE [prepare-codespaces](../includes/prepare-codespaces.md)]
+To connect your cluster to Azure Arc:
+
+1. In your codespace terminal, sign in to Azure CLI:
+
+ ```azurecli
+ az login
+ ```
+
+ > [!TIP]
+ > If you're using the GitHub codespace environment in a browser rather than VS Code desktop, running `az login` returns a localhost error. To fix the error, either:
+ >
+ > * Open the codespace in VS Code desktop, and then return to the browser terminal and rerun `az login`.
+ > * Or, after you get the localhost error on the browser, copy the URL from the browser and run `curl "<URL>"` in a new terminal tab. You should see a JSON response with the message "You have logged into Microsoft Azure!."
+
+1. After you sign in, Azure CLI displays all of your subscriptions and indicates your default subscription with an asterisk `*`. To continue with your default subscription, select `Enter`. Otherwise, type the number of the Azure subscription that you want to use.
+
+1. Register the required resource providers in your subscription:
+
+ >[!NOTE]
+ >This step only needs to be run once per subscription. To register resource providers, you need permission to do the `/register/action` operation, which is included in subscription Contributor and Owner roles. For more information, see [Azure resource providers and types](../../azure-resource-manager/management/resource-providers-and-types.md).
+
+ ```azurecli
+ az provider register -n "Microsoft.ExtendedLocation"
+ az provider register -n "Microsoft.Kubernetes"
+ az provider register -n "Microsoft.KubernetesConfiguration"
+ az provider register -n "Microsoft.IoTOperations"
+ az provider register -n "Microsoft.DeviceRegistry"
+ ```
+
+1. Use the [az group create](/cli/azure/group#az-group-create) command to create a resource group in your Azure subscription to store all the resources:
+
+ ```azurecli
+ az group create --location $LOCATION --resource-group $RESOURCE_GROUP
+ ```
+
+1. Use the [az connectedk8s connect](/cli/azure/connectedk8s#az-connectedk8s-connect) command to Arc-enable your Kubernetes cluster and manage it as part of your Azure resource group:
+
+ ```azurecli
+ az connectedk8s connect --name $CLUSTER_NAME --location $LOCATION --resource-group $RESOURCE_GROUP
+ ```
+
+ >[!TIP]
+ >The value of `$CLUSTER_NAME` is automatically set to the name of your codespace. Replace the environment variable if you want to use a different name.
+
+1. Get the `objectId` of the Microsoft Entra ID application that the Azure Arc service in your tenant uses and save it as an environment variable.
+
+ ```azurecli
+ export OBJECT_ID=$(az ad sp show --id bc313c14-388c-4e7d-a58e-70017303ee3b --query id -o tsv)
+ ```
+
+1. Use the [az connectedk8s enable-features](/cli/azure/connectedk8s#az-connectedk8s-enable-features) command to enable custom location support on your cluster. This command uses the `objectId` of the Microsoft Entra ID application that the Azure Arc service uses.
+
+ ```azurecli
+ az connectedk8s enable-features -n $CLUSTER_NAME -g $RESOURCE_GROUP --custom-locations-oid $OBJECT_ID --features cluster-connect custom-locations
+ ```
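After the procedure above completes, you can confirm that the cluster shows as connected in Azure Arc. This is a sketch; the `connectivityStatus` property path is an assumption:

```azurecli
# Verify that the Arc-enabled cluster reports as connected (property path is an assumption).
az connectedk8s show --name $CLUSTER_NAME --resource-group $RESOURCE_GROUP --query connectivityStatus -o tsv
```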
## Verify cluster
az iot ops verify-host
This helper command checks connectivity to Azure Resource Manager and Microsoft Container Registry endpoints.
-## Deploy Azure IoT Operations Preview
+## Create a storage account and schema registry
-In this section, you use the [az iot ops init](/cli/azure/iot/ops#az-iot-ops-init) command to configure your cluster so that it can communicate securely with your Azure IoT Operations components and key vault, then deploy Azure IoT Operations.
+Azure IoT Operations requires a schema registry on your cluster. Schema registry requires an Azure storage account so that it can synchronize schema information between cloud and edge.
+
+The command to create a schema registry in this section requires **Microsoft.Authorization/roleAssignments/write** permissions at the resource group level.
Run the following CLI commands in your Codespaces terminal.
-1. Create a key vault. For this scenario, use the same name and resource group as your cluster. Keyvault names have a maximum length of 24 characters, so the following command truncates the `CLUSTER_NAME`environment variable if necessary.
+1. Set environment variables for the resources you create in this section.
+
+ | Placeholder | Value |
+ | -- | -- |
+ | <STORAGE_ACCOUNT_NAME> | A name for your storage account. Storage account names must be between 3 and 24 characters in length and only contain numbers and lowercase letters. |
+ | <SCHEMA_REGISTRY_NAME> | A name for your schema registry. |
+ | <SCHEMA_REGISTRY_NAMESPACE> | A name for your schema registry namespace. The namespace uniquely identifies a schema registry within a tenant. |
+
+ ```azurecli
+ export STORAGE_ACCOUNT=<STORAGE_ACCOUNT_NAME>
+ export SCHEMA_REGISTRY=<SCHEMA_REGISTRY_NAME>
+ export SCHEMA_REGISTRY_NAMESPACE=<SCHEMA_REGISTRY_NAMESPACE>
+ ```
+
+1. Create a storage account with hierarchical namespace enabled.
```azurecli
- az keyvault create --enable-rbac-authorization false --name ${CLUSTER_NAME:0:24} --resource-group $RESOURCE_GROUP
+ az storage account create --name $STORAGE_ACCOUNT --location $LOCATION --resource-group $RESOURCE_GROUP --enable-hierarchical-namespace
```
+1. Create a schema registry that connects to your storage account. This command also creates a blob container called **schemas** in the storage account if one doesn't exist already.
+
+ ```azurecli
+ az iot ops schema registry create --name $SCHEMA_REGISTRY --resource-group $RESOURCE_GROUP --registry-namespace $SCHEMA_REGISTRY_NAMESPACE --sa-resource-id $(az storage account show --name $STORAGE_ACCOUNT -o tsv --query id)
+ ```
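To confirm that the schema registry was created, you can show it and note its resource ID, which the deployment command in the next section references:

```azurecli
# Show the schema registry and capture its resource ID for later use.
az iot ops schema registry show --name $SCHEMA_REGISTRY --resource-group $RESOURCE_GROUP --query id -o tsv
```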
+
+## Deploy Azure IoT Operations Preview
+
+In this section, you configure your cluster with the dependencies for your Azure IoT Operations components, then deploy Azure IoT Operations.
+
+Run the following CLI commands in your Codespaces terminal.
+
+1. Initialize your cluster for Azure IoT Operations.
+ >[!TIP]
- > You can use an existing key vault for your secrets, but verify that the **Permission model** is set to **Vault access policy**. You can check this setting in the Azure portal in the **Access configuration** section of an existing key vault. Or use the [az keyvault show](/cli/azure/keyvault#az-keyvault-show) command to check that `enableRbacAuthorization` is false.
+ >The `init` command only needs to be run once per cluster. If you're reusing a cluster that already had Azure IoT Operations version 0.7.0 deployed on it, you can skip this step.
+
+ ```azurecli
+ az iot ops init --cluster $CLUSTER_NAME --resource-group $RESOURCE_GROUP --sr-resource-id $(az iot ops schema registry show --name $SCHEMA_REGISTRY --resource-group $RESOURCE_GROUP -o tsv --query id)
+ ```
+
+ This command might take several minutes to complete. You can watch the progress in the deployment progress display in the terminal.
1. Deploy Azure IoT Operations. This command takes several minutes to complete: ```azurecli
- az iot ops init --simulate-plc --cluster $CLUSTER_NAME --resource-group $RESOURCE_GROUP --kv-id $(az keyvault show --name ${CLUSTER_NAME:0:24} -o tsv --query id)
+ az iot ops create --cluster $CLUSTER_NAME --resource-group $RESOURCE_GROUP --name ${CLUSTER_NAME}-instance
```
- If you get an error that says *Your device is required to be managed to access your resource*, run `az login` again and make sure that you sign in interactively with a browser.
+ This command might take several minutes to complete. You can watch the progress in the deployment progress display in the terminal.
- >[!TIP]
- >If you've run `az iot ops init` before, it automatically created an app registration in Microsoft Entra ID for you. You can reuse that registration rather than creating a new one each time. To use an existing app registration, add the optional parameter `--sp-app-id <APPLICATION_CLIENT_ID>`.
+ If you get an error that says *Your device is required to be managed to access your resource*, run `az login` again and make sure that you sign in interactively with a browser.
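When the deployment finishes, you can confirm the instance from the CLI as well. This is a sketch; it assumes the `az iot ops show` command is available in your version of the azure-iot-ops extension:

```azurecli
# Show the Azure IoT Operations instance that was just created (command availability is an assumption).
az iot ops show --name ${CLUSTER_NAME}-instance --resource-group $RESOURCE_GROUP
```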
## View resources in your cluster
-While the deployment is in progress, you can watch the resources being applied to your cluster. You can use kubectl commands to observe changes on the cluster or, since the cluster is Arc-enabled, you can use the Azure portal.
+While the deployment is in progress, the CLI progress interface shows you the deployment stage that you're in. Once the deployment is complete, you can use kubectl commands to observe changes on the cluster or, since the cluster is Arc-enabled, you can use the Azure portal.
To view the pods on your cluster, run the following command:
To view the pods on your cluster, run the following command:
kubectl get pods -n azure-iot-operations ```
-It can take several minutes for the deployment to complete. Continue running the `get pods` command to refresh your view.
- To view your resources on the Azure portal, use the following steps: 1. In the [Azure portal](https://portal.azure.com), go to the resource group that contains your Azure IoT Operations instance, or search for and select **Azure IoT Operations**. 1. Select the name of your Azure IoT Operations instance.
-1. On the **Overview** page of your instance, the **Arc extensions** table displays the resources that were deployed to your cluster.
+1. On the **Overview** page of your instance, the **Arc extensions** tab displays the resources that were deployed to your cluster.
:::image type="content" source="../get-started-end-to-end-sample/media/quickstart-deploy/view-instance.png" alt-text="Screenshot that shows the Azure IoT Operations instance on your Arc-enabled cluster." lightbox="../get-started-end-to-end-sample/media/quickstart-deploy/view-instance.png":::
In this quickstart, you configured your Arc-enabled Kubernetes cluster so that i
If you're continuing on to the next quickstart, keep all of your resources.
-If you want to delete the Azure IoT Operations deployment but want to keep your cluster, use the [az iot ops delete](/cli/azure/iot/ops#az-iot-ops-delete) command.
-
- ```azurecli
- az iot ops delete --cluster $CLUSTER_NAME --resource-group $RESOURCE_GROUP
- ```
-
-If you want to delete all of the resources you created for this quickstart, delete the Kubernetes cluster where you deployed Azure IoT Operations and remove the Azure resource group that contained the cluster.
## Next step
iot-operations Quickstart Get Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/get-started-end-to-end-sample/quickstart-get-insights.md
- ignite-2023 Previously updated : 08/05/2024 Last updated : 10/01/2024 #CustomerIntent: As an OT user, I want to create a visual report for my processed OPC UA data that I can use to analyze and derive insights from it.
Before you begin this quickstart, you must complete the following quickstarts:
- [Quickstart: Add OPC UA assets to your Azure IoT Operations Preview cluster](quickstart-add-assets.md) - [Quickstart: Send asset telemetry to the cloud using a dataflow](quickstart-upload-telemetry-to-cloud.md)
-You also need a Microsoft Fabric subscription. In your subscription, you need access to a **premium workspace** with **Contributor** or above permissions.
+You also need a Microsoft Fabric subscription. In your subscription, you need access to a workspace with **Contributor** or above permissions.
Additionally, your Fabric tenant must allow the creation of Real-Time Dashboards. This is a setting that can be enabled by your tenant administrator. For more information, see [Enable tenant settings in the admin portal](/fabric/real-time-intelligence/dashboard-real-time-create#enable-tenant-settings-in-the-admin-portal).
In this section, you set up a Microsoft Fabric *eventstream* to connect your eve
In this section, you create an eventstream that will be used to bring your data from Event Hubs into Microsoft Fabric Real-Time Intelligence, and eventually into a KQL database.
-Start by navigating to the [Real-Time Intelligence experience in Microsoft Fabric](https://msit.powerbi.com/home?experience=kusto).
+Start by navigating to the [Real-Time Intelligence experience in Microsoft Fabric](https://msit.powerbi.com/home?experience=kusto) and opening your Fabric workspace.
-Follow the steps in [Create an eventstream in Microsoft Fabric](/fabric/real-time-intelligence/event-streams/create-manage-an-eventstream?pivots=enhanced-capabilities) to create a new eventstream from the Real-Time Intelligence capabilities.
+Follow the steps in [Create an eventstream in Microsoft Fabric](/fabric/real-time-intelligence/event-streams/create-manage-an-eventstream?pivots=standard-capabilities#create-an-eventstream-1) to create a new eventstream resource in your workspace.
-After the eventstream is created, you'll see the main editor where you can start adding sources to the eventstream.
-
+After the eventstream is created, you'll see the main editor where you can start building the eventstream.
### Add event hub as a source Next, add your event hub from the previous quickstart as a data source for the eventstream.
-Follow the steps in [Add Azure Event Hubs source to an eventstream](/fabric/real-time-intelligence/event-streams/add-source-azure-event-hubs?pivots=enhanced-capabilities) to add the event source. Keep the following notes in mind:
-* When it's time to select a **Data format**, choose *Json* (it might be selected already by default).
-* Make sure to complete all the steps in the article through selecting **Publish** on the ribbon.
+Follow the steps in [Add Azure Event Hubs source to an eventstream](/fabric/real-time-intelligence/event-streams/add-source-azure-event-hubs?pivots=standard-capabilities#add-an-azure-event-hub-as-a-source) to add the event source. Keep the following notes in mind:
+
+- You'll be creating a new cloud connection with Shared Access Key authentication.
+ - Make sure local authentication is enabled on your event hub. You can set this from its Overview page in the Azure portal.
+- For **Consumer group**, use the default selection (*$Default*).
+- For **Data format**, choose *Json* (it might be selected already by default).
After completing this flow, the Azure event hub is visible in the eventstream live view as a source.
In this section, you create a KQL database in your Microsoft Fabric workspace to
1. Follow the steps in [Create an eventhouse](/fabric/real-time-intelligence/create-eventhouse#create-an-eventhouse-1) to create a Real-Time Intelligence eventhouse with a child KQL database. You only need to complete the section entitled **Create an eventhouse**.
-1. Next, create a KQL table in your database. Call it *OPCUA* and use the following columns.
+1. Next, create a table in your database. Call it *OPCUA* and use the following columns.
| Column name | Data type | | | |
- | Temperature | decimal |
- | Pressure | decimal |
+ | AssetId | string |
+ | Temperature | decimal |
+ | Humidity | decimal |
| Timestamp | datetime | 1. After the *OPCUA* table has been created, select it and use the **Explore your data** button to open a query window for the table.
In this section, you create a KQL database in your Microsoft Fabric workspace to
1. Run the following KQL query to create a data mapping for your table. The data mapping will be called *opcua_mapping*. ```kql
- .create table ['OPCUA'] ingestion json mapping 'opcua_mapping' '[{"column":"Temperature", "Properties":{"Path":"$.temperature.Value"}},{"column":"Pressure", "Properties":{"Path":"$.[\'Tag 10\'].Value"}},{"column":"Timestamp", "Properties":{"Path":"$[\'EventProcessedUtcTime\']"}}]'
- ```
+ .create table ['OPCUA'] ingestion json mapping 'opcua_mapping' '[{"column":"AssetId", "Properties":{"Path":"$[\'AssetId\']"}},{"column":"Temperature", "Properties":{"Path":"$.Temperature.Value"}},{"column":"Humidity", "Properties":{"Path":"$.Humidity.Value"}},{"column":"Timestamp", "Properties":{"Path":"$[\'EventProcessedUtcTime\']"}}]'
+ ```
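Once data starts arriving (after you connect the destination in the next section), a quick sanity check is to confirm that rows land in the table with the columns you expect:

```kql
// Preview a few ingested rows to confirm the mapping populated each column.
OPCUA
| take 10
```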
### Add data table as a destination Next, return to your eventstream view, where you can add your new KQL table as an eventstream destination.
-Follow the steps in [Add a KQL Database destination to an eventstream](/fabric/real-time-intelligence/event-streams/add-destination-kql-database?pivots=enhanced-capabilities#direct-ingestion-mode) to add the destination. Keep the following notes in mind:
-* Use direct ingestion mode.
-* On the **Configure** step, select the *OPCUA* table that you created earlier.
-* On the **Inspect** step, open the **Advanced** options. Under **Mapping**, select **Existing mapping** and choose *opcua_mapping*.
+Follow the steps in [Add a KQL Database destination to an eventstream](/fabric/real-time-intelligence/event-streams/add-destination-kql-database?pivots=standard-capabilities#direct-ingestion-mode) to add the destination. Keep the following notes in mind:
+
+- Use direct ingestion mode.
+- On the **Configure** step, select the *OPCUA* table that you created earlier.
+- On the **Inspect** step, open the **Advanced** options. Under **Mapping**, select **Existing mapping** and choose *opcua_mapping*.
:::image type="content" source="media/quickstart-get-insights/existing-mapping.png" alt-text="Screenshot adding an existing mapping.":::
-
+ >[!TIP] >If no existing mappings are found, try refreshing the eventstream editor and restarting the steps to add the destination. Alternatively, you can initiate this same configuration process from the KQL table instead of from the eventstream, as described in [Get data from Eventstream](/fabric/real-time-intelligence/get-data-eventstream).
If you want, you can also view and query this data in your KQL database directly
## Create a Real-Time Dashboard
-In this section, you'll create a new [Real-Time Dashboard](/fabric/real-time-intelligence/dashboard-real-time-create) to visualize your quickstart data. The dashboard will automatically allow filtering by timestamp, and will display visual summaries of temperature and pressure data.
+In this section, you'll create a new [Real-Time Dashboard](/fabric/real-time-intelligence/dashboard-real-time-create) to visualize your quickstart data. The dashboard will allow filtering by asset ID and timestamp, and will display visual summaries of temperature and humidity data.
>[!NOTE] >You can only create Real-Time Dashboards if your tenant admin has enabled the creation of Real-Time Dashboards in your Fabric tenant. For more information, see [Enable tenant settings in the admin portal](/fabric/real-time-intelligence/dashboard-real-time-create#enable-tenant-settings-in-the-admin-portal).
In this section, you'll create a new [Real-Time Dashboard](/fabric/real-time-int
Follow the steps in the [Create a new dashboard](/fabric/real-time-intelligence/dashboard-real-time-create#create-a-new-dashboard) section to create a new Real-Time Dashboard from the Real-Time Intelligence capabilities.
-Then, follow the steps in the [Add data source](/fabric/real-time-intelligence/dashboard-real-time-create#add-data-source) section to add your database as a data source. Keep the following notes in mind:
-* In the **Data sources** pane, your database will be under **OneLake data hub**.
+Then, follow the steps in the [Add data source](/fabric/real-time-intelligence/dashboard-real-time-create#add-data-source) section to add your database as a data source. Keep the following note in mind:
+
+- In the **Data sources** pane, your database will be under **OneLake data hub**.
+
+### Configure parameters
+
+Next, configure some parameters for your dashboard so that the visuals can be filtered by asset ID and timestamp. The dashboard comes with a default parameter to filter by time range, so you only need to create one that can filter by asset ID.
+
+1. Switch to the **Manage** tab, and select **Parameters**. Select **+ Add** to add a new parameter.
+
+ :::image type="content" source="media/quickstart-get-insights/add-parameter.png" alt-text="Screenshot of adding a parameter to a dashboard.":::
+
+1. Create a new parameter with the following characteristics:
+ * **Label**: *Asset*
+ * **Parameter type**: *Single selection* (already selected by default)
+ * **Variable name**: *_asset*
+ * **Data type**: *string* (already selected by default)
+ * **Source**: *Query*
+ * **Data source**: Your database (already selected by default)
+ * Select **Edit query** and add the following KQL query.
+
+ ```kql
+ OPCUA
+ | summarize by AssetId
+ ```
+ * **Value column**: *AssetId*
+ * **Default value**: *Select first value of query*
+
+1. Select **Done** to save your parameter.
### Create line chart tile
-Next, add a tile to your dashboard to show a line chart of temperature and pressure over time for the selected time range.
+Next, add a tile to your dashboard to show a line chart of temperature and humidity over time for the selected asset and time range.
1. Select either **+ Add tile** or **New tile** to add a new tile. :::image type="content" source="media/quickstart-get-insights/add-tile.png" alt-text="Screenshot of adding a tile to a dashboard.":::
-1. Enter the following KQL query for the tile. This query applies a built-in filter parameter from the dashboard selector for time range, and pulls the resulting records with their timestamp, temperature, and pressure.
+1. Enter the following KQL query for the tile. This query applies filter parameters from the dashboard selectors for time range and asset, and pulls the resulting records with their timestamp, temperature, and humidity.
```kql OPCUA | where Timestamp between (_startTime.._endTime)
- | project Timestamp, Temperature, Pressure
+ | where AssetId == _asset
+ | project Timestamp, Temperature, Humidity
``` **Run** the query to verify that data can be found.
Next, add a tile to your dashboard to show a line chart of temperature and press
:::image type="content" source="media/quickstart-get-insights/chart-query.png" alt-text="Screenshot of adding a tile query."::: 1. Select **+ Add visual** next to the query results to add a visual for this data. Create a visual with the following characteristics:
- * **Tile name**: *Temperature and pressure over time*
- * **Visual type**: *Line chart*
- * **Data**:
- * **Y columns**: *Temperature (decimal)* and *Pressure (decimal)* (already inferred by default)
- * **X columns**: *Timestamp (datetime)* (already inferred by default)
- * **Y Axis**:
- * **Label**: *Units*
- * **X Axis**:
- * **Label**: *Timestamp*
+
+ - **Tile name**: *Temperature and humidity over time*
+ - **Visual type**: *Line chart*
+ - **Data**:
+ - **Y columns**: *Temperature (decimal)* and *Humidity (decimal)* (already inferred by default)
+ - **X columns**: *Timestamp (datetime)* (already inferred by default)
+ - **Y Axis**:
+ - **Label**: *Units*
+ - **X Axis**:
+ - **Label**: *Timestamp*
Select **Apply changes** to create the tile.
View the finished tile on your dashboard.
### Create max value tiles
-Next, create some tiles to display the maximum values of temperature and pressure.
+Next, create some tiles to display the maximum values of temperature and humidity.
1. Select **New tile** to create a new tile.
-1. Enter the following KQL query for the tile. This query applies a built-in filter parameter from the dashboard selector for time range, and takes the highest temperature value from the resulting records.
-
+1. Enter the following KQL query for the tile. This query applies filter parameters from the dashboard selectors for time range and asset, and takes the highest temperature value from the resulting records.
+ ```kql OPCUA | where Timestamp between (_startTime.._endTime)
+ | where AssetId == _asset
| top 1 by Temperature desc | summarize by Temperature ```
Next, create some tiles to display the maximum values of temperature and pressur
**Run** the query to verify that a maximum temperature can be found. 1. Select **+ Add visual** to add a visual for this data. Create a visual with the following characteristics:
- * **Tile name**: *Max temperature*
- * **Visual type**: *Stat*
- * **Data**:
- * **Value column**: *Temperature (decimal)* (already inferred by default)
-
+ - **Tile name**: *Max temperature*
+ - **Visual type**: *Stat*
+ - **Data**:
+ - **Value column**: *Temperature (decimal)* (already inferred by default)
+ Select **Apply changes** to create the tile.
-
+ :::image type="content" source="media/quickstart-get-insights/stat-visual.png" alt-text="Screenshot of adding a stat visual."::: 1. View the finished tile on your dashboard (you may want to resize the tile so the full text is visible).
Next, create some tiles to display the maximum values of temperature and pressur
This creates a duplicate tile on the dashboard. 1. On the new tile, select the pencil icon to edit it.
-1. Replace *Temperature* in the KQL query with *Pressure*, so that it matches the query below.
+1. Replace *Temperature* in the KQL query with *Humidity*, so that it matches the query below.
```kql OPCUA | where Timestamp between (_startTime.._endTime)
- | top 1 by Pressure desc
- | summarize by Pressure
+ | where AssetId == _asset
+ | top 1 by Humidity desc
+ | summarize by Humidity
```
- **Run** the query to verify that a maximum pressure can be found.
+ **Run** the query to verify that a maximum humidity can be found.
1. In the **Visual formatting** pane, update the following characteristics:
- * **Tile name**: *Max pressure*
- * **Data**:
- * **Value column**: *Pressure (decimal)* (already inferred by default)
+ - **Tile name**: *Max humidity*
+ - **Data**:
+ - **Value column**: *Humidity (decimal)* (already inferred by default)
Select **Apply changes**.
This completes the final step in the quickstart flow for using Azure IoT Operati
## Clean up resources
-If you're not going to continue to use this deployment, delete the Kubernetes cluster where you deployed Azure IoT Operations. In Azure, remove the Azure resource group that contains the cluster and your event hub. If you used Codespaces for these quickstarts, delete your Codespace from GitHub.
+If you're continuing on to the next quickstart, keep all of your resources.
++
+> [!NOTE]
+> The resource group contains the Event Hubs namespace you created in this quickstart.
-You can also delete your Microsoft Fabric workspace and/or all the resources within it associated with this quickstart, including the eventstream, Eventhouse, and Real-Time Dashboard.
+You can also delete your Microsoft Fabric workspace and/or all the resources within it associated with this quickstart, including the eventstream, Eventhouse, and Real-Time Dashboard.
iot-operations Quickstart Upload Telemetry To Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/get-started-end-to-end-sample/quickstart-upload-telemetry-to-cloud.md
Before you begin this quickstart, you must complete the following quickstarts:
To use a tool such as Real-Time Dashboard to analyze your OPC UA data, you need to send the data to a cloud service such as Azure Event Hubs. A dataflow can subscribe to an MQTT topic and forward the messages to an event hub in your Azure Event Hubs namespace. The next quickstart shows you how to use Real-Time Dashboards to visualize and analyze your data.
+## Set your environment variables
+
+If you're using the Codespaces environment, the required environment variables are already set and you can skip this step. Otherwise, set the following environment variables in your shell:
+
+# [Bash](#tab/bash)
+
+```bash
+# The name of the resource group where your Kubernetes cluster is deployed
+RESOURCE_GROUP=<resource-group-name>
+
+# The name of your Kubernetes cluster
+CLUSTER_NAME=<kubernetes-cluster-name>
+```
+
+# [PowerShell](#tab/powershell)
+
+```powershell
+# The name of the resource group where your Kubernetes cluster is deployed
+$RESOURCE_GROUP = "<resource-group-name>"
+
+# The name of your Kubernetes cluster
+$CLUSTER_NAME = "<kubernetes-cluster-name>"
+```
+++ ## Create an Event Hubs namespace
-To create an Event Hubs namespace and an event hub, run the following Azure CLI commands in your Codespaces terminal. These commands create the Event Hubs namespace in the same resource group as your Kubernetes cluster:
+To create an Event Hubs namespace and an event hub, run the following Azure CLI commands in your shell. These commands create the Event Hubs namespace in the same resource group as your Kubernetes cluster:
-```azurecli
-az eventhubs namespace create --name ${CLUSTER_NAME:0:24} --resource-group $RESOURCE_GROUP --location $LOCATION
+# [Bash](#tab/bash)
+
+```bash
+az eventhubs namespace create --name ${CLUSTER_NAME:0:24} --resource-group $RESOURCE_GROUP --disable-local-auth true
az eventhubs eventhub create --name destinationeh --resource-group $RESOURCE_GROUP --namespace-name ${CLUSTER_NAME:0:24} --retention-time 1 --partition-count 1 --cleanup-policy Delete ```
+# [PowerShell](#tab/powershell)
+
+```powershell
+az eventhubs namespace create --name $CLUSTER_NAME.Substring(0, [MATH]::Min($CLUSTER_NAME.Length, 24)) --resource-group $RESOURCE_GROUP --disable-local-auth true
+
+az eventhubs eventhub create --name destinationeh --resource-group $RESOURCE_GROUP --namespace-name $CLUSTER_NAME.Substring(0, [MATH]::Min($CLUSTER_NAME.Length, 24)) --retention-time 1 --partition-count 1 --cleanup-policy Delete
+```
+++ To grant the Azure IoT Operations extension in your cluster access to your Event Hubs namespace, run the following Azure CLI commands:
+# [Bash](#tab/bash)
+ ```bash
-# AIO Arc extension name
-AIO_EXTENSION_NAME=$(az k8s-extension list --resource-group $RESOURCE_GROUP --cluster-name $CLUSTER_NAME --cluster-type connectedClusters -o tsv --query "[?extensionType=='microsoft.iotoperations'].name")
-
-az deployment group create \
- --name assign-RBAC-roles \
- --resource-group $RESOURCE_GROUP \
- --template-file samples/quickstarts/event-hubs-config.bicep \
- --parameters aioExtensionName=$AIO_EXTENSION_NAME \
- --parameters clusterName=$CLUSTER_NAME \
- --parameters eventHubNamespaceName=${CLUSTER_NAME:0:24}
+EVENTHUBRESOURCE=$(az eventhubs namespace show --resource-group $RESOURCE_GROUP --namespace-name ${CLUSTER_NAME:0:24} --query id -o tsv)
+
+PRINCIPAL=$(az k8s-extension list --resource-group $RESOURCE_GROUP --cluster-name $CLUSTER_NAME --cluster-type connectedClusters -o tsv --query "[?extensionType=='microsoft.iotoperations'].identity.principalId")
+
+az role assignment create --role "Azure Event Hubs Data Sender" --assignee $PRINCIPAL --scope $EVENTHUBRESOURCE
+```
+
+# [PowerShell](#tab/powershell)
+
+```powershell
+$EVENTHUBRESOURCE = $(az eventhubs namespace show --resource-group $RESOURCE_GROUP --namespace-name $CLUSTER_NAME.Substring(0, [MATH]::Min($CLUSTER_NAME.Length, 24)) --query id -o tsv)
+
+$PRINCIPAL = $(az k8s-extension list --resource-group $RESOURCE_GROUP --cluster-name $CLUSTER_NAME --cluster-type connectedClusters -o tsv --query "[?extensionType=='microsoft.iotoperations'].identity.principalId")
+
+az role assignment create --role "Azure Event Hubs Data Sender" --assignee $PRINCIPAL --scope $EVENTHUBRESOURCE
``` ++ ## Create a dataflow to send telemetry to an event hub
-To create and configure a dataflow in your cluster, run the following commands in your Codespaces terminal. This dataflow forwards messages from the MQTT topic to the event hub you created without making any changes:
+To create and configure a dataflow in your cluster, run the following commands in your shell. This dataflow:
+
+- Renames the `Tag 10` field in the incoming message to `Humidity`.
+- Renames the `temperature` field in the incoming message to `Temperature`.
+- Adds a field called `AssetId` that contains the value of the `externalAssetId` message property.
+- Forwards the transformed messages from the MQTT topic to the event hub you created.
+
+<!-- TODO: Change branch to main before merging the release branch -->
+
+# [Bash](#tab/bash)
```bash
-sed 's/<NAMESPACE>/'"${CLUSTER_NAME:0:24}"'/' samples/quickstarts/dataflow.yaml > dataflow.yaml
+wget https://raw.githubusercontent.com/Azure-Samples/explore-iot-operations/main/samples/quickstarts/dataflow.yaml
+sed -i 's/<NAMESPACE>/'"${CLUSTER_NAME:0:24}"'/' dataflow.yaml
kubectl apply -f dataflow.yaml ```
+# [PowerShell](#tab/powershell)
+
+```powershell
+Invoke-WebRequest -Uri https://raw.githubusercontent.com/Azure-Samples/explore-iot-operations/main/samples/quickstarts/dataflow.yaml -OutFile dataflow.yaml
+
+(Get-Content dataflow.yaml) | ForEach-Object { $_ -replace '<NAMESPACE>', $CLUSTER_NAME.Substring(0, [MATH]::Min($CLUSTER_NAME.Length, 24)) } | Set-Content dataflow.yaml
+
+kubectl apply -f dataflow.yaml
+```
+++
+## Verify data is flowing
+
+To verify that data is flowing to the cloud, you can view your Event Hubs instance in the Azure portal.
+
+If messages are flowing to the instance, you can see the count on incoming messages on the instance **Overview** page:
++
+If messages are flowing, you can use the **Data Explorer** to view the messages:
++
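You can also check the incoming-message count from the command line. The following is a minimal sketch, assuming the `$EVENTHUBRESOURCE` variable set in the earlier role-assignment step and the built-in `IncomingMessages` metric for the namespace:

```bash
# Check the namespace-level incoming message metric
# (assumes $EVENTHUBRESOURCE holds the Event Hubs namespace resource ID from the earlier steps)
az monitor metrics list --resource $EVENTHUBRESOURCE --metric IncomingMessages --output table
```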
+> [!TIP]
+> You may need to assign yourself to the **Azure Event Hubs Data Receiver** role for the Event Hubs namespace to view the messages.
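For example, here's a hedged sketch that grants the role to your signed-in user, reusing the `$EVENTHUBRESOURCE` variable from the earlier steps:

```bash
# Assign yourself the Azure Event Hubs Data Receiver role on the namespace
# (assumes $EVENTHUBRESOURCE holds the Event Hubs namespace resource ID)
az role assignment create --role "Azure Event Hubs Data Receiver" \
  --assignee $(az ad signed-in-user show --query id -o tsv) \
  --scope $EVENTHUBRESOURCE
```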
+ ## How did we solve the problem? In this quickstart, you used a dataflow to connect an MQTT topic to an event hub in your Azure Event Hubs namespace. In the next quickstart, you use Microsoft Fabric Real-Time Intelligence to visualize the data. ## Clean up resources
-If you're not going to continue to use this deployment, delete the Kubernetes cluster where you deployed Azure IoT Operations and remove the Azure resource group that contains the cluster.
+If you're continuing on to the next quickstart, keep all of your resources.
+
-You can also delete your Microsoft Fabric workspace.
+> [!NOTE]
+> The resource group contains the Event Hubs namespace you created in this quickstart.
## Next step
iot-operations Concept Iot Operations In Layered Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/manage-layered-network/concept-iot-operations-in-layered-network.md
In the pictured example, Azure IoT Operations is deployed to level 2 through 4.
With extra configurations, the Layered Network Management service can also direct east-west traffic. This route enables Azure IoT Operations components to send data to other components at upper level and form data pipelines from the bottom layer to the cloud. In a multi-layer network, the Azure IoT Operations components can be deployed across layers based on your architecture and dataflow needs. This example provides some general ideas of where individual components will be placed.-- The **OPC UA Broker** may locate at the lower layer that is closer to your assets and OPC UA servers. This is also true for the **Akri** agent.
+- The **connector for OPC UA** may be located at a lower layer, closer to your assets and OPC UA servers.
- The data shall be transferred towards the cloud side through the **MQ** components in each layer. - The **Data Processor** is generally placed at the top layer as the most likely layer to have significant compute capacity and as a final stop for the data to get prepared before being sent to the cloud.
iot-operations Howto Configure L3 Cluster Layered Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/manage-layered-network/howto-configure-l3-cluster-layered-network.md
login.microsoftonline.com. 0 IN A 100.104.0.165
az provider register -n "Microsoft.ExtendedLocation" az provider register -n "Microsoft.Kubernetes" az provider register -n "Microsoft.KubernetesConfiguration"
- az provider register -n "Microsoft.IoTOperationsOrchestrator"
az provider register -n "Microsoft.IoTOperations" az provider register -n "Microsoft.DeviceRegistry" ```
login.microsoftonline.com. 0 IN A 100.104.0.165
> These steps are for AKS Edge Essentials only. After you've deployed Azure IoT Operations to your cluster, enable inbound connections to MQTT broker and configure port forwarding:
-1. Enable a firewall rule for port 8883:
+1. Enable a firewall rule for port 18883:
```powershell
- New-NetFirewallRule -DisplayName "MQTT broker" -Direction Inbound -Protocol TCP -LocalPort 8883 -Action Allow
+ New-NetFirewallRule -DisplayName "MQTT broker" -Direction Inbound -Protocol TCP -LocalPort 18883 -Action Allow
```
-1. Run the following command and make a note of the IP address for the service called `aio-mq-dmqtt-frontend`:
+1. Run the following command and make a note of the IP address for the service called `aio-broker`:
```cmd
- kubectl get svc aio-mq-dmqtt-frontend -n azure-iot-operations -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
+ kubectl get svc aio-broker -n azure-iot-operations -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```
-1. Enable port forwarding for port 8883. Replace `<aio-mq-dmqtt-frontend IP address>` with the IP address you noted in the previous step:
+1. Enable port forwarding for port 18883. Replace `<aio-broker IP address>` with the IP address you noted in the previous step:
```cmd
- netsh interface portproxy add v4tov4 listenport=8883 listenaddress=0.0.0.0 connectport=8883 connectaddress=<aio-mq-dmqtt-frontend IP address>
+ netsh interface portproxy add v4tov4 listenport=18883 listenaddress=0.0.0.0 connectport=18883 connectaddress=<aio-broker IP address>
``` ## Related content
iot-operations Howto Configure L4 Cluster Layered Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/manage-layered-network/howto-configure-l4-cluster-layered-network.md
The following steps for setting up [AKS Edge Essentials](/azure/aks/hybrid/aks-e
az provider register -n "Microsoft.ExtendedLocation" az provider register -n "Microsoft.Kubernetes" az provider register -n "Microsoft.KubernetesConfiguration"
- az provider register -n "Microsoft.IoTOperationsOrchestrator"
az provider register -n "Microsoft.IoTOperations" az provider register -n "Microsoft.DeviceRegistry" ```
iot-operations Howto Configure Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/manage-mqtt-broker/howto-configure-authentication.md
- ignite-2023 Previously updated : 08/02/2024 Last updated : 08/29/2024 #CustomerIntent: As an operator, I want to configure authentication so that I have secure MQTT broker communications.
Last updated 08/02/2024
[!INCLUDE [public-preview-note](../includes/public-preview-note.md)]
-MQTT broker supports multiple authentication methods for clients, and you can configure each listener to have its own authentication system with *BrokerAuthentication* resources.
+MQTT broker supports multiple authentication methods for clients, and you can configure each listener to have its own authentication system with *BrokerAuthentication* resources. For a list of the available settings, see the [Broker Authentication](/rest/api/iotoperationsmq/broker-authentication) API reference.
+
+## Link BrokerListener and BrokerAuthentication
+
+The following rules apply to the relationship between BrokerListener and *BrokerAuthentication*:
+
+* Each BrokerListener can have multiple ports. Each port can be linked to a *BrokerAuthentication* resource.
+* Each *BrokerAuthentication* can support multiple authentication methods at once.
+
+To link a BrokerListener to a *BrokerAuthentication* resource, specify the `authenticationRef` field in the `ports` setting of the BrokerListener resource. To learn more, see [BrokerListener resource](./howto-configure-brokerlistener.md).
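For example, here's a minimal, hedged sketch of a *BrokerListener* that links a port to the default `authn` resource. The listener name, service name, and port are placeholders, and remember that a cluster can have only one listener per service type:

```bash
# Sketch: create a listener whose port is linked to the BrokerAuthentication resource "authn"
kubectl apply -f - <<'EOF'
apiVersion: mqttbroker.iotoperations.azure.com/v1beta1
kind: BrokerListener
metadata:
  name: example-listener
  namespace: azure-iot-operations
spec:
  brokerRef: broker
  serviceType: LoadBalancer
  serviceName: example-listener-svc
  ports:
  - port: 1883
    protocol: Mqtt
    authenticationRef: authn   # links this port to the BrokerAuthentication resource named "authn"
EOF
```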
## Default BrokerAuthentication resource
-Azure IoT Operations Preview deploys a default BrokerAuthentication resource named `authn` linked with the default listener named `listener` in the `azure-iot-operations` namespace. It's configured to only use Kubernetes Service Account Tokens (SATs) for authentication. To inspect it, run:
+Azure IoT Operations Preview deploys a default *BrokerAuthentication* resource named `authn` linked with the default listener named `listener` in the `azure-iot-operations` namespace. It's configured to only use Kubernetes Service Account Tokens (SATs) for authentication. To inspect it, run:
```bash kubectl get brokerauthentication authn -n azure-iot-operations -o yaml ```
-The output shows the default BrokerAuthentication resource, with metadata removed for brevity:
+The output shows the default *BrokerAuthentication* resource, with metadata removed for brevity:
```yaml apiVersion: mqttbroker.iotoperations.azure.com/v1beta1
metadata:
namespace: azure-iot-operations spec: authenticationMethods:
- - method: ServiceAccountToken
- serviceAccountToken:
- audiences:
- - aio-mq
+ - method: ServiceAccountToken
+ serviceAccountTokenSettings:
+ audiences:
+ - "aio-internal"
```
-To change the configuration, modify the `authenticationMethods` setting in this BrokerAuthentication resource or create new brand new BrokerAuthentication resource with a different name. Then, deploy it using `kubectl apply`.
-
-## Relationship between BrokerListener and BrokerAuthentication
-
-The following rules apply to the relationship between BrokerListener and BrokerAuthentication:
+> [!IMPORTANT]
+> The service account token (SAT) authentication method in the default *BrokerAuthentication* resource is required for components in the Azure IoT Operations to function correctly. Avoid updating or deleting the default *BrokerAuthentication* resource. If you need to make changes, modify the `authenticationMethods` field in this resource while retaining the SAT authentication method with the `aio-internal` audience. Preferably, you can create a new *BrokerAuthentication* resource with a different name and deploy it using `kubectl apply`.
-* Each BrokerListener can have multiple ports. Each port can be linked to a BrokerAuthentication resource.
-* Each BrokerAuthentication can support multiple authentication methods at once
+To change the configuration, modify the `authenticationMethods` setting in this *BrokerAuthentication* resource or create a brand new *BrokerAuthentication* resource with a different name. Then, deploy it using `kubectl apply`.
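For example, here's a minimal sketch that creates a separate *BrokerAuthentication* resource; the resource name and audience are placeholders:

```bash
# Sketch: create a new BrokerAuthentication resource instead of modifying the default one
kubectl apply -f - <<'EOF'
apiVersion: mqttbroker.iotoperations.azure.com/v1beta1
kind: BrokerAuthentication
metadata:
  name: custom-authn            # placeholder name
  namespace: azure-iot-operations
spec:
  authenticationMethods:
  - method: ServiceAccountToken
    serviceAccountTokenSettings:
      audiences:
      - "my-audience"           # placeholder audience
EOF
```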
## Authentication flow
metadata:
spec: authenticationMethods: - method: Custom
- custom:
+ customSettings:
# ... - method: ServiceAccountToken
- serviceAccountToken:
+ serviceAccountTokenSettings:
# ...
- - method: x509Credentials
- x509Credentials:
+ - method: X509
+ x509Settings:
# ... ```
For testing, you can disable authentication by omitting `authenticationRef` in t
## Configure authentication method
-To learn more about each of the authentication options, see the following sections:
-
-## X.509 client certificate
-
-### Prerequisites
+To learn more about each of the authentication options, see the next sections for each method.
-- MQTT broker configured with [TLS enabled](howto-configure-brokerlistener.md).-- [Step-CLI](https://smallstep.com/docs/step-cli/installation/)-- Client certificates and the issuing certificate chain in PEM files. If you don't have any, use Step CLI to generate some.-- Familiarity with public key cryptography and terms like root CA, private key, and intermediate certificates.
+For more information about enabling secure settings by configuring an Azure Key Vault and enabling workload identities, see [Enable secure settings in Azure IoT Operations Preview deployment](../deploy-iot-ops/howto-enable-secure-settings.md).
-Both EC and RSA keys are supported, but all certificates in the chain must use the same key algorithm. If you're importing your own CA certificates, ensure that the client certificate uses the same key algorithm as the CAs.
+## X.509
-### Import trusted client root CA certificate
-
-A trusted root CA certificate is required to validate the client certificate. To import a root certificate that can be used to validate client certificates, first import the certificate PEM as ConfigMap under the key `client_ca.pem`. Client certificates must be rooted in this CA for MQTT broker to authenticate them.
+A trusted root CA certificate is required to validate the client certificate. Client certificates must be rooted in this CA for MQTT broker to authenticate them. Both EC and RSA keys are supported, but all certificates in the chain must use the same key algorithm. If you're importing your own CA certificates, ensure that the client certificate uses the same key algorithm as the CAs. To import a root certificate that can be used to validate client certificates, import the certificate PEM as *ConfigMap* under the key `client_ca.pem`. For example:
```bash kubectl create configmap client-ca --from-file=client_ca.pem -n azure-iot-operations
kubectl create configmap client-ca --from-file=client_ca.pem -n azure-iot-operat
To check the root CA certificate is properly imported, run `kubectl describe configmap`. The result shows the same base64 encoding of the PEM certificate file.
-```console
-$ kubectl describe configmap client-ca -n azure-iot-operations
+```bash
+kubectl describe configmap client-ca -n azure-iot-operations
+```
+
+```Output
Name: client-ca Namespace: azure-iot-operations
Data
client_ca.pem: - --BEGIN CERTIFICATE--
-MIIBmzCCAUGgAwIBAgIQVAZ2I0ydpCut1imrk+fM3DAKBggqhkjOPQQDAjAsMRAw
-...
-t2xMXcOTeYiv2wnTq0Op0ERKICHhko2PyCGGwnB2Gg==
+<Certificate>
--END CERTIFICATE--
BinaryData
==== ```
-### Certificate attributes
+Once the trusted client root CA certificate and the certificate-to-attribute mapping are imported, enable X.509 client authentication by adding `X509` as one of the authentication methods in a *BrokerAuthentication* resource linked to a TLS-enabled listener. For example:
+
+```yaml
+spec:
+ authenticationMethods:
+ - method: X509
+ x509Settings:
+ trustedClientCaCert: client-ca
+ authorizationAttributes:
+ # ...
+```
+
+### Certificate attributes for authorization
-X509 attributes can be specified in the *BrokerAuthentication* resource. For example, every client that has a certificate issued by the root CA `CN = Contoso Root CA Cert, OU = Engineering, C = US` or an intermediate CA `CN = Contoso Intermediate CA` receives the attributes listed.
+X.509 attributes can be specified in the *BrokerAuthentication* resource, and they're used to authorize clients based on their certificate properties. The attributes are defined in the `authorizationAttributes` field. For example:
```yaml
-apiVersion: mqttbroker.iotoperations.azure.com/v1beta1
-kind: BrokerAuthentication
-metadata:
- name: authn
- namespace: azure-iot-operations
spec: authenticationMethods:
- - method: x509Credentials
- x509Credentials:
+ - method: X509
+ x509Settings:
authorizationAttributes: root: subject = "CN = Contoso Root CA Cert, OU = Engineering, C = US"
In this example, every client that has a certificate issued by the root CA `CN =
The matching for attributes always starts from the leaf client certificate and then goes along the chain. The attribute assignment stops after the first match. In previous example, even if `smart-fan` has the intermediate certificate `CN = Contoso Intermediate CA`, it doesn't get the associated attributes.
-Authorization rules can be applied to clients using X.509 certificates with these attributes.
-
-### Enable X.509 client authentication
-
-Finally, once the trusted client root CA certificate and the certificate-to-attribute mapping are imported, enable X.509 client authentication by adding `x509` as one of the authentication methods as part of a BrokerAuthentication resource linked to a TLS-enabled listener. For example:
-
-```yaml
-spec:
- authenticationMethods:
- - method: x509Credentials
- x509Credentials:
- trustedClientCaCert: client-ca
- attributes:
- secretName: x509-attributes
-```
+Authorization rules can be applied to clients using X.509 certificates with these attributes. To learn more, see [Authorize clients that use X.509 authentication](./howto-configure-authorization.md).
### Connect mosquitto client to MQTT broker with X.509 client certificate
Clients authentication via SAT can optionally have their SATs annotated with att
### Enable Service Account Token (SAT) authentication
-Modify the `authenticationMethods` setting in a BrokerAuthentication resource to specify `ServiceAccountToken` as a valid authentication method. The `audiences` specifies the list of valid audiences for tokens. Choose unique values that identify the MQTT broker service. You must specify at least one audience, and all SATs must match one of the specified audiences.
+Modify the `authenticationMethods` setting in a *BrokerAuthentication* resource to specify `ServiceAccountToken` as a valid authentication method. The `audiences` specifies the list of valid audiences for tokens. Choose unique values that identify the MQTT broker service. You must specify at least one audience, and all SATs must match one of the specified audiences.
```yaml spec: authenticationMethods: - method: ServiceAccountToken
- serviceAccountToken:
+ serviceAccountTokenSettings:
audiences:
- - aio-mq
- - my-audience
+ - "aio-internal"
+ - "my-audience"
``` Apply your changes with `kubectl apply`. It might take a few minutes for the changes to take effect.
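If you want a token to test with outside of a pod, you can request one for an existing service account. This is a sketch that assumes the `mqtt-client` service account exists and that your cluster supports `kubectl create token` (Kubernetes 1.24 or later):

```bash
# Request a short-lived SAT for an existing service account, scoped to one of the configured audiences
kubectl create token mqtt-client --namespace azure-iot-operations --audience my-audience --duration 24h
```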
spec:
expirationSeconds: 86400 ```
-Here, the `serviceAccountName` field in the pod configuration must match the service account associated with the token being used. Also, The `serviceAccountToken.audience` field in the pod configuration must be one of the `audiences` configured in the BrokerAuthentication resource.
+Here, the `serviceAccountName` field in the pod configuration must match the service account associated with the token being used. Also, the `serviceAccountToken.audience` field in the pod configuration must be one of the `audiences` configured in the *BrokerAuthentication* resource.
Once the pod is created, start a shell in the pod:
kubectl exec --stdin --tty mqtt-client -n azure-iot-operations -- sh
Inside the pod's shell, run the following command to publish a message to the broker: ```bash
-mosquitto_pub --host aio-mq-dmqtt-frontend --port 8883 --message "hello" --topic "world" --debug --cafile /var/run/certs/ca.crt -D CONNECT authentication-method 'K8S-SAT' -D CONNECT authentication-data $(cat /var/run/secrets/tokens/mq-sat)
+mosquitto_pub --host aio-broker --port 18883 --message "hello" --topic "world" --debug --cafile /var/run/certs/ca.crt -D CONNECT authentication-method 'K8S-SAT' -D CONNECT authentication-data $(cat /var/run/secrets/tokens/mq-sat)
``` The output should look similar to the following:
The custom authentication server must present a server certificate, and MQTT bro
### Enable custom authentication for a listener
-Modify the `authenticationMethods` setting in a BrokerAuthentication resource to specify `custom` as a valid authentication method. Then, specify the parameters required to communicate with a custom authentication server.
+Modify the `authenticationMethods` setting in a *BrokerAuthentication* resource to specify `Custom` as a valid authentication method. Then, specify the parameters required to communicate with a custom authentication server.
This example shows all possible parameters. The exact parameters required depend on each custom server's requirements. ```yaml spec: authenticationMethods:
- - custom:
+ - method: Custom
+ customSettings:
# Endpoint for custom authentication requests. Required. endpoint: https://auth-server-template
- # Trusted CA certificate for validating custom authentication server certificate.
- # Required unless the server certificate is publicly-rooted.
- caCert: custom-auth-ca
+ # Optional CA certificate for validating the custom authentication server's certificate.
+ caCertConfigMap: custom-auth-ca
# Authentication between MQTT broker with the custom authentication server. # The broker may present X.509 credentials or no credentials to the server. auth:
iot-operations Howto Configure Authorization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/manage-mqtt-broker/howto-configure-authorization.md
- ignite-2023 Previously updated : 07/15/2024 Last updated : 09/09/2024 #CustomerIntent: As an operator, I want to configure authorization so that I have secure MQTT broker communications.
Last updated 07/15/2024
[!INCLUDE [public-preview-note](../includes/public-preview-note.md)]
-Authorization policies determine what actions the clients can perform on the broker, such as connecting, publishing, or subscribing to topics. Configure MQTT broker to use one or multiple authorization policies with the *BrokerAuthorization* resource.
+Authorization policies determine what actions the clients can perform on the broker, such as connecting, publishing, or subscribing to topics. Configure MQTT broker to use one or multiple authorization policies with the *BrokerAuthorization* resource. Each *BrokerAuthorization* resource contains a list of rules that specify the principals and resources for the authorization policies.
-You can set to one *BrokerAuthorization* for each listener. Each *BrokerAuthorization* resource contains a list of rules that specify the principals and resources for the authorization policies.
+## Link BrokerAuthorization to BrokerListener
+
+To link a *BrokerListener* to a *BrokerAuthorization* resource, specify the `authorizationRef` field in the `ports` setting of the *BrokerListener* resource. Similar to BrokerAuthentication, the *BrokerAuthorization* resource can be linked to multiple *BrokerListener* ports. The authorization policies apply to all linked listener ports. However, there's one key difference compared with BrokerAuthentication:
> [!IMPORTANT]
-> To have the *BrokerAuthorization* configuration apply to a listener, at least one *BrokerAuthentication* must also be linked to that listener.
+> To have the *BrokerAuthorization* configuration apply to a listener port, at least one BrokerAuthentication must also be linked to that listener port.
-## Configure BrokerAuthorization for listeners
+To learn more about *BrokerListener*, see [BrokerListener resource](howto-configure-brokerlistener.md).
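For example, here's a minimal sketch of a listener port that references both resources. The listener name, service name, and port are placeholders, and `my-authz-policies` is the example *BrokerAuthorization* resource used in this article:

```bash
# Sketch: a listener port that links both authentication and authorization resources
kubectl apply -f - <<'EOF'
apiVersion: mqttbroker.iotoperations.azure.com/v1beta1
kind: BrokerListener
metadata:
  name: authz-listener           # placeholder name
  namespace: azure-iot-operations
spec:
  brokerRef: broker
  serviceType: LoadBalancer
  serviceName: authz-listener-svc
  ports:
  - port: 1884
    protocol: Mqtt
    authenticationRef: authn              # at least one authentication reference is required
    authorizationRef: my-authz-policies   # the BrokerAuthorization resource defined in this article
EOF
```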
-The specification of a *BrokerAuthorization* resource has the following fields:
+## Authorization rules
-| Field Name | Required | Description |
-| | | |
-| authorizationPolicies | Yes | This field defines the settings for the authorization policies, such as *enableCache* and *rules*.|
-| enableCache | No | Whether to enable caching for the authorization policies. If set to `true`, the broker caches the authorization results for each client and topic combination to improve performance and reduce latency. If set to `false`, the broker evaluates the authorization policies for each client and topic request, to ensure consistency and accuracy. This field is optional and defaults to `false`. |
-| rules | No | A list of rules that specify the principals and resources for the authorization policies. Each rule has these subfields: *principals* and *brokerResources*. |
-| principals | Yes | This subfield defines the identities that represent the clients, such as *usernames*, *clientids*, and *attributes*.|
-| usernames | No | A list of usernames that match the clients. The usernames are case-sensitive and must match the usernames provided by the clients during authentication. |
-| clientids | No | A list of client IDs that match the clients. The client IDs are case-sensitive and must match the client IDs provided by the clients during connection. |
-| attributes | No | A list of key-value pairs that match the attributes of the clients. The attributes are case-sensitive and must match the attributes provided by the clients during authentication. |
-| brokerResources | Yes | This subfield defines the objects that represent the actions or topics, such as *method* and *topics*. |
-| method | Yes | The type of action that the clients can perform on the broker. This subfield is required and can be one of these values: **Connect**: This value indicates that the clients can connect to the broker. **Publish**: This value indicates that the clients can publish messages to topics on the broker. **Subscribe**: This value indicates that the clients can subscribe to topics on the broker. |
-| topics | No | A list of topics or topic patterns that match the topics that the clients can publish or subscribe to. This subfield is required if the method is *Subscribe* or *Publish*. |
+To configure authorization, create a *BrokerAuthorization* resource in your Kubernetes cluster. The following sections provide examples of how to configure authorization for clients that use usernames, attributes, X.509 certificates, and Kubernetes Service Account Tokens (SATs). For a list of the available settings, see the [Broker Authorization](/rest/api/iotoperationsmq/broker-authorization) API reference.
-The following example shows how to create a *BrokerAuthorization* resource that defines the authorization policies for a listener named *my-listener*.
+The following example shows how to create a *BrokerAuthorization* resource using both usernames and attributes:
```yaml apiVersion: mqttbroker.iotoperations.azure.com/v1beta1
metadata:
namespace: azure-iot-operations spec: authorizationPolicies:
- enableCache: true
+ cache: Enabled
rules: - principals: usernames:
- - temperature-sensor
- - humidity-sensor
+ - "temperature-sensor"
+ - "humidity-sensor"
attributes: - city: "seattle" organization: "contoso"
This broker authorization allows clients with usernames `temperature-sensor` or
To create this *BrokerAuthorization* resource, apply the YAML manifest to your Kubernetes cluster.
+### Further limit access based on client ID
+
+Because the `principals` field is a logical OR, you can further restrict access based on client ID by adding the `clientIds` field to the `brokerResources` field. For example, to allow clients with client IDs that start with their building number to connect and publish telemetry to topics scoped to their building, use the following configuration:
+
+```yaml
+apiVersion: mqttbroker.iotoperations.azure.com/v1beta1
+kind: BrokerAuthorization
+metadata:
+ name: "my-authz-policies"
+ namespace: azure-iot-operations
+spec:
+ authorizationPolicies:
+ cache: Enabled
+ rules:
+ - principals:
+ attributes:
+ - building: "building22"
+ - building: "building23"
+ brokerResources:
+ - method: Connect
+ clientIds:
+ - "{principal.attributes.building}*" # client IDs must start with building22
+ - method: Publish
+ topics:
+ - "sensors/{principal.attributes.building}/{principal.clientId}/telemetry"
+```
+
+Here, if the `clientIds` were not set under the `Connect` method, a client with any client ID could connect as long as it had the `building` attribute set to `building22` or `building23`. By adding the `clientIds` field, only clients with client IDs that start with `building22` or `building23` can connect. This ensures not only that the client has the correct attribute but also that the client ID matches the expected pattern.
+ ## Authorize clients that use X.509 authentication Clients that use [X.509 certificates for authentication](./howto-configure-authentication.md) can be authorized to access resources based on X.509 properties present on their certificate or their issuing certificates up the chain. ### Using attributes
-To create rules based on properties from a client's certificate, its root CA, or intermediate CA, define the X.509 attributes in the *BrokerAuthorization* resource. For more information, see [Certificate attributes](howto-configure-authentication.md#certificate-attributes).
+To create rules based on properties from a client's certificate, its root CA, or intermediate CA, define the X.509 attributes in the *BrokerAuthorization* resource. For more information, see [Certificate attributes](howto-configure-authentication.md#certificate-attributes-for-authorization).
### With client certificate subject common name as username
For example, if a client has a certificate with subject `CN = smart-lock`, its u
Authorization attributes for SATs are set as part of the Service Account annotations. For example, to add an authorization attribute named `group` with value `authz-sat`, run the command: ```bash
-kubectl annotate serviceaccount mqtt-client aio-mq-broker-auth/group=authz-sat
+kubectl annotate serviceaccount mqtt-client aio-broker-auth/group=authz-sat
```
-Attribute annotations must begin with `aio-mq-broker-auth/` to distinguish them from other annotations.
+Attribute annotations must begin with `aio-broker-auth/` to distinguish them from other annotations.
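To confirm the annotation took effect, you can read it back. This quick check assumes the `mqtt-client` service account from the earlier command:

```bash
# Confirm the attribute annotation is present on the service account
kubectl get serviceaccount mqtt-client -n azure-iot-operations -o jsonpath='{.metadata.annotations}'
```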
As the application has an authorization attribute called `authz-sat`, there's no need to provide a `clientId` or `username`. The corresponding *BrokerAuthorization* resource uses this attribute as a principal, for example:
spec:
- "odd-numbered-orders" - method: Subscribe topics:
- - "orders"
+ - "orders"
``` To learn more with an example, see [Set up Authorization Policy with Dapr Client](../create-edge-apps/howto-develop-dapr-apps.md).
kubectl edit brokerauthorization my-authz-policies
## Disable authorization
-To disable authorization, set `authorizationEnabled: false` in the BrokerListener resource. When the policy is set to allow all clients, all [authenticated clients](./howto-configure-authentication.md) can access all operations.
-
-```yaml
-apiVersion: mqttbroker.iotoperations.azure.com/v1beta1
-kind: BrokerListener
-metadata:
- name: "my-listener"
- namespace: azure-iot-operations
-spec:
- brokerRef: "my-broker"
- authenticationEnabled: false
- authorizationEnabled: false
- port: 1883
-```
+To disable authorization, omit `authorizationRef` in the `ports` setting of a *BrokerListener* resource.
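For example, here's a hedged sketch that removes an existing reference from the first port of the default listener with a JSON patch, assuming that port currently sets `authorizationRef`:

```bash
# Remove the authorization reference from the first port of the default listener
kubectl patch brokerlistener listener -n azure-iot-operations --type='json' \
  -p='[{"op": "remove", "path": "/spec/ports/0/authorizationRef"}]'
```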
## Unauthorized publish in MQTT 3.1.1
iot-operations Howto Configure Availability Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/manage-mqtt-broker/howto-configure-availability-scale.md
- ignite-2023 Previously updated : 07/11/2024 Last updated : 09/09/2024 #CustomerIntent: As an operator, I want to understand the settings for the MQTT broker so that I can configure it for high availability and scale.
Last updated 07/11/2024
[!INCLUDE [public-preview-note](../includes/public-preview-note.md)]
-The **Broker** resource is the main resource that defines the overall settings for MQTT broker. It also determines the number and type of pods that run the *Broker* configuration, such as the frontends and the backends. You can also use the *Broker* resource to configure its memory profile. Self-healing mechanisms are built in to the broker and it can often automatically recover from component failures. For example, a node fails in a Kubernetes cluster configured for high availability.
+The *Broker* resource is the main resource that defines the overall settings for MQTT broker. It also determines the number and type of pods that run the *Broker* configuration, such as the frontends and the backends. You can also use the *Broker* resource to configure its memory profile. Self-healing mechanisms are built into the broker, and it can often automatically recover from component failures, such as when a node fails in a Kubernetes cluster configured for high availability.
You can horizontally scale the MQTT broker by adding more frontend replicas and backend chains. The frontend replicas are responsible for accepting MQTT connections from clients and forwarding them to the backend chains. The backend chains are responsible for storing and delivering messages to the clients. The frontend pods distribute message traffic across the backend pods, and the backend redundancy factor determines the number of data copies to provide resiliency against node failures in the cluster.
+For a list of the available settings, see the [Broker](/rest/api/iotoperationsmq/broker) API reference.
+ ## Configure scaling settings > [!IMPORTANT] > At this time, the *Broker* resource can only be configured at initial deployment time using the Azure CLI, Portal or GitHub Action. A new deployment is required if *Broker* configuration changes are needed.
-To configure the scaling settings MQTT broker, you need to specify the `mode` and `cardinality` fields in the specification of the *Broker* custom resource. For more information on setting the mode and cardinality settings using Azure CLI, see [az iot ops init](/cli/azure/iot/ops#az-iot-ops-init).
-
-The `mode` field can be one of these values:
--- `auto`: This value indicates that MQTT broker operator automatically deploys the appropriate number of pods based on the cluster hardware. The default value is *auto* and used for most scenarios.-- `distributed`: This value indicates that you can manually specify the number of frontend pods and backend chains in the `cardinality` field. This option gives you more control over the deployment, but requires more configuration.
+To configure the scaling settings of the MQTT broker, specify the `cardinality` field in the specification of the *Broker* custom resource. For more information on setting the cardinality by using the Azure CLI, see [az iot ops init](/cli/azure/iot/ops#az-iot-ops-init).
The `cardinality` field is a nested field that has these subfields:
The `cardinality` field is a nested field that has these subfields:
- `partitions`: The number of partitions to deploy. This subfield is required if the `mode` field is set to `distributed`. - `workers`: The number of workers to deploy per backend, currently it must be set to `1`. This subfield is required if the `mode` field is set to `distributed`.
+If the `cardinality` field is omitted, the MQTT broker operator automatically determines the cardinality and deploys the appropriate number of pods based on the cluster hardware.
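To check what cardinality, if any, is configured on the deployed broker, you can inspect the *Broker* resource with kubectl. This is a minimal sketch, assuming the default Broker named `broker` in the `azure-iot-operations` namespace:

```bash
# Inspect the Broker resource that the deployment created
kubectl get broker broker -n azure-iot-operations -o yaml
```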
+
+To configure the scaling settings MQTT broker, you need to specify the `mode` and `cardinality` fields in the specification of the *Broker* custom resource. For more information on setting the mode and cardinality settings using Azure CLI, see [az iot ops init](/cli/azure/iot/ops#az-iot-ops-init).
+ ## Configure memory profile > [!IMPORTANT]
metadata:
namespace: azure-iot-operations spec: memoryProfile: medium
- mode: distributed
cardinality: backendChain: partitions: 2
kubectl apply -f <path-to-yaml-file>
## Configure MQTT broker advanced settings
-The following table lists the properties of the broker advanced settings that include client configurations, encryption of internal traffic, certificate rotation, and node tolerations.
-
-| Name | Type | Description |
-|-|--|--|
-| clients | ClientConfig | Configurations related to all clients |
-| clients.maxKeepAliveSeconds | `integer` | Upper bound of a client's keep alive, in seconds |
-| clients.maxMessageExpirySeconds | `integer` | Upper bound of message expiry interval, in seconds |
-| clients.maxReceiveMaximum | `integer` | Upper bound of receive maximum that a client can request in the CONNECT packet |
-| clients.maxSessionExpirySeconds | `integer` | Upper bound of session expiry interval, in seconds |
-| clients.subscriberQueueLimit | `SubscriberQueueLimit` | The limit on the number of queued messages for a subscriber |
-| clients.subscriberQueueLimit.length | `integer` | The maximum length of the queue before messages are dropped |
-| clients.subscriberQueueLimit.strategy | `SubscriberMessageDropStrategy` | The strategy for dropping messages from the queue |
-| clients.subscriberQueueLimit.strategy.DropOldest | `string` | The oldest message is dropped |
-| clients.subscriberQueueLimit.strategy.None | `string` | Messages are never dropped |
-| encryptInternalTraffic | Encrypt | The setting to enable or disable encryption of internal traffic |
-| encryptInternalTraffic.Disabled | `string` | Disable internal traffic encryption |
-| encryptInternalTraffic.Enabled | `string` | Enable internal traffic encryption |
-| internalCerts | CertManagerCertOptions | Certificate rotation and private key configuration |
-| internalCerts.duration | `string` | Lifetime of certificate. Must be specified using a *Go* *time.Duration* format (h, m, s). For example, 240h for 240 hours and 45m for 45 minutes. |
-| internalCerts.privateKey | `CertManagerPrivateKey` | Configuration of certificate private key |
-| internalCerts.renewBefore | `string` | Duration before renewing a certificate. Must be specified using a *Go* *time.Duration* format (h, m, s). For example, 240h for 240 hours and 45m for 45 minutes. |
-| internalCerts.privateKey.algorithm | PrivateKeyAlgorithm | Algorithm for private key |
-| internalCerts.privateKey.rotationPolicy | PrivateKeyRotationPolicy | Cert-manager private key rotation policy |
-| internalCerts.privateKey.algorithm.Ec256 | `string`| Algorithm - EC256 |
-| internalCerts.privateKey.algorithm.Ec384 | `string`| Algorithm - EC384 |
-| internalCerts.privateKey.algorithm.Ec521 | `string`| Algorithm - EC521 |
-| internalCerts.privateKey.algorithm.Ed25519 | `string`| Algorithm - Ed25519|
-| internalCerts.privateKey.algorithm.Rsa2048 | `string`| Algorithm - RSA2048|
-| internalCerts.privateKey.algorithm.Rsa4096 | `string`| Algorithm - RSA4096|
-| internalCerts.privateKey.algorithm.Rsa8192 | `string`| Algorithm - RSA8192|
-| internalCerts.privateKey.rotationPolicy.Always | `string`| Always rotate key |
-| internalCerts.privateKey.rotationPolicy.Never | `string`| Never rotate key |
-| tolerations | NodeTolerations | The details of tolerations that are applied to all *Broker* pods |
-| tolerations.effect | `string` | Toleration effect |
-| tolerations.key | `string` | Toleration key |
-| tolerations.operator | `TolerationOperator` | Toleration operator. For example, "Exists" or "Equal". |
-| tolerations.value | `string` | Toleration value |
-| tolerations.operator.Equal | `string` | Equal operator |
-| tolerations.operator.Exists | `string` | Exists operator |
+The broker advanced settings include client configurations, encryption of internal traffic, and certificate rotation. For more information on the advanced settings, see the [Broker](/rest/api/iotoperationsmq/broker) API reference.
Here's an example of a *Broker* with advanced settings:
spec:
privateKey: algorithm: Rsa2048 rotationPolicy: Always
- tolerations:
- effect: string
- key: string
- operator: Equal
- value: string
``` ## Configure MQTT broker diagnostic settings
Diagnostic settings allow you to enable metrics and tracing for MQTT broker.
To override default diagnostic settings for MQTT broker, update the `spec.diagnostics` section in the *Broker* resource. Adjust the log level to control the amount and detail of information that is logged. The log level can be set for different components of MQTT broker. The default log level is `info`.
-You can configure diagnostics using the *Broker* custom resource definition (CRD). The following table shows the properties of the broker diagnostic settings and all default values.
-
-| Name | Format | Default | Description |
-| | - | - | |
-| logs.exportIntervalSeconds | integer | 30 | How often to export the logs to the open telemetry collector |
-| logs.exportLogLevel | string | error | The level of logs to export |
-| logs.level | string | info | The log level. For example, `debug`, `info`, `warn`, `error`, `trace` |
-| logs.openTelemetryCollectorAddress | string | | The open telemetry collector endpoint where to export |
-| metrics.exportIntervalSeconds | integer | 30 | How often to export the metrics to the open telemetry collector |
-| metrics.mode | MetricsEnabled | Enabled | The toggle to enable/disable metrics. |
-| metrics.openTelemetryCollectorAddress| string | | The open telemetry collector endpoint where to export |
-| metrics.prometheusPort | integer | 9600 | The prometheus port to expose the metrics |
-| metrics.stalenessTimeSeconds | integer | 600 | The time used to determine if a metric is stale and drop from the metrics cache |
-| metrics.updateIntervalSeconds | integer | 30 | How often to refresh the metrics |
-| selfcheck.intervalSeconds | integer | 30 | The self check interval |
-| selfcheck.mode | SelfCheckMode | Enabled | The toggle to enable/disable self check |
-| selfcheck.timeoutSeconds | integer | 15 | The timeout for self check |
-| traces.cacheSizeMegabytes | integer | 16 | The cache size in megabytes |
-| traces.exportIntervalSeconds | integer | 30 | How often to export the metrics to the open telemetry collector |
-| traces.mode | TracesMode | Enabled | The toggle to enable/disable traces |
-| traces.openTelemetryCollectorAddress | string | | The open telemetry collector endpoint where to export |
-| traces.selfTracing | SelfTracing | | The self tracing properties |
-| traces.spanChannelCapacity | integer | 1000 | The span channel capacity |
+You can configure diagnostics using the *Broker* custom resource definition (CRD). For more information on the diagnostics settings, see the [Broker](/rest/api/iotoperationsmq/broker) API reference.
Here's an example of a *Broker* custom resource with metrics and tracing enabled and self-check disabled:
metadata:
name: broker namespace: azure-iot-operations spec:
- mode: auto
diagnostics: logs:
- exportIntervalSeconds: 220
- exportLogLevel: nym
- level: debug
- openTelemetryCollectorAddress: acfqqatmodusdbzgomgcrtulvjy
+ level: "debug"
+ opentelemetryExportConfig:
+ otlpGrpcEndpoint: "endpoint"
metrics:
- stalenessTimeSeconds: 463
- mode: Enabled
- exportIntervalSeconds: 246
- openTelemetryCollectorAddress: vyasdzsemxfckcorfbfx
- prometheusPort: 60607
- updateIntervalSeconds: 15
+ opentelemetryExportConfig:
+ otlpGrpcEndpoint: "endpoint"
+ intervalSeconds: 60
selfCheck: mode: Enabled
- intervalSeconds: 106
- timeoutSeconds: 70
+ intervalSeconds: 120
+ timeoutSeconds: 60
traces:
- cacheSizeMegabytes: 97
+ cacheSizeMegabytes: 32
mode: Enabled
- exportIntervalSeconds: 114
- openTelemetryCollectorAddress: oyujxiemzlqlcsdamytj
+ opentelemetryExportConfig:
+ otlpGrpcEndpoint: "endpoint"
selfTracing: mode: Enabled
- intervalSeconds: 179
- spanChannelCapacity: 47152
+ intervalSeconds: 120
+ spanChannelCapacity: 1000
``` ## Configure encryption of internal traffic
The value of the *ephemeralVolumeClaimSpec* property is used as the ephemeral.*v
For example, to use an ephemeral volume with a capacity of 1 gigabyte, specify the following parameters in your Broker CRD: ```yaml
-diskBackedMessageBufferSettings:
+diskBackedMessageBuffer:
maxSize: "1G"- ephemeralVolumeClaimSpec: storageClassName: "foo" accessModes:
The value of the *persistentVolumeClaimSpec* property is used as the *volumeClai
For example, to use a *persistent* volume with a capacity of 1 gigabyte, specify the following parameters in your Broker CRD: ```yaml
-diskBackedMessageBufferSettings:
+diskBackedMessageBuffer:
maxSize: "1G"- persistentVolumeClaimSpec: storageClassName: "foo" accessModes:
Only use *emptyDir* volume when using a cluster with filesystem quotas. For more
For example, to use an emptyDir volume with a capacity of 1 gigabyte, specify the following parameters in your Broker CRD: ```yaml
- diskBackedMessageBufferSettings:
+ diskBackedMessageBuffer:
maxSize: "1G" ```
iot-operations Howto Configure Brokerlistener https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/manage-mqtt-broker/howto-configure-brokerlistener.md
- ignite-2023 Previously updated : 08/03/2024 Last updated : 08/29/2024 #CustomerIntent: As an operator, I want understand options to secure MQTT communications for my IoT Operations solution.
Last updated 08/03/2024
[!INCLUDE [public-preview-note](../includes/public-preview-note.md)]
-To customize the network access and security use the **BrokerListener** resource. A listener corresponds to a network endpoint that exposes the broker to the network. You can have one or more BrokerListener resources for each *Broker* resource, and thus multiple ports with different access control each.
+To customize network access and security, use the *BrokerListener* resource. A listener corresponds to a network endpoint that exposes the broker to the network. You can have one or more *BrokerListener* resources for each *Broker* resource, and thus multiple ports, each with different access control.
-Each listener can have its own authentication and authorization rules that define who can connect to the listener and what actions they can perform on the broker. You can use *BrokerAuthentication* and *BrokerAuthorization* resources to specify the access control policies for each listener. This flexibility allows you to fine-tune the permissions and roles of your MQTT clients, based on their needs and use cases.
+Each listener port can have its own authentication and authorization rules that define who can connect to the listener and what actions they can perform on the broker. You can use *BrokerAuthentication* and *BrokerAuthorization* resources to specify the access control policies for each listener. This flexibility allows you to fine-tune the permissions and roles of your MQTT clients, based on their needs and use cases.
+
+> [!TIP]
+> You can only access the default MQTT broker deployment by using the cluster IP, TLS, and a service account token. Clients connecting from outside the cluster need extra configuration before they can connect.
Listeners have the following characteristics: - You can have up to three listeners. One listener per service type of `loadBalancer`, `clusterIp`, or `nodePort`. The default *BrokerListener* named *listener* is service type `clusterIp`. - Each listener supports multiple ports-- AuthN and authZ references are per port
+- BrokerAuthentication and BrokerAuthorization references are per port
- TLS configuration is per port - Service names must be unique - Ports cannot conflict over different listeners
-The *BrokerListener* resource has these fields:
-
-| Field Name | Required | Description |
-||-|-|
-| brokerRef | Yes | The name of the broker resource that this listener belongs to. This field is required and must match an existing *Broker* resource in the same namespace. |
-| ports[] | Yes | The listener can listen on multiple ports. List of ports that the listener accepts client connections. |
-| ports.authenticationRef | No | Reference to client authentication settings. Omit to disable authentication. To learn more about authentication, see [Configure MQTT broker authentication](howto-configure-authentication.md). |
-| ports.authorizationRef | No | Reference to client authorization settings. Omit to disable authorization. |
-| ports.nodePort | No | Kubernetes node port. Only relevant when this port is associated with a NodePort listener. |
-| ports.port | Yes | TCP port for accepting client connections. |
-| ports.protocol | No | Protocol to use for client connections. Values: `Mqtt`, `Websockets`. Default: `Mqtt` |
-| ports.tls | No | TLS server certificate settings for this port. Omit to disable TLS. |
-| ports.tls.automatic | No | Automatic TLS server certificate management with cert-manager. [Configure TLS with automatic certificate management](howto-configure-tls-auto.md)|
-| ports.tls.automatic.duration | No | Lifetime of certificate. Must be specified using a *Go* time format (h\|m\|s). For example, 240h for 240 hours and 45m for 45 minutes. |
-| ports.tls.automatic.issuerRef | No | cert-manager issuer reference. |
-| ports.tls.automatic.issuerRef.group | No | cert-manager issuer group. |
-| ports.tls.automatic.issuerRef.kind | No | cert-manager issuer kind. Values: `Issuer`, `ClusterIssuer`. |
-| ports.tls.automatic.issuerRef.name | No | cert-manager issuer name. |
-| ports.tls.automatic.privateKey | No | Type of certificate private key. |
-| ports.tls.automatic.privateKey.algorithm | No | Algorithm for the private key. Values: `Ec256`, `Ec384`, `ec521`, `Ed25519`, `Rsa2048`, `Rsa4096`, `Rsa8192`. |
-| ports.tls.automatic.privateKey.rotationPolicy | No | Size of the private key. Values: `Always`, `Never`. |
-| ports.tls.automatic.renewBefore | No | When to begin certificate renewal. Must be specified using a *Go* time format (h\|m\|s). For example, 240h for 240 hours and 45m for 45 minutes. |
-| ports.tls.automatic.san | No | Additional Subject Alternative Names (SANs) to include in the certificate. |
-| ports.tls.automatic.san.dns | No | DNS SANs. |
-| ports.tls.automatic.san.ip | No | IP address SANs. |
-| ports.tls.automatic.secretName | No | Secret for storing server certificate. Any existing data will be overwritten. This is a reference to the secret through an identifying name, not the secret itself. |
-| ports.tls.automatic.secretNamespace | No | Certificate Kubernetes namespace. Omit to use current namespace. |
-| ports.tls.manual | No | Manual TLS server certificate management through a defined secret. For more information, see [Configure TLS with manual certificate management](howto-configure-tls-manual.md).|
-| ports.tls.manual.secretName | Yes | Kubernetes secret containing an X.509 client certificate. This is a reference to the secret through an identifying name, not the secret itself. |
-| ports.tls.manual.secretNamespace | No | Certificate K8S namespace. Omit to use current namespace. |
-| serviceName | No | The name of Kubernetes service created for this listener. Kubernetes creates DNS records for this `serviceName` that clients should use to connect to MQTT broker. This subfield is optional and defaults to `aio-mq-dmqtt-frontend`. |
-| serviceType | No | The type of the Kubernetes service created for this listener. This subfield is optional and defaults to `clusterIp`. Must be either `loadBalancer`, `clusterIp`, or `nodePort`. |
+For a list of the available settings, see the [Broker Listener](/rest/api/iotoperationsmq/broker-listener) API reference.
## Default BrokerListener
-When you deploy Azure IoT Operations Preview, the deployment also creates a *BrokerListener* resource named `listener` in the `azure-iot-operations` namespace. This listener is linked to the default Broker resource named `broker` that's also created during deployment. The default listener exposes the broker on port 8883 with TLS and SAT authentication enabled. The TLS certificate is [automatically managed](howto-configure-tls-auto.md) by cert-manager. Authorization is disabled by default.
+When you deploy Azure IoT Operations Preview, the deployment also creates a *BrokerListener* resource named `listener` in the `azure-iot-operations` namespace. This listener is linked to the default Broker resource named `broker` that's also created during deployment. The default listener exposes the broker on port 18883 with TLS and SAT authentication enabled. The TLS certificate is [automatically managed](howto-configure-tls-auto.md) by cert-manager. Authorization is disabled by default.
To inspect the listener, run:
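A likely form of the inspection command, shown here as a hedged sketch (it assumes the default resource name `listener` and the `azure-iot-operations` namespace described above):

```bash
# Print the default BrokerListener resource as YAML.
kubectl get brokerlistener listener -n azure-iot-operations -o yaml
```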
metadata:
namespace: azure-iot-operations spec: brokerRef: broker
- serviceName: aio-mq-dmqtt-frontend
+ serviceName: aio-broker
serviceType: ClusterIp ports:
- - authenticationRef: authn
- port: 8883
+ - port: 18883
+ authenticationRef: authn
protocol: Mqtt tls:
- automatic:
+ certManagerCertificateSpec:
issuerRef:
- apiGroup: cert-manager.io
+ group: cert-manager.io
kind: Issuer name: mq-dmqtt-frontend mode: Automatic
To learn more about the default BrokerAuthentication resource linked to this lis
### Update the default BrokerListener
-The default BrokerListener uses the service type *ClusterIp*. You can have only one listener per service type. If you want to add more ports to service type *ClusterIp*, you can update the default listener to add more ports. For example, you could add a new port 1883 with no TLS and authentication off with the following kubectl patch command:
+The default *BrokerListener* uses the service type *ClusterIp*. You can have only one listener per service type. If you want to add more ports to service type *ClusterIp*, you can update the default listener to add more ports. For example, you could add a new port 1883 with no TLS and authentication off with the following kubectl patch command:
```bash
kubectl patch brokerlistener listener -n azure-iot-operations --type='json' -p='[{"op": "add", "path": "/spec/ports/-", "value": {"port": 1883, "protocol": "Mqtt"}}]'
```
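After the patch above is applied, a client inside the cluster could reach the new plain-text port. A minimal sketch, assuming the default in-cluster service name `aio-broker` and a pod that has the mosquitto clients installed:

```bash
# Publish over the new port 1883 (no TLS, no authentication).
mosquitto_pub --host aio-broker --port 1883 --topic "world" --message "hello" --debug
```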
metadata:
namespace: azure-iot-operations spec: brokerRef: broker
- serviceType: loadBalancer
+ serviceType: LoadBalancer
serviceName: my-new-listener ports: - port: 1883
spec:
authenticationRef: authn protocol: Mqtt tls:
- automatic:
+ mode: Automatic
+ certManagerCertificateSpec:
issuerRef: name: e2e-cert-issuer kind: Issuer group: cert-manager.io
- mode: Automatic
``` ## Related content
iot-operations Howto Configure Tls Auto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/manage-mqtt-broker/howto-configure-tls-auto.md
- ignite-2023 Previously updated : 08/13/2024 Last updated : 08/22/2024 #CustomerIntent: As an operator, I want to configure MQTT broker to use TLS so that I have secure communication between the MQTT broker and client.
With automatic certificate management, you use cert-manager to manage the TLS se
The cert-manager Issuer resource defines how certificates are automatically issued. Cert-manager [supports several Issuers types natively](https://cert-manager.io/docs/configuration/). It also supports an [external](https://cert-manager.io/docs/configuration/external/) issuer type for extending functionality beyond the natively supported issuers. MQTT broker can be used with any type of cert-manager issuer. > [!IMPORTANT]
-> During initial deployment, Azure IoT Operations is installed with a default Issuer for TLS server certificates. You can use this issuer for development and testing. For more information, see [Default root CA and issuer with Azure IoT Operations](#default-root-ca-and-issuer-with-azure-iot-operations-preview). The steps below are only required if you want to use a different issuer.
+> During initial deployment, Azure IoT Operations is installed with a default Issuer for TLS server certificates. You can use this issuer for development and testing. For more information, see [Default root CA and issuer with Azure IoT Operations](#default-root-ca-and-issuer). The steps below are only required if you want to use a different issuer.
The approach to create the issuer is different depending on your scenario. The following sections list examples to help you get started.
metadata:
spec: brokerRef: broker serviceType: loadBalancer
- serviceName: my-new-tls-listener # Avoid conflicts with default service name 'aio-mq-dmqtt-frontend'
- port: 8884 # Avoid conflicts with default port 8883
- tls:
- automatic:
- issuerRef:
- name: my-issuer
- kind: Issuer
+ serviceName: my-new-tls-listener # Avoid conflicts with default service name 'aio-broker'
+ ports:
+ - port: 8884 # Avoid conflicts with default port 18883
+ tls:
+ mode: Automatic
+ certManagerCertificateSpec:
+ issuerRef:
+ name: my-issuer
+ kind: Issuer
``` Once the BrokerListener resource is configured, MQTT broker automatically creates a new service with the specified port and TLS enabled.
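To confirm that the service exists, a quick check could look like the following sketch, assuming the service name `my-new-tls-listener` from the listener above and the `azure-iot-operations` namespace:

```bash
# Verify that a service was created for the new TLS listener on port 8884.
kubectl get service my-new-tls-listener -n azure-iot-operations
```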
san:
dns: - iotmq.example.com # To connect to the broker from a different namespace, add the following DNS name:
- - aio-mq-dmqtt-frontend.azure-iot-operations.svc.cluster.local
+ - aio-broker.azure-iot-operations.svc.cluster.local
ip: - 192.168.1.1 ```
Replace `$HOST` with the appropriate host:
Remember to specify authentication methods if needed.
-## Default root CA and issuer with Azure IoT Operations Preview
-
-To help you get started, Azure IoT Operations is deployed with a default "quickstart" root CA and issuer for TLS server certificates. You can use this issuer for development and testing.
-
-* The CA certificate is self-signed and not trusted by any clients outside of Azure IoT Operations. The subject of the CA certificate is `CN = Azure IoT Operations Quickstart Root CA - Not for Production` and it expires in 30 days from the time of installation.
-
-* The root CA certificate is stored in a Kubernetes secret called `aio-ca-key-pair-test-only`.
-
-* The public portion of the root CA certificate is stored in a ConfigMap called `aio-ca-trust-bundle-test-only`. You can retrieve the CA certificate from the ConfigMap and inspect it with kubectl and openssl.
-
- ```bash
- kubectl get configmap aio-ca-trust-bundle-test-only -n azure-iot-operations -o json | jq -r '.data["ca.crt"]' | openssl x509 -text -noout
- ```
-
- ```Output
- Certificate:
- Data:
- Version: 3 (0x2)
- Serial Number:
- <SERIAL-NUMBER>
- Signature Algorithm: ecdsa-with-SHA256
- Issuer: CN = Azure IoT Operations Quickstart Root CA - Not for Production
- Validity
- Not Before: Nov 2 00:34:31 2023 GMT
- Not After : Dec 2 00:34:31 2023 GMT
- Subject: CN = Azure IoT Operations Quickstart Root CA - Not for Production
- Subject Public Key Info:
- Public Key Algorithm: id-ecPublicKey
- Public-Key: (256 bit)
- pub:
- <PUBLIC-KEY>
- ASN1 OID: prime256v1
- NIST CURVE: P-256
- X509v3 extensions:
- X509v3 Basic Constraints: critical
- CA:TRUE
- X509v3 Key Usage:
- Certificate Sign
- X509v3 Subject Key Identifier:
- <SUBJECT-KEY-IDENTIFIER>
- Signature Algorithm: ecdsa-with-SHA256
- [SIGNATURE]
- ```
+## Default root CA and issuer
-* By default, there's already a CA issuer configured in the `azure-iot-operations` namespace called `aio-ca-issuer`. It's used as the common CA issuer for all TLS server certificates for IoT Operations. MQTT broker uses an issuer created from the same CA certificate to issue TLS server certificates for the default TLS listener on port 8883. You can inspect the issuer with the following command:
-
- ```bash
- kubectl get issuer aio-ca-issuer -n azure-iot-operations -o yaml
- ```
-
- ```Output
- apiVersion: cert-manager.io/v1
- kind: Issuer
- metadata:
- annotations:
- meta.helm.sh/release-name: azure-iot-operations
- meta.helm.sh/release-namespace: azure-iot-operations
- creationTimestamp: "2023-11-01T23:10:24Z"
- generation: 1
- labels:
- app.kubernetes.io/managed-by: Helm
- name: aio-ca-issuer
- namespace: azure-iot-operations
- resourceVersion: "2036"
- uid: <UID>
- spec:
- ca:
- secretName: aio-ca-key-pair-test-only
- status:
- conditions:
- - lastTransitionTime: "2023-11-01T23:10:59Z"
- message: Signing CA verified
- observedGeneration: 1
- reason: KeyPairVerified
- status: "True"
- type: Ready
- ```
+To help you get started, Azure IoT Operations is deployed with a default "quickstart" root CA and issuer for TLS server certificates. You can use this issuer for development and testing. For more information, see [Default root CA and issuer for TLS server certificates](./concept-default-root-ca.md).
For production, you must configure a CA issuer with a certificate from a trusted CA, as described in the previous sections.
iot-operations Howto Configure Tls Manual https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/manage-mqtt-broker/howto-configure-tls-manual.md
spec:
serviceType: loadBalancer # Optional, defaults to clusterIP serviceName: mqtts-endpoint # Match the SAN in the server certificate ports:
- port: 8885 # Avoid port conflict with default listener at 8883
+ - port: 8885 # Avoid port conflict with default listener at 18883
tls:
- manual:
mode: Manual
- secretName: server-cert-secret
+ manual:
+ secretRef: server-cert-secret
``` Once the BrokerListener resource is created, the operator automatically creates a Kubernetes service and deploys the listener. You can check the status of the service by running `kubectl get svc`.
iot-operations Howto Test Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/manage-mqtt-broker/howto-test-connection.md
This article shows different ways to test connectivity to MQTT broker with MQTT
By default, MQTT broker: -- Deploys a [TLS-enabled listener](howto-configure-brokerlistener.md) on port 8883 with *ClusterIp* as the service type. *ClusterIp* means that the broker is accessible only from within the Kubernetes cluster. To access the broker from outside the cluster, you must configure a service of type *LoadBalancer* or *NodePort*.
+- Deploys a [TLS-enabled listener](howto-configure-brokerlistener.md) on port 18883 with *ClusterIp* as the service type. *ClusterIp* means that the broker is accessible only from within the Kubernetes cluster. To access the broker from outside the cluster, you must configure a service of type *LoadBalancer* or *NodePort*.
- Accepts [Kubernetes service accounts for authentication](howto-configure-authentication.md) for connections from within the cluster. To connect from outside the cluster, you must configure a different authentication method.
The first option is to connect from within the cluster. This option uses the def
metadata: name: mqtt-client # Namespace must match MQTT broker BrokerListener's namespace
- # Otherwise use the long hostname: aio-mq-dmqtt-frontend.azure-iot-operations.svc.cluster.local
+ # Otherwise use the long hostname: aio-broker.azure-iot-operations.svc.cluster.local
namespace: azure-iot-operations spec: # Use the "mqtt-client" service account which comes with default deployment
The first option is to connect from within the cluster. This option uses the def
sources: - serviceAccountToken: path: mq-sat
- audience: aio-mq # Must match audience in BrokerAuthentication
+ audience: aio-internal # Must match audience in BrokerAuthentication
expirationSeconds: 86400 - name: trust-bundle configMap:
The first option is to connect from within the cluster. This option uses the def
1. Inside the pod's shell, run the following command to publish a message to the broker: ```console
- mosquitto_pub --host aio-mq-dmqtt-frontend --port 8883 --message "hello" --topic "world" --debug --cafile /var/run/certs/ca.crt -D CONNECT authentication-method 'K8S-SAT' -D CONNECT authentication-data $(cat /var/run/secrets/tokens/mq-sat)
+ mosquitto_pub --host aio-broker --port 18883 --message "hello" --topic "world" --debug --cafile /var/run/certs/ca.crt -D CONNECT authentication-method 'K8S-SAT' -D CONNECT authentication-data $(cat /var/run/secrets/tokens/mq-sat)
``` The output should look similar to the following:
The first option is to connect from within the cluster. This option uses the def
1. To subscribe to the topic, run the following command: ```console
- mosquitto_sub --host aio-mq-dmqtt-frontend --port 8883 --topic "world" --debug --cafile /var/run/certs/ca.crt -D CONNECT authentication-method 'K8S-SAT' -D CONNECT authentication-data $(cat /var/run/secrets/tokens/mq-sat)
+ mosquitto_sub --host aio-broker --port 18883 --topic "world" --debug --cafile /var/run/certs/ca.crt -D CONNECT authentication-method 'K8S-SAT' -D CONNECT authentication-data $(cat /var/run/secrets/tokens/mq-sat)
``` The output should look similar to the following:
kubectl get configmap aio-ca-trust-bundle-test-only -n azure-iot-operations -o j
Use the downloaded `ca.crt` file to configure your client to trust the broker's TLS certificate chain.
-If you are connecting to the broker from a different namespace, you must use the full service hostname `aio-mq-dmqtt-frontend.azure-iot-operations.svc.cluster.local`. You must also add the DNS name to the server certificate by including a subject alternative name (SAN) DNS field to the *BrokerListener* resource. For more information, see [Configure server certificate parameters](howto-configure-tls-auto.md#optional-configure-server-certificate-parameters).
+If you are connecting to the broker from a different namespace, you must use the full service hostname `aio-broker.azure-iot-operations.svc.cluster.local`. You must also add the DNS name to the server certificate by including a subject alternative name (SAN) DNS field to the *BrokerListener* resource. For more information, see [Configure server certificate parameters](howto-configure-tls-auto.md#optional-configure-server-certificate-parameters).
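One way to save the `ca.crt` file mentioned above to your local machine is to read it out of the trust bundle ConfigMap; this is a sketch that assumes the quickstart ConfigMap name shown earlier:

```bash
# Extract the public root CA certificate from the trust bundle ConfigMap into ca.crt.
kubectl get configmap aio-ca-trust-bundle-test-only -n azure-iot-operations -o jsonpath='{.data.ca\.crt}' > ca.crt
```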
### Authenticate with the broker
kubectl patch brokerlistener listener -n azure-iot-operations --type='json' -p='
Some Kubernetes distributions can [expose](https://k3d.io/v5.1.0/usage/exposing_services/) MQTT broker to a port on the host system (localhost). You should use this approach because it makes it easier for clients on the same host to access MQTT broker.
-For example, to create a K3d cluster with mapping the MQTT broker's default MQTT port 8883 to localhost:8883:
+For example, to create a K3d cluster with mapping the MQTT broker's default MQTT port 18883 to localhost:18883:
```bash
-k3d cluster create --port '8883:8883@loadbalancer'
+k3d cluster create --port '18883:18883@loadbalancer'
``` But for this method to work with MQTT broker, you must configure it to use a load balancer instead of cluster IP. There are two ways to do this: create a load balancer or patch the existing default BrokerListener resource service type to load balancer.
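A sketch of the second option, patching the default listener's service type, assuming the default listener name `listener` and the value casing used by the schema shown earlier:

```bash
# Switch the default BrokerListener service from ClusterIp to LoadBalancer.
kubectl patch brokerlistener listener -n azure-iot-operations --type='json' -p='[{"op": "replace", "path": "/spec/serviceType", "value": "LoadBalancer"}]'
```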
But for this method to work with MQTT broker, you must configure it to use a loa
type: LoadBalancer ports: - name: mqtt1
- port: 8883
- targetPort: 8883
+ port: 18883
+ targetPort: 18883
selector: app: broker app.kubernetes.io/instance: broker
But for this method to work with MQTT broker, you must configure it to use a loa
1. Wait for the service to be updated. ```console
- kubectl get service aio-mq-dmqtt-frontend --namespace azure-iot-operations
+ kubectl get service aio-broker --namespace azure-iot-operations
``` Output should look similar to the following: ```Output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
- aio-mq-dmqtt-frontend LoadBalancer 10.43.107.11 XXX.XX.X.X 8883:30366/TCP 14h
+ aio-broker LoadBalancer 10.43.107.11 XXX.XX.X.X 18883:30366/TCP 14h
``` 1. You can use the external IP address to connect to MQTT broker over the internet. Make sure to use the external IP address instead of `localhost`.
But for this method to work with MQTT broker, you must configure it to use a loa
> mosquitto_pub --qos 1 --debug -h localhost --message hello --topic world --username client1 --pw password --cafile ca.crt --insecure > ``` >
-> In this example, the mosquitto client uses username and password to authenticate with the broker along with the root CA cert to verify the broker's TLS certificate chain. Here, the `--insecure` flag is required because the default TLS certificate issued to the load balancer is only valid for the load balancer's default service name (aio-mq-dmqtt-frontend) and assigned IPs, not localhost.
+> In this example, the mosquitto client uses username and password to authenticate with the broker along with the root CA cert to verify the broker's TLS certificate chain. Here, the `--insecure` flag is required because the default TLS certificate issued to the load balancer is only valid for the load balancer's default service name (aio-broker) and assigned IPs, not localhost.
> > Never expose MQTT broker port to the internet without authentication and TLS. Doing so is dangerous and can lead to unauthorized access to your IoT devices and bring unsolicited traffic to your cluster. >
But for this method to work with MQTT broker, you must configure it to use a loa
With [minikube](https://minikube.sigs.k8s.io/docs/), [kind](https://kind.sigs.k8s.io/), and other cluster emulation systems, an external IP might not be automatically assigned. For example, it might show as *Pending* state.
-1. To access the broker, forward the broker listening port 8883 to the host.
+1. To access the broker, forward the broker listening port 18883 to the host.
```bash
- kubectl port-forward --namespace azure-iot-operations service/aio-mq-dmqtt-frontend 8883:mqtts-8883
+ kubectl port-forward --namespace azure-iot-operations service/aio-broker 18883:mqtts-18883
```
-1. Use 127.0.0.1 to connect to the broker at port 8883 with the same authentication and TLS configuration as the example without port forwarding.
+1. Use 127.0.0.1 to connect to the broker at port 18883 with the same authentication and TLS configuration as the example without port forwarding.
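For example, a publish through the forwarded port might look like the following sketch. It assumes username/password authentication is configured, as in the earlier localhost example, and that `ca.crt` was saved locally; `--insecure` is needed because the server certificate doesn't cover 127.0.0.1.

```bash
mosquitto_pub -h 127.0.0.1 -p 18883 -t "world" -m "hello" \
  --username client1 --pw password --cafile ca.crt --insecure --debug
```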
Port forwarding is also useful for testing MQTT broker locally on your development machine without having to modify the broker's configuration. For more information about minikube, see [Use Port Forwarding to Access Applications in a Cluster](https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/) #### Port forwarding on AKS Edge Essentials For Azure Kubernetes Services Edge Essentials, you need to perform a few additional steps. For more information about port forwarding, see [Expose Kubernetes services to external devices](/azure/aks/hybrid/aks-edge-howto-expose-service).
-1. Assume that the broker's service is exposed to an external IP using a load balancer. For example if you patched the default load balancer `aio-mq-dmqtt-frontend`, get the external IP address for the service.
+1. Assume that the broker's service is exposed to an external IP using a load balancer. For example, if you patched the default load balancer `aio-broker`, get the external IP address for the service.
```bash
- kubectl get service aio-mq-dmqtt-frontend --namespace azure-iot-operations
+ kubectl get service aio-broker --namespace azure-iot-operations
``` Output should look similar to the following: ```Output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
- aio-mq-dmqtt-frontend LoadBalancer 10.43.107.11 192.168.0.4 8883:30366/TCP 14h
+ aio-broker LoadBalancer 10.43.107.11 192.168.0.4 18883:30366/TCP 14h
```
-1. Set up port forwarding to the `aio-mq-dmqtt-frontend` service on the external IP address `192.168.0.4` and port `8883`:
+1. Set up port forwarding to the `aio-broker` service on the external IP address `192.168.0.4` and port `18883`:
```bash
- netsh interface portproxy add v4tov4 listenport=8883 connectport=8883 connectaddress=192.168.0.4
+ netsh interface portproxy add v4tov4 listenport=18883 connectport=18883 connectaddress=192.168.0.4
``` 1. Open the port on the firewall to allow traffic to the broker's service: ```bash
- New-NetFirewallRule -DisplayName "AIO MQTT Broker" -Direction Inbound -Protocol TCP -LocalPort 8883 -Action Allow
+ New-NetFirewallRule -DisplayName "AIO MQTT Broker" -Direction Inbound -Protocol TCP -LocalPort 18883 -Action Allow
``` 1. Use the host's public IP address to connect to the MQTT broker.
iot-operations Overview Iot Mq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/manage-mqtt-broker/overview-iot-mq.md
Title: Publish and subscribe MQTT messages using MQTT broker
-description: Use MQTT broker to publish and subscribe to messages. Destinations include other MQTT brokers, Azure IoT Data Processor, and Azure cloud services.
+description: Use MQTT broker to publish and subscribe to messages. Destinations include other MQTT brokers, dataflows, and Azure cloud services.
MQTT broker focuses on the unique edge-native, data-plane value it can provide t
MQTT broker builds on top of battle-tested Azure and Kubernetes-native security and identity concepts making it both highly secure and usable. It supports multiple authentication mechanisms for flexibility along with granular access control mechanisms all the way down to individual MQTT topic level.
+> [!TIP]
+> You can only access the default MQTT broker deployment by using the cluster IP, TLS, and a service account token. Clients connecting from outside the cluster need extra configuration before they can connect.
## Azure Arc integration
iot-operations Overview Iot Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/overview-iot-operations.md
There are two core elements in the Azure IoT Operations Preview architecture:
* **Azure IoT Operations Preview**. The set of data services that run on Azure Arc-enabled edge Kubernetes clusters. It includes the following * The _MQTT broker_ is an edge-native MQTT broker that powers event-driven architectures. * The _connector for OPC UA_ handles the complexities of OPC UA communication with OPC UA servers and other leaf devices.
-* The _operations experience_ is a web UI that provides a unified experience for operational technologists to manage assets and data processor pipelines in an Azure IoT Operations deployment. An IT administrator can use [Azure Arc site manager (preview)](/azure/azure-arc/site-manager/overview) to group Azure IoT Operations instances by physical location and make it easier for OT users to find instances.
+* The _operations experience_ is a web UI that provides a unified experience for operational technologists to manage assets and dataflows in an Azure IoT Operations deployment. An IT administrator can use [Azure Arc site manager (preview)](/azure/azure-arc/site-manager/overview) to group Azure IoT Operations instances by physical location and make it easier for OT users to find instances.
## Deploy Azure IoT Operations runs on Arc-enabled Kubernetes clusters on the edge. You can deploy Azure IoT Operations by using the Azure portal or the Azure CLI. > [!NOTE]
-> During public preview, there's no support for upgrading an existing Azure IoT Operations deployment to a newer version. Instead, remove Azure IoT Operations from your cluster and then deploy the latest version. For more information, see [Update Azure IoT Operations](deploy-iot-ops/howto-deploy-iot-operations.md#update-azure-iot-operations).
+> During public preview, there's no support for upgrading an existing Azure IoT Operations deployment to a newer version. Instead, remove Azure IoT Operations from your cluster and then deploy the latest version. For more information, see [Update Azure IoT Operations](./deploy-iot-ops/howto-manage-update-uninstall.md#update).
## Manage devices and assets Azure IoT Operations can connect to various industrial devices and assets. You can use the [operations experience](discover-manage-assets/howto-manage-assets-remotely.md?tabs=portal) or the [Azure CLI](discover-manage-assets/howto-manage-assets-remotely.md?tabs=cli) to manage the devices and assets that you want to connect to.
-The [connector for OPC UA](discover-manage-assets/overview-opcua-broker.md) manages the connection to OPC UA servers and other leaf devices. The connector for OPC UA publishes data from the OPC UA servers and the devices discovered by _Akri services_ to MQTT broker topics.
+The [connector for OPC UA](discover-manage-assets/overview-opcua-broker.md) manages the connection to OPC UA servers and other leaf devices. The connector for OPC UA publishes data from the OPC UA servers to MQTT broker topics.
-The [Akri services](discover-manage-assets/overview-akri.md) help you discover and connect to other types of devices and assets.
+## Automatic asset discovery
+
+Automatic asset discovery using Akri services is not available in the current version of Azure IoT Operations. To learn more, see the [Release notes](https://github.com/Azure/azure-iot-operations/releases) for the current version.
+
+> [!NOTE]
+> Some Akri services are still deployed as part of the current Azure IoT Operations release, but they don't support any user configurable scenarios.
+
+If you're using a previous version of Azure IoT Operations, you can find the Akri documentation on the [previous versions site](/previous-versions/azure/iot-operations/discover-manage-assets/overview-akri).
## Publish and subscribe with MQTT
The [MQTT broker](manage-mqtt-broker/overview-iot-mq.md) runs on the edge. It le
Examples of how components in Azure IoT Operations use the MQTT broker include: * The connector for OPC UA publishes data from OPC UA servers and other leaf devices to MQTT topics.
-* Data processor pipelines subscribe to MQTT topics to retrieve messages for processing.
+* Dataflows subscribe to MQTT topics to retrieve messages for processing.
* Northbound cloud connectors subscribe to MQTT topics to fetch messages for forwarding to cloud services. ## Connect to the cloud
The northbound cloud connectors let you connect the MQTT broker directly to clou
## Process data
-In Azure IoT operations v0.6.0, the data processor is replaced by [dataflows](./connect-to-cloud/overview-dataflow.md). Dataflows provide enhanced data transformation and data contextualization capabilities within Azure IoT Operations.
+In Azure IoT operations v0.6.0, the data processor was replaced by [dataflows](./connect-to-cloud/overview-dataflow.md). Dataflows provide enhanced data transformation and data contextualization capabilities within Azure IoT Operations. Dataflows can use schemas stored in the schema registry to deserialize and serialize messages.
> [!NOTE]
-> If you want to continue using the data processor, you must deploy Azure IoT Operations v0.5.1 with the additional flag to include data processor component. It's not possible to deploy the data processor with Azure IoT Operations v0.6.0. The Azure IoT operations CLI extension that includes the flag for deploying the data processor is version 0.5.1b1. This version requires Azure CLI v2.46.0 or greater. The data processor documentation is currently available on the previous versions site: [Azure IoT Operations data processor](/previous-versions/azure/iot-operations/process-data/overview-data-processor).
-
-<!-- TODO: Fix the previous versions link before we publish -->
+> If you want to continue using the data processor, you must deploy Azure IoT Operations v0.5.1 with the additional flag to include the data processor component. It's not possible to deploy the data processor with Azure IoT Operations v0.6.0 or newer. The Azure IoT Operations CLI extension that includes the flag for deploying the data processor is version 0.5.1b1. This version requires Azure CLI v2.46.0 or greater. The data processor documentation is currently available on the previous versions site: [Azure IoT Operations data processor](/previous-versions/azure/iot-operations/process-data/overview-data-processor).
## Visualize and analyze telemetry
To secure communication between devices and the cloud through isolated network e
## Supported regions
-In the 0.6.x public preview release, Azure IoT Operations supports clusters that are Arc-enabled in the following regions:
-
-* East US
-* East US 2
-* West US
-* West US 2
-* West Europe
-* North Europe
-
->[!NOTE]
->West US 3 was supported in previous versions of Azure IoT Operations, but isn't supported in version 0.6.x.
+In the 0.7.x public preview release, Azure IoT Operations supports clusters that are Arc-enabled in the following regions:
+
+| Region | CLI value |
+|--|-|
+| East US | eastus |
+| East US 2 | eastus2 |
+| West US | westus |
+| West US 2 | westus2 |
+| West US 3 | westus3 |
+| West Europe | westeurope |
+| North Europe | northeurope |
This list of supported regions only applies to the region that you use when connecting your cluster to Azure Arc. This list doesn't restrict you from using your preferred Azure region for your cloud resources. Azure IoT Operations components and other resources deployed to your cluster in these supported regions can still connect to cloud resources in different regions.
iot-operations Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/reference/glossary.md
A unified data plane for the edge. It's a collection of modular, scalable, and h
On first mention in an article, use _Azure IoT Operations Preview - enabled by Azure Arc_. On subsequent mentions, you can use _Azure IoT Operations_. Never use an acronym.
-### Akri services
+### Dataflows
-This component helps you discover and connect to devices and assets.
-
-### Data processor
-
-This component lets you aggregate, enrich, normalize, and filter the data from your devices and assets. The data processor is a pipeline-based data processing engine that lets you process data at the edge before you send it to the other services either at the edge or in the cloud
+This component lets you aggregate, enrich, normalize, and filter the data from your devices and assets. Dataflows is a data processing engine that lets you process data at the edge before you send it to other services, either at the edge or in the cloud.
### Azure IoT Layered Network Management Preview
An MQTT broker that runs on the edge. The component lets you publish and subscri
### Connector for OPC UA
-This component manages the connection to OPC UA servers and other leaf devices. The connector for OPC UA publishes data from the OPC UA servers and the devices discovered by _Akri services_ to MQTT broker topics.
+This component manages the connection to OPC UA servers and other leaf devices. The connector for OPC UA publishes data from the OPC UA servers to MQTT broker topics.
### Operations experience
-This web UI provides a unified experience for operational technologists to manage assets and data processor pipelines in an Azure IoT Operations deployment.
+This web UI provides a unified experience for operational technologists to manage assets and dataflows in an Azure IoT Operations deployment.
### Azure Device Registry Preview
On first mention in an article, use _Azure Device Registry Preview_. On subseque
## Related content - [What is Azure IoT Operations Preview?](../overview-iot-operations.md)-- [Connect industrial assets using Azure IoT OPC UA Broker Preview](../discover-manage-assets/overview-opcua-broker.md)
+- [Connect industrial assets using the connector for OPC UA](../discover-manage-assets/overview-opcua-broker.md)
- [Publish and subscribe MQTT messages using MQTT broker](../manage-mqtt-broker/overview-iot-mq.md)
iot-operations Observability Metrics Akri https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/reference/observability-metrics-akri.md
- Title: Metrics for Akri services
-description: Available observability metrics for Akri services to monitor the health and performance of your solution.
----
- - ignite-2023
Previously updated : 11/1/2023-
-# CustomerIntent: As an IT admin or operator, I want to be able to monitor and visualize data
-# on the health of my industrial assets and edge environment.
--
-# Metrics for Akri services
--
-Akri services provide a set of observability metrics that you can use to monitor and analyze the health of your solution. This article lists the available metrics, and describes the meaning and usage details of each metric.
-
-## Available metrics
-
-| Metric name | Definition |
-| -- | - |
-| instance_count | The number of OPC UA assets that Akri services detects and adds as a custom resource to the cluster at the edge. |
-| discovery_response_result | The success or failure of every discovery request that the Agent sends to the Discovery Handler.|
-| discovery_response_time | The time in seconds from the point when Akri services apply the configuration, until the Agent makes the first discovery request.|
--
-## Related content
--- [Configure observability](../configure-observability-monitoring/howto-configure-observability.md)
iot-operations Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/troubleshoot/known-issues.md
Title: "Known issues: Azure IoT Operations Preview"
-description: Known issues for the MQTT broker, Layered Network Management, connector for OPC UA, OPC PLC simulator, data processor, and operations experience web UI.
+description: Known issues for the MQTT broker, Layered Network Management, connector for OPC UA, OPC PLC simulator, dataflows, and operations experience web UI.
- ignite-2023 Previously updated : 07/11/2024 Last updated : 09/19/2024 # Known issues: Azure IoT Operations Preview
This article lists the known issues for Azure IoT Operations Preview.
## Deploy and uninstall issues -- You must use the Azure CLI interactive login `az login` when you deploy Azure IoT Operations. If you don't, you might see an error such as _ERROR: AADSTS530003: Your device is required to be managed to access this resource_.
+- If you prefer that no updates are made to your cluster without your explicit consent, you should disable Arc updates when you enable the cluster, because the Arc agent automatically updates some system extensions.
-- When you use the `az iot ops delete` command to uninstall Azure IoT Operations, some custom Akri resources might not be deleted from the cluster. These Akri instances can cause issues if you redeploy Azure IoT Operations to the same cluster. You should manually delete any Akri instance custom resources from the cluster before you redeploy Azure IoT Operations.
+- Using your own cert-manager issuer is only supported for cert-manager versions less than 1.13.
+
+- The Azure storage account that you use for the schema registry must have public network access enabled.
- If your deployment fails with the `"code":"LinkedAuthorizationFailed"` error, it means that you don't have **Microsoft.Authorization/roleAssignments/write** permissions on the resource group that contains your cluster.
This article lists the known issues for Azure IoT Operations Preview.
- If deploying with an Azure Resource Manager template, set the `deployResourceSyncRules` parameter to `false`. - If deploying with the Azure CLI, include the `--disable-rsync-rules` flag with the [az iot ops init](/cli/azure/iot/ops#az-iot-ops-init) command. -- Uninstalling K3s: When you uninstall k3s on Ubuntu by using the `/usr/local/bin/k3s-uninstall.sh` script, you might encounter an issue where the script gets stuck on unmounting the NFS pod. A workaround for this issue is to run the following command before you run the uninstall script: `sudo systemctl stop k3s`.
+- Directly editing **SecretProviderClass** and **SecretSync** custom resources in your Kubernetes cluster can break the secrets flow in Azure IoT Operations. For any operations related to secrets, use the operations experience UI.
-## MQTT broker
+- By default, the `az iot ops check` command displays a warning about missing dataflows until you create a dataflow.
-- You can only access the default deployment by using the cluster IP, TLS, and a service account token. Clients outside the cluster need extra configuration before they can connect.
+## MQTT broker
- You can't update the Broker custom resource after the initial deployment. You can't make configuration changes to cardinality, memory profile, or disk buffer. As a workaround, when deploying Azure IoT Operations with the [az iot ops init](/cli/azure/iot/ops#az-iot-ops-init) command, you can include the `--broker-config-file` parameter with a JSON configuration file for the MQTT broker. For more information, see [Advanced MQTT broker config](https://github.com/Azure/azure-iot-ops-cli-extension/wiki/Advanced-Mqtt-Broker-Config) and [Configure core MQTT broker settings](../manage-mqtt-broker/howto-configure-availability-scale.md). -- You can't configure the size of a disk-backed buffer unless your chosen storage class supports it.- - Even though the MQTT broker's [diagnostics](../manage-mqtt-broker/howto-configure-availability-scale.md#configure-mqtt-broker-diagnostic-settings) produces telemetry on its own topic, you might still get messages from the self-test when you subscribe to `#` topic. -- Some clusters that have slow Kubernetes API calls may result in selftest ping failures: `Status {Failed}. Probe failed: Ping: 1/2` from running `az iot ops check` command.--- You might encounter an error in the KafkaConnector StatefulSet event logs such as `Invalid value: "mq-to-eventhub-connector-<token>--connectionstring": must be no more than 63 characters`. Ensure your KafkaConnector name is of maximum 5 characters.--- You may encounter timeout errors in the Kafka connector and Event Grid connector logs. Despite this, the connector will continue to function and forward messages.- - Deployment might fail if the **cardinality** and **memory profile** values are set to be too large for the cluster. To resolve this issue, set the replicas count to `1` and use a smaller memory profile, like `low`. ## Azure IoT Layered Network Management Preview
This article lists the known issues for Azure IoT Operations Preview.
## Connector for OPC UA -- All `AssetEndpointProfiles` in the cluster must be configured with the same transport authentication certificate, otherwise the connector for OPC UA might exhibit random behavior. To avoid this issue when using transport authentication, configure all asset endpoints with the same thumbprint for the transport authentication certificate in the Azure IoT Operations (preview) portal.--- If you deploy an `AssetEndpointProfile` into the cluster and the connector for OPC UA can't connect to the configured endpoint on the first attempt, then the connector for OPC UA never retries to connect.
+- Azure Device Registry asset definitions let you use numbers in the attribute section while OPC supervisor expects only strings.
- As a workaround, first fix the connection problem. Then either restart all the pods in the cluster with pod names that start with "aio-opc-opc.tcp", or delete the `AssetEndpointProfile` and deploy it again.
--- If you create an asset by using the operations experience web UI, the subject property for any messages sent by the asset is set to the `externalAssetId` value. In this case, the `subject` is a GUID rather than a friendly asset name.--- If your broker tries to connect to an untrusted server, it throws a `rejected to write to PKI` error. You can also encounter this error in assets and asset endpoint profiles.-
- As a workaround, add the server's certificate to the trusted certificates store as described in [Configure the trusted certificates list](../discover-manage-assets/howto-configure-opcua-certificates-infrastructure.md#configure-the-trusted-certificates-list).
-
- Or, you can [Optionally configure your AssetEndpointProfile without mutual trust established](../discover-manage-assets/howto-configure-opc-plc-simulator.md#optionally-configure-your-assetendpointprofile-without-mutual-trust-established). This workaround should not be used in production environments.
+- When you add a new asset with a new asset endpoint profile to the OPC UA broker and trigger a reconfiguration, the deployment of the `opc.tcp` pods changes to accommodate the new secret mounts for username and password. If the new mount fails for some reason, the pod doesn't restart, so the existing flow for the correctly configured assets stops as well.
## OPC PLC simulator
kubectl patch AssetEndpointProfile $ENDPOINT_NAME \
``` > [!CAUTION]
-> Don't use this configuration in production or pre-production environments. Exposing your cluster to the internet without proper authentication might lead to unauthorized access and even DDOS attacks.
+> Don't use this configuration in production or preproduction environments. Exposing your cluster to the internet without proper authentication might lead to unauthorized access and even DDOS attacks.
You can patch all your asset endpoints with the following command:
kubectl delete pod aio-opc-opc.tcp-1-f95d76c54-w9v9c -n azure-iot-operations
## Dataflows -- Sending data to ADX, ADLSv2, and Fabric OneLake are not available in Azure IoT Operations version 0.6.x. Support for these endpoints will be added back in an upcoming preview release.--- By default, dataflows don't send MQTT message user properties to Kafka destinations. These user properties include values such as `subject` that store the name of the asset sending the message. To include user properties in the Kafka message, update the `DataflowEndpoint` configuration to include: `copyMqttProperties: enabled`. For example:-
- ```yaml
- apiVersion: connectivity.iotoperations.azure.com/v1beta1
- kind: DataflowEndpoint
- metadata:
- name: kafka-target
- namespace: azure-iot-operations
- spec:
- endpointType: kafkaSettings
- kafkaSettings:
- host: "<NAMESPACE>.servicebus.windows.net:9093"
- batching:
- latencyMs: 0
- maxMessages: 100
- tls:
- mode: Enabled
- copyMqttProperties: enabled
- authentication:
- method: SystemAssignedManagedIdentity
- systemAssignedManagedIdentitySettings:
- audience: https://<NAMESPACE>.servicebus.windows.net
- ```
--- Currently, you can't track a value by using the last known value flag, `?$last`, in your dataflows configuration. Until a bug fix is in place, the workaround to is to deploy Azure IoT Operations version 0.5.1 and use data processor.--- Dataflows profile scaling iwth `instanceCount` is limited to `1` for Azure IoT Operations version 0.6.x.--- Configuration using Azure Resource Manager isn't supported. Instead, configure dataflows using `kubectl` and YAML files as documented.-
-## Akri services
-
-When Akri services generate an asset endpoint for a discovered asset, the configuration may contain an invalid setting that prevents the asset from connecting to the MQTT broker. To resolve this issue, edit the `AssetEndpointProfile` configuration and remove the `"securityMode":"none"` setting from thr `additionalConfiguration` property. For example, the configuration for the `opc-ua-broker-opcplc-000000-50000` asset endpoint generated in the quickstarts should look like the following example:
-
-```yml
-apiVersion: deviceregistry.microsoft.com/v1beta1
-kind: AssetEndpointProfile
-metadata:
- creationTimestamp: "2024-08-05T11:41:21Z"
- generation: 2
- name: opc-ua-broker-opcplc-000000-50000
- namespace: azure-iot-operations
- resourceVersion: "233018"
- uid: f9cf479f-7a77-49b5-af88-18d509e9cdb0
-spec:
- additionalConfiguration: '{"applicationName":"opc-ua-broker-opcplc-000000-50000","keepAliveMilliseconds":10000,"defaults":{"publishingIntervalMilliseconds":1000,"queueSize":1,"samplingIntervalMilliseconds":1000},"subscription":{"maxItems":1000,"lifetimeMilliseconds":360000},"security":{"autoAcceptUntrustedServerCertificates":true,"securityPolicy":"http://opcfoundation.org/UA/SecurityPolicy#None"},"session":{"timeoutMilliseconds":60000,"keepAliveIntervalMilliseconds":5000,"reconnectPeriodMilliseconds":500,"reconnectExponentialBackOffMilliseconds":10000}}'
- targetAddress: "\topc.tcp://opcplc-000000:50000"
- transportAuthentication:
- ownCertificates: []
- userAuthentication:
- mode: Anonymous
- uuid: opc-ua-broker-opcplc-000000-50000
-```
-
-A sporadic issue might cause the `aio-opc-asset-discovery` pod to restart with the following error in the logs: `opcua@311 exception="System.IO.IOException: Failed to bind to address http://unix:/var/lib/akri/opcua-asset.sock: address already in use.`.
-
-To work around this issue, use the following steps to update the **DaemonSet** specification:
-
-1. Locate the **target** custom resource provided by `orchestration.iotoperations.azure.com` with a name that ends with `-ops-init-target`:
-
- ```console
- kubectl get targets -n azure-iot-operations
- ```
-
-1. Edit the target configuration and find the `spec.components.aio-opc-asset-discovery.properties.resource.spec.template.spec.containers.env` parameter. For example:
-
- ```console
- kubectl edit target solid-zebra-97r6jr7rw43vqv-ops-init-target -n azure-iot-operations
- ```
-
-1. Add the following environment variables to the `spec.components.aio-opc-asset-discovery.properties.resource.spec.template.spec.containers.env` configuration section:
-
- ```yml
- - name: ASPNETCORE_URLS
- value: http://+8443
- - name: POD_IP
- valueFrom:
- fieldRef:
- fieldPath: "status.podIP"
- ```
-
-1. Save your changes. The final specification looks like the following example:
-
- ```yml
- apiVersion: orchestrator.iotoperations.azure.com/v1
- kind: Target
- metadata:
- name: <cluster-name>-target
- namespace: azure-iot-operations
- spec:
- displayName: <cluster-name>-target
- scope: azure-iot-operations
- topologies:
- ...
- version: 1.0.0.0
- components:
- ...
- - name: aio-opc-asset-discovery
- type: yaml.k8s
- properties:
- resource:
- apiVersion: apps/v1
- kind: DaemonSet
- metadata:
- labels:
- app.kubernetes.io/part-of: aio
- name: aio-opc-asset-discovery
- spec:
- selector:
- matchLabels:
- name: aio-opc-asset-discovery
- template:
- metadata:
- labels:
- app.kubernetes.io/part-of: aio
- name: aio-opc-asset-discovery
- spec:
- containers:
- - env:
- - name: ASPNETCORE_URLS
- value: http://+8443
- - name: POD_IP
- valueFrom:
- fieldRef:
- fieldPath: status.podIP
- - name: DISCOVERY_HANDLERS_DIRECTORY
- value: /var/lib/akri
- - name: AKRI_AGENT_REGISTRATION
- value: 'true'
- image: >-
- edgeappmodel.azurecr.io/opcuabroker/discovery-handler:0.4.0-preview.3
- imagePullPolicy: Always
- name: aio-opc-asset-discovery
- ports: ...
- resources: ...
- volumeMounts: ...
- volumes: ...
- ```
-
-## Operations experience web UI
+- You can't use anonymous authentication for MQTT and Kafka endpoints when you deploy dataflow endpoints from the operations experience UI. The current workaround is to use a YAML configuration file and apply it by using `kubectl`.
-To sign in to the operations experience, you need a Microsoft Entra ID account with at least contributor permissions for the resource group that contains your **Kubernetes - Azure Arc** instance. You can't sign in with a Microsoft account (MSA). To create an account in your Azure tenant:
+- Changing the instance count in a dataflow profile on an active dataflow might result in new messages being discarded or in messages being duplicated on the destination.
-1. Sign in to the [Azure portal](https://portal.azure.com/) with the same tenant and user name that you used to deploy Azure IoT Operations.
-1. In the Azure portal, go to the **Microsoft Entra ID** section, select **Users > +New user > Create new user**. Create a new user and make a note of the password, you need it to sign in later.
-1. In the Azure portal, go to the resource group that contains your **Kubernetes - Azure Arc** instance. On the **Access control (IAM)** page, select **+Add > Add role assignment**.
-1. On the **Add role assignment page**, select **Privileged administrator roles**. Then select **Contributor** and then select **Next**.
-1. On the **Members** page, add your new user to the role.
-1. Select **Review and assign** to complete setting up the new user.
+- When you create a dataflow, if you set the `dataSources` field as an empty list, the dataflow crashes. The current workaround is to always enter at least one value in the data sources.
-You can now use the new user account to sign in to the [Azure IoT Operations](https://iotoperations.azure.com) portal.
+- Dataflow custom resources created in your cluster aren't visible in the operations experience UI. This is expected because synchronizing dataflow resources from the edge to the cloud isn't currently supported.
iot-operations Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/troubleshoot/troubleshoot.md
This article contains troubleshooting tips for Azure IoT Operations Preview.
For general deployment and configuration troubleshooting, you can use the Azure CLI IoT Operations *check* and *support* commands.
-[Azure CLI version 2.46.0 or higher](/cli/azure/install-azure-cli) is required and the [Azure IoT Operations extension](/cli/azure/iot/ops) installed.
+[Azure CLI version 2.52.0 or higher](/cli/azure/install-azure-cli) is required and the [Azure IoT Operations extension](/cli/azure/iot/ops) installed.
- Use [az iot ops check](/cli/azure/iot/ops#az-iot-ops-check) to evaluate Azure IoT Operations service deployment for health, configuration, and usability. The *check* command can help you find problems in your deployment and configuration. - Use [az iot ops support create-bundle](/cli/azure/iot/ops/support#az-iot-ops-support-create-bundle) to collect logs and traces to help you diagnose problems. The *support create-bundle* command creates a standard support bundle zip archive you can review or provide to Microsoft Support.
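A minimal usage sketch of both commands, run against the cluster in your current kubeconfig context:

```bash
# Evaluate the Azure IoT Operations deployment for health, configuration, and usability.
az iot ops check

# Collect logs and traces into a support bundle zip archive for diagnosis.
az iot ops support create-bundle
```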
-## Data processor pipeline deployment troubleshooting
-
-If your data processor pipeline deployment status is showing as **Failed**, use the following commands to find the pipeline error codes.
-
-To list the data processor pipeline deployments, run the following command:
-
-```bash
-kubectl get pipelines -A
-```
-
-The output from the pervious command looks like the following example:
-
-```text
-NAMESPACE NAME AGE
-azure-iot-operations passthrough-data-pipeline 2d20h
-azure-iot-operations reference-data-pipeline 2d20h
-azure-iot-operations contextualized-data-pipeline 2d20h
-```
-
-To view detailed information for a pipeline, run the following command:
-
-```bash
-kubectl describe pipelines passthrough-data-pipeline -n azure-iot-operations
-```
-
-The output from the previous command looks like the following example:
-
-```text
-...
-Status:
- Provisioning Status:
- Error
- Code: <ErrorCode>
- Message: <ErrorMessage>
- Status: Failed
-Events: <none>
-```
-
-If you see the following message when you try to access the **Pipelines** tab in the Azure IoT Operations (preview) portal:
-
-_Data Processor not found in the current deployment. Please re-deploy with the additional argument to include the data processor._
-
-You need to deploy Azure IoT Operations with the optional data processor component included. To do this, you need to add the `--include-dp` argument when you run the [az iot ops init](/cli/azure/iot/ops#az-iot-ops-init) command. You must use the `--include-dp` argument to include the data processor component when you first deploy Azure IoT Operations. You can't add this optional component to an existing deployment.
-
-> [!TIP]
-> If you want to delete the Azure IoT Operations deployment but plan on reinstalling it on your cluster, use the [az iot ops delete](/cli/azure/iot/ops?az-iot-ops-delete) command.
- ## Azure IoT Layered Network Management Preview troubleshooting The troubleshooting guidance in this section is specific to Azure IoT Operations when using the Layered Network Management component. For more information, see [How does Azure IoT Operations Preview work in layered network?](../manage-layered-network/concept-iot-operations-in-layered-network.md).
The troubleshooting guidance in this section is specific to Azure IoT Operations
If the Layered Network Management operator install fails or you can't apply the custom resource for a Layered Network Management instance:
-1. Verify the regions are supported for public preview. Public preview supports eight regions. For more information, see [Quickstart: Run Azure IoT Operations Preview in Github Codespaces with K3s](../get-started-end-to-end-sample/quickstart-deploy.md#connect-a-kubernetes-cluster-to-azure-arc).
+1. Verify the regions are supported for public preview. Public preview supports eight regions. For more information, see [Quickstart: Run Azure IoT Operations Preview in GitHub Codespaces with K3s](../get-started-end-to-end-sample/quickstart-deploy.md#connect-a-kubernetes-cluster-to-azure-arc).
1. If there are any other errors in installing Layered Network Management Arc extensions, follow the guidance included with the error. Try uninstalling and installing the extension. 1. Verify the Layered Network Management operator is in the *Running and Ready* state. 1. If applying the custom resource `kubectl apply -f cr.yaml` fails, the output of this command lists the reason for error. For example, CRD version mismatch or wrong entry in CRD.
load-balancer Inbound Nat Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/inbound-nat-rules.md
There are two types of inbound NAT rule available for Azure Load Balancer, versi
Inbound NAT rule V1 is defined for a single target virtual machine. Inbound NAT pools are a feature of Inbound NAT rules V1 and automatically create inbound NAT rules per VMSS instance. The load balancer's frontend IP address and the selected frontend port are used for connections to the virtual machine.
+>[!Important]
+> On September 30, 2027, Inbound NAT rules v1 will be retired. If you are currently using Inbound NAT rules v1, make sure to upgrade to Inbound NAT rules v2 prior to the retirement date.
++ :::image type="content" source="./media/inbound-nat-rules/inbound-nat-rule.png" alt-text="Diagram of a single virtual machine inbound NAT rule."::: ### Inbound NAT rule V2
load-balancer Tutorial Nat Rule Multi Instance Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/tutorial-nat-rule-multi-instance-portal.md
Title: "Tutorial: Create a multiple virtual machine inbound NAT rule - Azure portal"
+ Title: "Tutorial: Create Inbound NAT rule V2 - Azure portal"
description: In this tutorial, learn how to configure port forwarding using Azure Load Balancer to create a connection to multiple virtual machines in an Azure virtual network.
Last updated 09/30/2024
-# Tutorial: Create a multiple virtual machine inbound NAT rule using the Azure portal
+# Tutorial: Create inbound NAT rule V2 using the Azure portal
Inbound NAT rules allow you to connect to virtual machines (VMs) in an Azure virtual network by using an Azure Load Balancer public IP address and port number.
the virtual machines and load balancer with the following steps:
Advance to the next article to learn how to create a cross-region load balancer: > [!div class="nextstepaction"]
-> [Create a cross-region load balancer using the Azure portal](tutorial-cross-region-portal.md)
+> [Create a cross-region load balancer using the Azure portal](tutorial-cross-region-portal.md)
migrate How To Scale Out For Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-scale-out-for-migration.md
ms. Previously updated : 06/20/2024 Last updated : 10/03/2024
This article helps you understand how to use a scale-out appliance to migrate a
Using the agentless migration method for VMware virtual machines you can: -- Replicate up to 300 VMs from a single vCenter server concurrently using one Azure Migrate appliance.-- Replicate up to 500 VMs from a single vCenter server concurrently by deploying a second scale-out appliance for migration.
+- Schedule replication for up to 300 VMs from a single vCenter server concurrently using one Azure Migrate appliance.
+- Schedule replication for up to 500 VMs from a single vCenter server concurrently by deploying a second scale-out appliance for migration.
-In this article, you will learn how to:
+In this article, you learn how to:
- Add a scale-out appliance for agentless migration of VMware virtual machines - Migrate up to 500 VMs concurrently using the scale-out appliance.
+> [!NOTE]
+> While you can schedule replication for up to 300 VMs on a single appliance and up to 500 VMs using a scale-out appliance, each appliance can replicate only 56 disks at a time. Although the VMs are scheduled concurrently, they're replicated sequentially based on the appliance's available capacity. All scheduled VMs are eventually replicated by the same appliance, but not all of them start replicating immediately.
++ ## Prerequisites Before you get started, you need to perform the following steps:
To learn how to perform the above, review the tutorial on [migrating VMware virt
To add a scale-out appliance, follow the steps mentioned below:
-1. Click on **Discover** > **Are your machines virtualized?**
+1. Select **Discover** > **Are your machines virtualized?**
1. Select **Yes, with VMware vSphere Hypervisor.** 1. Select agentless replication in the next step. 1. Select **Scale-out an existing primary appliance** in the **Select the type of appliance** menu.
To add a scale-out appliance, follow the steps mentioned below:
### 1. Generate the Azure Migrate project key 1. In **Generate Azure Migrate project key**, provide a suffix name for the scale-out appliance. The suffix can contain only alphanumeric characters and has a length limit of 14 characters.
-2. Click **Generate key** to start the creation of the required Azure resources. Do not close the Discover page during the creation of resources.
-3. Copy the generated key. You will need the key later to complete the registration of the scale-out appliance.
+2. Select **Generate key** to start the creation of the required Azure resources. Don't close the Discover page during the creation of resources.
+3. Copy the generated key. You'll need the key later to complete the registration of the scale-out appliance.
### 2. Download the installer for the scale-out appliance
-In **Download Azure Migrate appliance**, click **Download**. You need to download the PowerShell installer script to deploy the scale-out appliance on an existing server running Windows Server 2019 or Windows Server 2022 and with the required hardware configuration (32-GB RAM, 8 vCPUs, around 80 GB of disk storage and internet access, either directly or through a proxy).
+In **Download Azure Migrate appliance**, select **Download**. You need to download the PowerShell installer script to deploy the scale-out appliance on an existing server running Windows Server 2019 or Windows Server 2022 and with the required hardware configuration (32-GB RAM, 8 vCPUs, around 80 GB of disk storage and internet access, either directly or through a proxy).
:::image type="content" source="./media/how-to-scale-out-for-migration/download-scale-out.png" alt-text="Download script for scale-out appliance":::
In **Download Azure Migrate appliance**, click **Download**. You need to downlo
`PS C:\Users\administrator\Desktop\AzureMigrateInstaller> .\AzureMigrateInstaller.ps1 `
-5. Select from the scenario, cloud, configuration and connectivity options to deploy the desired appliance. For instance, the selection shown below sets up a **scale-out appliance** to initiate concurrent replications on servers running in your VMware environment to an Azure Migrate project with **default _(public endpoint)_ connectivity** on **Azure public cloud**.
+5. Select from the scenario, cloud, configuration, and connectivity options to deploy the desired appliance. For instance, the selection shown below sets up a **scale-out appliance** to initiate concurrent replications on servers running in your VMware environment to an Azure Migrate project with **default _(public endpoint)_ connectivity** on **Azure public cloud**.
:::image type="content" source="./media/how-to-scale-out-for-migration/script-vmware-scaleout-inline.png" alt-text="Screenshot that shows how to set up scale-out appliance." lightbox="./media/how-to-scale-out-for-migration/script-vmware-scaleout-expanded.png":::
In the configuration manager, select **Set up prerequisites**, and then complete
> This is a new user experience in the Azure Migrate appliance, which is available only if you set up an appliance using the latest OVA/installer script downloaded from the portal. Appliances that are already registered continue to see the older version of the user experience and continue to work without any issues. 1. For the appliance to run auto-update, paste the project key that you copied from the portal. If you don't have the key, go to **Azure Migrate: Discovery and assessment** > **Overview** > **Manage existing appliances**. Select the appliance name you provided when you generated the project key, and then copy the key that's shown.
- 2. The appliance will verify the key and start the auto-update service, which updates all the services on the appliance to their latest versions. When the auto-update has run, you can select **View appliance services** to see the status and versions of the services running on the appliance server.
+ 2. The appliance verifies the key and starts the auto-update service, which updates all the services on the appliance to their latest versions. When the auto-update has run, you can select **View appliance services** to see the status and versions of the services running on the appliance server.
3. To register the appliance, you need to select **Login**. In **Continue with Azure Login**, select **Copy code & Login** to copy the device code (you must have a device code to authenticate with Azure) and open an Azure Login prompt in a new browser tab. Make sure you've disabled the pop-up blocker in the browser to see the prompt.
- :::image type="content" source="./media/tutorial-discover-vmware/device-code.png" alt-text="Screenshot that shows where to copy the device code and log in.":::
+ :::image type="content" source="./media/tutorial-discover-vmware/device-code.png" alt-text="Screenshot that shows where to copy the device code and sign in.":::
4. In a new tab in your browser, paste the device code and sign in by using your Azure username and password. Signing in with a PIN isn't supported. > [!NOTE] > If you close the login tab accidentally without logging in, refresh the browser tab of the appliance configuration manager to display the device code and Copy code & Login button.
- 5. After you successfully log in, return to the browser tab that displays the appliance configuration manager. If the Azure user account that you used to log in has the required permissions for the Azure resources that were created during key generation, appliance registration starts.
+ 5. After you successfully sign in, return to the browser tab that displays the appliance configuration manager. If the Azure user account that you used to sign in has the required permissions for the Azure resources that were created during key generation, appliance registration starts.
After the appliance is successfully registered, to see the registration details, select **View details**. #### Import appliance configuration from primary appliance
-To complete the registration of the scale-out appliance, click **import** to get the necessary configuration files from the primary appliance.
+To complete the registration of the scale-out appliance, select **import** to get the necessary configuration files from the primary appliance.
1. Clicking **import** opens a pop-up window with instructions on how to import the necessary configuration files from the primary appliance. :::image type="content" source="./media/how-to-scale-out-for-migration/import-modal-scale-out.png" alt-text="Screenshot of the Import Configuration files modal.":::
-1. Login (remote desktop) to the primary appliance and execute the following PowerShell commands:
+1. Sign in (remote desktop) to the primary appliance and execute the following PowerShell commands:
`PS cd 'C:\Program Files\Microsoft Azure Appliance Configuration Manager\Scripts\PowerShell' `
To complete the registration of the scale-out appliance, click **import** to get
1. Copy the zip file created by running the above commands to the scale-out appliance. The zip file contains configuration files needed to register the scale-out appliance.
-1. In the pop-up window opened in the previous step, select the location of the copied configuration zip file and click **Save**.
+1. In the pop-up window opened in the previous step, select the location of the copied configuration zip file and select **Save**.
- Once the files have been successfully imported, the registration of the scale-out appliance will complete and it will show you the timestamp of the last successful import. You can also see the registration details by clicking **View details**.
+ Once the files are successfully imported, the registration of the scale-out appliance completes and it shows you the timestamp of the last successful import. You can also see the registration details by selecting **View details**.
1. **Install the VDDK**: The appliance checks that VMware vSphere Virtual Disk Development Kit (VDDK) is installed. If the VDDK isn't installed, download VDDK 6.7, 7, or 8 (depending on the compatibility of VDDK and ESXi versions) from VMware. Extract the downloaded zip file contents to the specified location on the appliance, as indicated in the *Installation instructions*. The Migration and modernization tool uses the VDDK to replicate servers during migration to Azure. You can *rerun prerequisites* at any time during appliance configuration to check whether the appliance meets all the prerequisites.
-At this point, you should revalidate that the scale-out appliance is able to connect to your vCenter server. Click **revalidate** to validate vCenter Server connectivity from scale-out appliance.
+At this point, you should revalidate that the scale-out appliance can connect to your vCenter server. Select **revalidate** to validate vCenter Server connectivity from the scale-out appliance.
:::image type="content" source="./media/how-to-scale-out-for-migration/view-sources.png" alt-text="Screenshot shows view credentials and discovery sources to be validated."::: > [!IMPORTANT]
At this point, you should revalidate that the scale-out appliance is able to con
With the scale-out appliance in place, you can now replicate 500 VMs concurrently. You can also migrate VMs in batches of 200 through the Azure portal.
-The Migration and modernization tool will take care of distributing the virtual machines between the primary and scale-out appliance for replication. Once the replication is done, you can migrate the virtual machines.
+The Migration and modernization tool takes care of distributing the virtual machines between the primary and scale-out appliance for replication. Once the replication is done, you can migrate the virtual machines.
> [!TIP] > We recommend migrating virtual machines in batches of 200 for optimal performance if you want to migrate a large number of virtual machines.
modeling-simulation-workbench Troubleshoot Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/modeling-simulation-workbench/troubleshoot-known-issues.md
VENDOR snpslmd /path/to/snpslmd 27021
## Users on public IP connector with IP on allowlist can't access workbench desktop or data pipeline
-A chamber with a public IP connector configured to allow users who's IP is listed after the first entry of the allowlist can't access the chamber either through the desktop or data pipeline using AzCopy. If the allowlist on a public IP connector contains overlapping networks, in some instances the preprocessor might fail to detect the overlapping networks before attempting to commit them to the active NSG. Failures aren't reported back to the user. Other NSG rules elsewhere - either before or after the interfering rule - might not be processed, defaulting to the "deny all" rule. Access to the connector might be blocked unexpectedly for users that previously had access and appear elsewhere in the list. Access is blocked for all connector interactions including desktop, data pipeline upload, and data pipeline download. The connector still responds to port queries, but doesn't allow interactions from an IP or IP range shown in the connector networking allowlist.
+A chamber with a public IP connector configured to allow users whose IP address is listed after the first entry of the allowlist can't be accessed through either the desktop or the data pipeline using AzCopy. If the allowlist on a public IP connector contains overlapping networks, in some instances the preprocessor might fail to detect the overlapping networks before attempting to commit them to the active NSG. Failures aren't reported back to the user. Other NSG rules elsewhere - either before or after the interfering rule - might not be processed, defaulting to the "deny all" rule. Access to the connector might be blocked unexpectedly for users that previously had access and appear elsewhere in the list. Access is blocked for all connector interactions including desktop, data pipeline upload, and data pipeline download. The connector still responds to port queries, but doesn't allow interactions from an IP or IP range shown in the connector networking allowlist.
### Prerequisites
is known, describe the steps to take to correct the issue.
In an H3 section, describe potential causes. >
+-->
nat-gateway Troubleshoot Nat And Azure Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/nat-gateway/troubleshoot-nat-and-azure-services.md
NAT gateway can be deployed with AKS clusters to allow for explicit outbound connectivity.
Learn more at [Managed NAT Gateway](/azure/aks/nat-gateway).
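As a minimal sketch (cluster name, IP count, and timeout are illustrative), an AKS cluster that uses a managed NAT gateway for outbound traffic might be created like this:

```azurecli
az aks create \
    --resource-group myResourceGroup \
    --name myNatCluster \
    --outbound-type managedNATGateway \
    --nat-gateway-managed-outbound-ip-count 2 \
    --nat-gateway-idle-timeout 4 \
    --generate-ssh-keys
```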
+### Connecting from AKS cluster to the AKS API server over the internet
+
+To manage an AKS cluster, you interact with its API server. When you create a non-private cluster that resolves to the API server's fully qualified domain name (FQDN), the API server is assigned a public IP address by default. After you attach a NAT gateway to the subnets of your AKS cluster, the NAT gateway is used to connect to the public IP of the AKS API server. See the following documentation for additional information and design guidance (a CLI sketch follows this list):
+* [Access an AKS API server](/azure/architecture/guide/security/access-azure-kubernetes-service-cluster-api-server)
+* [Use API Management with AKS](/azure/api-management/api-management-kubernetes)
+* [Define API server authorized IP ranges](/azure/aks/api-server-authorized-ip-ranges)
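If the cluster also uses API server authorized IP ranges, the NAT gateway's outbound public IP typically needs to be included in those ranges so that cluster nodes can still reach the API server. A minimal sketch, assuming illustrative resource names and example addresses:

```azurecli
# Look up the NAT gateway's outbound public IP (names are placeholders)
az network public-ip show \
    --resource-group myResourceGroup \
    --name myNatGatewayPublicIP \
    --query ipAddress --output tsv

# Include that IP, plus any administrative ranges, in the authorized API server ranges
az aks update \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --api-server-authorized-ip-ranges 203.0.113.10/32,198.51.100.0/24
```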
+ ### Can't update my NAT gateway IPs or idle timeout timer for an AKS cluster Public IP addresses and the idle timeout timer for NAT gateway can be updated with the `az aks update` command for a managed NAT gateway ONLY.
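As a hedged example (cluster name and values are placeholders), the outbound IP count and idle timeout of a managed NAT gateway can be adjusted like this:

```azurecli
az aks update \
    --resource-group myResourceGroup \
    --name myNatCluster \
    --nat-gateway-managed-outbound-ip-count 5 \
    --nat-gateway-idle-timeout 10
```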
oracle Oracle Database Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/oracle/oracle-db/oracle-database-troubleshoot.md
We recommend the removal of all Microsoft locks to Oracle Database@Azure resour
In this section, you'll find information about networking and how it can affect Oracle Database@Azure. ### IP address requirement differences between Oracle Database@Azure and Exadata in OCI
-IP address requirements are different between Oracle Database@Azure and in OCI. In the [Requirements for IP Address Space](https://docs.oracle.com/iaas/exadatacloud/doc/ecs-network-setup.html#ECSCM-GUID-D5C577A1-BC11-470F-8A91-77609BBEF1EA) documentation for Exadata in , the following differences with the requirements of Oracle Database@Azure must be considered:
+IP address requirements are different between Oracle Database@Azure and Exadata in OCI. In the [Requirements for IP Address Space](https://docs.oracle.com/iaas/exadatacloud/doc/ecs-network-setup.html#ECSCM-GUID-D5C577A1-BC11-470F-8A91-77609BBEF1EA) documentation for Exadata in OCI, the following differences with the requirements of Oracle Database@Azure must be considered:
- Oracle Database@Azure only supports Exadata X9M. All other shapes are unsupported.
oracle Provision Manage Oracle Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/oracle/oracle-db/provision-manage-oracle-resources.md
The following management functions are available for all resources from the Micr
1. Follow the steps to access the resource blade. 1. You can remove a single or multiple resources from the blade by selecting the checkbox on the left side of the table. Once you have selected the resource(s) to remove, you can then select the **Delete** icon at the top of the blade. 1. You can also remove a single resource by selecting the link to the resource from the **Name** field in the table. From the resource's detail page, select the **Delete** icon at the top of the blade.
-* **Move the resource to a new resource group.**
- 1. Follow the steps to access the resource blade.
- 1. Select the link to the resource from the **Name** field in the table.
- 1. From the resource's overview page, select the **Move** link on the Resource group field.
- 1. From the **Move resources** page, use the drop-down field for **Resource group** to select an existing resource group.
- 1. To create and use a new resource group, select the **Create new** link below the Resource group field. Enter a new resource group name in the **Name** field. Select the **OK** button to save your new resource group and use it. Select the **Cancel** button to return without creating a new resource group.
-* **Move the resource to a new subscription.**
-
- > [!NOTE]
- > You must have access to another Microsoft Azure subscription, and that subscription must have been setup for access to OracleDB@Azure. If both of these conditions are not met, you will not be able to successfully move the resource to another subscription.
-
- 1. Follow the steps to access the resource blade.
- 1. Select the link to the resource from the Name field in the table.
- 1. From the resource's overview page, select the **Move** link on the Subscription field.
- 1. From the **Move resources** page, use the drop-down field for **Subscription** to select an existing subscription.
- 1. You can also simultaneously move the resource group for the resource. To do this, note the steps in the **Move the resource to a new resource group** tasks. **Add, manage, or delete tags for the resource.**
- 1. Follow the steps to access the resource blade.
- 1. Select the link to the resource from the Name field in the table.
- 1. From the resource's overview page, select the **Edit** link on the **Tags** field.
- 1. To create a new tag, enter values in the **Name** and **Value** fields.
- 1. To edit an existing tag, change the value in the existing tag's **Value** field.
- 1. To delete an existing tag, select the **Trashcan** icon at the right-side of the tag.
## Manage resource allocation for Oracle Autonomous Database Serverless instances
Follow these steps to access the Oracle Autonomous Database@Azure blade.
1. Select the link to the resource from the **Name** field in the table. 1. From the resource's detail page, select the **Go to OCI** link on the **OCI Database URL** field. 1. Log in to OCI.
- 1. Manage the resource from within the OCI console.
+ 1. Manage the resource from within the OCI console.
orbital About Ground Stations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/about-ground-stations.md
Title: Azure Orbital Ground Stations - About Microsoft and partner ground stations
-description: Provides specs on Microsoft ground stations and outlines partner ground station network.
-
+ Title: About Microsoft and partner ground stations
+description: Offers specifications and definitions for Microsoft ground stations and partner ground station network.
Last updated 10/20/2023-++ #Customer intent: As a satellite operator or user, I want to learn about Microsoft and partner ground stations.
orbital Concepts Contact Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/concepts-contact-profile.md
Last updated 07/13/2022-+ #Customer intent: As a satellite operator or user, I want to understand how to use the contact profile so that I can take passes using Azure Orbital Ground Station.
orbital Concepts Contact https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/concepts-contact.md
Title: Contact resource - Azure Orbital Ground Station
-description: Learn more about a contact resource and how to schedule a contact.
+description: Learn more about the contact resource and how to schedule and execute a contact with your spacecraft on a ground station.
Last updated 07/13/2022-+ #Customer intent: As a satellite operator or user, I want to understand how to what the contact resource is so I can manage my mission operations.
orbital Contact Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/contact-profile.md
Title: Azure Orbital Ground Station - Configure a contact profile
-description: Learn how to configure a contact profile
+description: Learn how to configure a contact profile with Azure Orbital Ground Station to save and reuse contact configurations.
Last updated 12/06/2022-+ # Customer intent: As a satellite operator, I want to ingest data from my satellite into Azure.
orbital Downlink Aqua https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/downlink-aqua.md
Title: Azure Orbital Ground Station - Downlink data from public satellites
+ Title: Downlink data from public satellites
description: Learn how to schedule a contact with public satellites by using the Azure Orbital Ground Station service. Last updated 07/12/2022-+ # Customer intent: As a satellite operator, I want to ingest data from NASA's public satellites into Azure.
orbital Geospatial Reference Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/geospatial-reference-architecture.md
Title: Geospatial reference architecture - Azure Orbital
-description: Show how to architect end-to-end geospatial data on Azure.
+description: A high-level approach to using cloud-native capabilities, plus open-source and commercial software options, to architect end-to-end geospatial data solutions on Azure.
Last updated 06/13/2022-+ #Customer intent: As a geospatial architect, I'd like to understand how to architect a solution on Azure. # End-to-end geospatial storage, analysis, and visualization
orbital Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/get-started.md
Title: Azure Orbital Ground Station - Get started
-description: How to get started with Azure Orbital Ground Station.
+description: How to get started with Azure Orbital Ground Station, used to communicate with a private satellite or a selection of public satellites.
Last updated 8/4/2023-+ # Customer intent: As a satellite operator, I want to learn how to get started with Azure Orbital Ground Station.
orbital Initiate Licensing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/initiate-licensing.md
Title: Azure Orbital Ground Station - Initiate ground station licensing
-description: How to initiate ground station licensing
+ Title: Initiate ground station licensing
+description: How to initiate ground station licensing. Satellites and ground stations require authorizations from federal regulators and other government agencies.
Last updated 10/12/2023-+ #Customer intent: As a satellite operator or user, I want to learn about ground station licensing.
orbital Mission Phases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/mission-phases.md
Title: Azure Orbital Ground Station - Mission Phases
-description: Points users to relevant resources depending on the phase of their mission.
+description: Points users to relevant resources, like secure access to communication products and services, depending on the phase of their mission.
Last updated 10/12/2023-+ #Customer intent: As a satellite operator or user, I want to know how to use AOGS at each phase in my satellite mission.
orbital Modem Chain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/modem-chain.md
Title: Configure the RF chain - Azure Orbital
-description: Learn more about how to configure modems.
+description: Learn more about how to configure modems, either managed modems or virtual RF functionality using the Azure Orbital Ground Station service.
Last updated 08/30/2022-+ #Customer intent: As a satellite operator or user, I want to understand how to use software modems to establish RF connections with my satellite.
orbital Organize Stac Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/organize-stac-data.md
Title: Organize spaceborne geospatial data with STAC - Azure Orbital Analytics
+ Title: Organize spaceborne geospatial data with STAC
description: Create an implementation of SpatioTemporal Asset Catalog (STAC) creation to structure geospatial data. Last updated 09/29/2022-+ # Organize spaceborne geospatial data with SpatioTemporal Asset Catalog (STAC)
Here are a couple of examples:
## Architecture Download a [Visio file](https://download.microsoft.com/download/5/6/4/564196b7-dd01-468a-af21-1da16489f298/stac_arch.vsdx) for this architecture. ### Dataflow Download a [Visio file](https://download.microsoft.com/download/5/6/4/564196b7-dd01-468a-af21-1da16489f298/stac_data_flow.vsdx) for this dataflow.
At a high level, this deployment does the following:
- Deploys Azure API Management service and publishes the endpoint for STAC FastAPI. - Packages the code and its dependencies, builds the Docker container images, and pushes them to Azure Container Registry.
- :::image type="content" source="media/stac-deploy.png" alt-text="Diagram of STAC deployment services." lightbox="media/stac-deploy.png":::
+ :::image type="content" source="media/stac-deploy.png" alt-text="Diagram of the services in a sample STAC deployment." lightbox="media/stac-deploy.png":::
Download a [Visio file](https://download.microsoft.com/download/5/6/4/564196b7-dd01-468a-af21-1da16489f298/stac_deploy.vsdx) for this implementation.
orbital Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/overview.md
Title: Azure Orbital Ground Station - Overview
-description: Azure Orbital Ground Station is a cloud-based ground station as a service that allows you to streamline your operations by ingesting space data directly into Azure.
+description: Azure Orbital Ground Station is a cloud-based ground station as a service. Use the service to streamline operations by ingesting space data directly into Azure.
Last updated 12/06/2022-+ # Customer intent: As a satellite operator, I want to ingest data from my satellite into Azure.
orbital Prepare For Launch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/prepare-for-launch.md
Title: Azure Orbital Ground Station - Prepare for launch and early operations
-description: Learn how to get ready for Launch with Azure Orbital.
+ Title: Prepare for launch and early operations
+description: Steps to prepare for an upcoming satellite launch and acquire your satellite with Azure Orbital Ground Station.
Last updated 10/12/2023-+
+# Customer intent: As a satellite operator, I want to be prepared to operate my account.
# Prepare for launch and early operations
orbital Prepare Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/prepare-network.md
Title: Prepare network to send and receive data - Azure Orbital
-description: Learn how to deliver and receive data from Azure Orbital.
+description: Learn how to deliver and receive data from Azure Orbital. Ensure your subnet and Azure Orbital Ground Station resources are configured correctly.
Last updated 07/12/2022-+ # Prepare the network for Azure Orbital Ground Station integration
orbital Receive Real Time Telemetry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/receive-real-time-telemetry.md
Title: Receive real-time telemetry - Azure Orbital
-description: Learn how to receive real-time telemetry during contacts.
+description: Learn how to receive real-time telemetry during contacts. Configure your contact profile to send telemetry events to Azure Event Hubs.
Last updated 07/12/2022-+ -
+
# Receive real-time antenna telemetry Azure Orbital Ground Station emits antenna telemetry events that can be used to analyze ground station operation during a contact. You can configure your contact profile to send telemetry events to [Azure Event Hubs](../event-hubs/event-hubs-about.md).
orbital Register Spacecraft https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/register-spacecraft.md
Title: Azure Orbital Ground Station - register spacecraft
-description: Learn how to register a spacecraft.
+description: Learn how to register a spacecraft. To contact a satellite, it must be registered and authorized as a spacecraft resource with Azure Orbital Ground Station.
Last updated 07/13/2022-+ # Customer intent: As a satellite operator, I want to ingest data from my satellite into Azure.
orbital Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/resource-graph-samples.md
Title: Azure Resource Graph Sample Queries for Azure Orbital Ground Station
-description: Provides a collection of Azure Resource Graph sample queries for Azure Orbital Ground Station.
+ Title: Sample Queries for Azure Orbital Ground Station
+description: Provides a sample collection of Azure Resource Graph queries to be used for Azure Orbital Ground Station.
Last updated 09/08/2023-+
+#Customer intent: As a satellite operator, I want to query information about my contacts.
-# Azure Resource Graph sample queries for Azure Orbital Ground Station
+# Sample Resource Graph Queries for Azure Orbital Ground Station
This page is a collection of [Azure Resource Graph](../governance/resource-graph/overview.md) sample queries for Azure Orbital Ground Station. For a complete list of Azure Resource Graph samples, see [Resource Graph samples by Category](../governance/resource-graph/samples/samples-by-category.md) and [Resource Graph samples by Table](../governance/resource-graph/samples/samples-by-table.md).
OrbitalResources
| project Contact = tostring(name), Groundstation = tostring(properties.groundStationName), Spacecraft, Contact_Profile, Reservation_Start_Time = todatetime(properties.reservationStartTime), Reservation_End_Time = todatetime(properties.reservationEndTime), Status=properties.status, Provisioning_Status=properties.provisioningState ```
-### List Contacts from Past ‘x’ Days
+### List Contacts from Past "x" Days
#### Sorted by reservation start time
OrbitalResources
| project Contact = tostring(name), Groundstation = tostring(properties.groundStationName), Spacecraft, Contact_Profile, Reservation_Start_Time = todatetime(properties.reservationStartTime), Reservation_End_Time = todatetime(properties.reservationEndTime), Status=properties.status, Provisioning_Status=properties.provisioningState ```
-#### On a specified ground station
+#### Filtered for a specified ground station
This query will help customers track all the past contacts sorted by reservation start time for a specified ground station.
OrbitalResources
| project Contact = tostring(name), Groundstation = tostring(properties.groundStationName), Spacecraft, Contact_Profile, Reservation_Start_Time = todatetime(properties.reservationStartTime), Reservation_End_Time = todatetime(properties.reservationEndTime), Status=properties.status, Provisioning_Status=properties.provisioningState ```
-#### On specified contact profile
+#### Filtered for a specified contact profile
This query will help customers track all the past contacts sorted by reservation start time for a specified contact profile.
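As a rough sketch of how one of these queries might be run from the Azure CLI (assuming the `resource-graph` extension is installed, that contacts use the resource type `microsoft.orbital/spacecrafts/contacts`, and taking the past three days as 'x'):

```azurecli
# One-time setup for the Resource Graph CLI extension
az extension add --name resource-graph

# Contacts from the past 3 days, newest reservation start time first
az graph query -q "OrbitalResources \
| where type =~ 'microsoft.orbital/spacecrafts/contacts' \
| where todatetime(properties.reservationStartTime) >= ago(3d) \
| sort by todatetime(properties.reservationStartTime) desc \
| project Contact = tostring(name), Groundstation = tostring(properties.groundStationName), Reservation_Start_Time = todatetime(properties.reservationStartTime), Status = properties.status"
```

Adding a clause such as `| where properties.groundStationName == '<ground station name>'` would narrow the result to the ground-station-filtered variant described above.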
orbital Sar Reference Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/sar-reference-architecture.md
Title: Process Synthetic Aperture Radar (SAR) data - Azure Orbital Analytics
+ Title: Process Synthetic Aperture Radar (SAR) data
description: View a reference architecture that enables processing SAR/Remote Sensing data on Azure by using Apache Spark on Azure Synapse. -+ Last updated 10/20/2022-+ # Process Synthetic Aperture Radar (SAR) data in Azure
container, and then run at scale. While the performance of processing a given im
pipeline that utilizes vendor provided binaries and/or open-source software. While processing of any individual file or image won't occur any faster, many files can be processed simultaneously in parallel. With the flexibility of AKS, each step in the pipeline can execute on the hardware best suited for the tool, for example, GPU, high core count, or increased memory. Raw products are received by a ground station application, which, in turn, writes the data into Azure Blob Storage. Using an Azure Event Grid subscription, a notification is supplied to Azure Event Hubs when a new product image is written to blob storage. Argo Events, running on Azure Kubernetes Service, subscribes to the Azure Event Hubs notification and upon receipt of the event, triggers an Argo Workflows workflow to process the image.
Remote Sensing Data is sent to a ground station. The ground station app collects
Under this approach using Apache Spark, we glue the library that contains the algorithms to Java through Java Native Access (JNA). JNA requires you to define interfaces for your native code and does the heavy lifting of converting your data to and from the native library into usable Java types. Without any major rewriting, we can distribute the computation of data across nodes instead of a single machine. Typical Spark execution under this model looks as follows. ## Considerations
orbital Schedule Contact https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/schedule-contact.md
Title: Azure Orbital Ground Station - schedule a contact
-description: Learn how to schedule a contact.
+description: Learn how to schedule a contact with your satellite for data retrieval and delivery on Azure Orbital Ground Station.
Last updated 12/06/2022-+ # Customer intent: As a satellite operator, I want to schedule a contact to ingest data from my satellite into Azure.
orbital Spacecraft Object https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/spacecraft-object.md
Title: Spacecraft resource - Azure Orbital Ground Station
-description: Learn about how you can represent your spacecraft details in Azure Orbital Ground Station.
+description: Learn about how you can represent your spacecraft details--Ephemeris, Links, and Authorizations--in Azure Orbital Ground Station.
Last updated 07/13/2022-+ #Customer intent: As a satellite operator or user, I want to understand what the spacecraft resource does so I can manage my mission.
orbital Update Tle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/update-tle.md
Title: Azure Orbital Ground Station - update spacecraft TLE
-description: Update the TLE of an existing spacecraft resource.
+description: Update the spacecraft Two-Line Element (TLE) of an existing spacecraft resource before you schedule a contact.
Last updated 12/06/2022-+ # Customer intent: As a satellite operator, I want to ingest data from my satellite into Azure.
orbital Virtual Rf Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/virtual-rf-tutorial.md
Title: Azure Orbital Ground Station - Understand virtual RF through demodulation of Aqua using GNU Radio
+ Title: Virtual RF via demodulation of Aqua using GNU Radio
description: Learn how to use virtual RF (vRF) instead of a managed modem. Receive a raw RF signal from NASA's Aqua public satellite and process it in GNU Radio. Last updated 04/21/2023-+ # Customer intent: As an Azure Orbital customer I want to understand documentation for virtual RF.
reliability Cross Region Replication Azure No Pair https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/cross-region-replication-azure-no-pair.md
To achieve geo-replication in nonpaired regions:
## Azure Virtual Machines
-To achieve geo-replication in nonpaired regions, [Azure Site Recovery](/azure/site-recovery/azure-to-azure-enable-global-disaster-recovery) service can be sued. Azure Site Recovery is the Disaster Recovery service from Azure that provides business continuity and disaster recovery by replicating workloads from the primary location to the secondary location. The secondary location can be a nonpaired region if supported by Azure Site Recovery.
+To achieve geo-replication in nonpaired regions, use the [Azure Site Recovery](/azure/site-recovery/azure-to-azure-enable-global-disaster-recovery) service. Azure Site Recovery is the Disaster Recovery service from Azure that provides business continuity and disaster recovery by replicating workloads from the primary location to the secondary location. The secondary location can be a nonpaired region if supported by Azure Site Recovery.
sentinel Data Connectors Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors-reference.md
For more information about the codeless connector platform, see [Create a codele
- [Microsoft Purview Information Protection](data-connectors/microsoft-purview-information-protection.md) - [Network Security Groups](data-connectors/network-security-groups.md) - [Microsoft 365](data-connectors/microsoft-365.md)-- [Security Events via Legacy Agent](data-connectors/security-events-via-legacy-agent.md) - [Windows Security Events via AMA](data-connectors/windows-security-events-via-ama.md) - [Azure Service Bus](data-connectors/azure-service-bus.md) - [Azure Stream Analytics](data-connectors/azure-stream-analytics.md)
For more information about the codeless connector platform, see [Create a codele
- [MailRisk by Secure Practice (using Azure Functions)](data-connectors/mailrisk-by-secure-practice.md)
-## SecurityBridge
--- [SecurityBridge Threat Detection for SAP](data-connectors/securitybridge-threat-detection-for-sap.md)- ## Senserva, LLC - [SenservaPro (Preview)](data-connectors/senservapro.md)
sentinel Security Events Via Legacy Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/security-events-via-legacy-agent.md
- Title: "Security Events via Legacy Agent connector for Microsoft Sentinel"
-description: "Learn how to install the connector Security Events via Legacy Agent to connect your data source to Microsoft Sentinel."
-- Previously updated : 04/26/2024-----
-# Security Events via Legacy Agent connector for Microsoft Sentinel
-
-You can stream all security events from the Windows machines connected to your Microsoft Sentinel workspace using the Windows agent. This connection enables you to view dashboards, create custom alerts, and improve investigation. This gives you more insight into your organization's network and improves your security operation capabilities. For more information, see the [Microsoft Sentinel documentation](https://go.microsoft.com/fwlink/p/?linkid=2220093&wt.mc_id=sentinel_dataconnectordocs_content_cnl_csasci).
-
-This is autogenerated content. For changes, contact the solution provider.
-
-## Connector attributes
-
-| Connector attribute | Description |
-| | |
-| **Log Analytics table(s)** | SecurityEvents<br/> |
-| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
-| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) |
--
-## Next steps
-
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-securityevents?tab=Overview) in the Azure Marketplace.
sentinel Securitybridge Threat Detection For Sap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/securitybridge-threat-detection-for-sap.md
- Title: "SecurityBridge Threat Detection for SAP connector for Microsoft Sentinel"
-description: "Learn how to install the connector SecurityBridge Threat Detection for SAP to connect your data source to Microsoft Sentinel."
-- Previously updated : 04/26/2024-----
-# SecurityBridge Threat Detection for SAP connector for Microsoft Sentinel
-
-SecurityBridge is the first and only holistic, natively integrated security platform, addressing all aspects needed to protect organizations running SAP from internal and external threats against their core business applications. The SecurityBridge platform is an SAP-certified add-on, used by organizations around the globe, and addresses the clients' need for advanced cybersecurity, real-time monitoring, compliance, code security, and patching to protect against internal and external threats. This Microsoft Sentinel Solution allows you to integrate SecurityBridge Threat Detection events from all your on-premises and cloud-based SAP instances into your security monitoring. Use this Microsoft Sentinel Solution to receive normalized and speaking security events, pre-built dashboards and out-of-the-box templates for your SAP security monitoring.
-
-This is autogenerated content. For changes, contact the solution provider.
-
-## Connector attributes
-
-| Connector attribute | Description |
-| | |
-| **Log Analytics table(s)** | SecurityBridgeLogs_CL<br/> |
-| **Data collection rules support** | Not currently supported |
-| **Supported by** | [Christoph Nagy](https://securitybridge.com/contact/) |
-
-## Query samples
-
-**Top 10 Event Names**
-
- ```kusto
-SecurityBridgeLogs_CL
-
- | extend Name = tostring(split(RawData, '|')[5])
-
- | summarize count() by Name
- | top 10 by count_
- ```
---
-## Vendor installation instructions
--
-> [!NOTE]
- > This data connector depends on a parser based on a Kusto Function to work as expected. [Follow these steps](https://aka.ms/sentinel-SecurityBridgeLogs-parser) to create the Kusto Functions alias, **SecurityBridgeLogs**
--
-> [!NOTE]
- > This data connector has been developed using SecurityBridge Application Platform 7.4.0.
-
-1. Install and onboard the agent for Linux or Windows
-
-This solution requires logs collection via an Microsoft Sentinel agent installation
-
-> The Sentinel agent is supported on the following Operating Systems:
-1. Windows Servers
-2. SUSE Linux Enterprise Server
-3. Redhat Linux Enterprise Server
-4. Oracle Linux Enterprise Server
-5. If you have the SAP solution installed on HPUX / AIX then you will need to deploy a log collector on one of the Linux options listed above and forward your logs to that collector
------
-2. Configure the logs to be collected
-
-Configure the custom log directory to be collected
---
-1. Select the link above to open your workspace advanced settings
-2. Click **+Add custom**
-3. Click **Browse** to upload a sample of a SecurityBridge SAP log file (e.g. AED_20211129164544.cef). Then, click **Next >**
-4. Select **New Line** as the record delimiter then click **Next >**
-5. Select **Windows** or **Linux** and enter the path to SecurityBridge logs based on your configuration. Example:
-
->**NOTE:** You can add as many paths as you want in the configuration.
-
-6. After entering the path, click the '+' symbol to apply, then click **Next >**
-7. Add **SecurityBridgeLogs** as the custom log Name and click **Done**
-
-3. Check logs in Microsoft Sentinel
-
-Open Log Analytics to check if the logs are received using the SecurityBridgeLogs_CL Custom log table.
-
->**NOTE:** It may take up to 30 minutes before new logs will appear in SecurityBridgeLogs_CL table.
---
-## Next steps
-
-For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/securitybridge1647511278080.securitybridge-sentinel-app-1?tab=Overview) in the Azure Marketplace.
storage Client Side Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/client-side-encryption.md
Previously updated : 08/07/2024 Last updated : 10/03/2024 ms.devlang: csharp
For a step-by-step tutorial that leads you through the process of encrypting blo
## About client-side encryption
-The Azure Blob Storage client library uses [AES](https://en.wikipedia.org/wiki/Advanced_Encryption_Standard) in order to encrypt user data. There are two versions of client-side encryption available in the client library:
+The Azure Blob Storage client library uses [Advanced Encryption Standard (AES)](https://en.wikipedia.org/wiki/Advanced_Encryption_Standard) to encrypt user data. There are two versions of client-side encryption available in the client library:
- Version 2 uses [Galois/Counter Mode (GCM)](https://en.wikipedia.org/wiki/Galois/Counter_Mode) mode with AES. - Version 1 uses [Cipher Block Chaining (CBC)](https://en.wikipedia.org/wiki/Block_cipher_mode_of_operation#Cipher-block_chaining_.28CBC.29) mode with AES.
Due to a security vulnerability discovered in the Blob Storage client library's
- If you need to use client-side encryption, then migrate your applications from client-side encryption v1 to client-side encryption v2.
-The following table summarizes the steps you'll need to take if you choose to migrate your applications to client-side encryption v2:
+The following table summarizes the steps to take if you choose to migrate your applications to client-side encryption v2:
| Client-side encryption status | Recommended actions | |||
-| Application is using client-side encryption a version of the client library that supports only client-side encryption v1. | Update your application to use a version of the client library that supports client-side encryption v2. See [SDK support matrix for client-side encryption](#sdk-support-matrix-for-client-side-encryption) for a list of supported versions. [Learn more...](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/storage/Azure.Storage.Blobs/AzureStorageNetMigrationV12.md)<br/><br/>Update your code to use client-side encryption v2. [Learn more...](#example-encrypting-and-decrypting-a-blob-with-client-side-encryption-v2)<br/><br/>Download any encrypted data to decrypt it, then reencrypt it with client-side encryption v2. [Learn more...](#reencrypt-previously-encrypted-data-with-client-side-encryption-v2) |
-| Application is using client-side encryption with a version of the client library that supports client-side encryption v2. | Update your code to use client-side encryption v2. [Learn more...](#example-encrypting-and-decrypting-a-blob-with-client-side-encryption-v2)<br/><br/>Download any encrypted data to decrypt it, then reencrypt it with client-side encryption v2. [Learn more...](#reencrypt-previously-encrypted-data-with-client-side-encryption-v2) |
+| Application is using client-side encryption with a version of the client library that supports only client-side encryption v1. | Update your application to use a version of the client library that supports client-side encryption v2. See [SDK support matrix for client-side encryption](#sdk-support-matrix-for-client-side-encryption) for a list of supported versions. [Learn more...](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/storage/Azure.Storage.Blobs/AzureStorageNetMigrationV12.md)<br/><br/>Update your code to use client-side encryption v2. [Learn more...](#example-encrypting-and-decrypting-a-blob-with-client-side-encryption-v2)<br/><br/>Download any encrypted data to decrypt it, then re-encrypt it with client-side encryption v2. [Learn more...](#re-encrypt-previously-encrypted-data-with-client-side-encryption-v2) |
+| Application is using client-side encryption with a version of the client library that supports client-side encryption v2. | Update your code to use client-side encryption v2. [Learn more...](#example-encrypting-and-decrypting-a-blob-with-client-side-encryption-v2)<br/><br/>Download any encrypted data to decrypt it, then re-encrypt it with client-side encryption v2. [Learn more...](#re-encrypt-previously-encrypted-data-with-client-side-encryption-v2) |
Additionally, Microsoft recommends that you take the following steps to help secure your data:
Additionally, Microsoft recommends that you take the following steps to help sec
### SDK support matrix for client-side encryption
-The following table shows which versions of the client libraries for .NET, Java, and Python support which versions of client-side encryption:
+The following table shows which versions of the client libraries for .NET, Java, and Python support different versions of client-side encryption:
| | .NET | Java | Python |
-|--|--|--|--|
+| | | | |
| **Client-side encryption v2 and v1** | [Versions 12.13.0 and later](https://www.nuget.org/packages/Azure.Storage.Blobs) | [Versions 12.18.0 and later](https://search.maven.org/artifact/com.azure/azure-storage-blob) | [Versions 12.13.0 and later](https://pypi.org/project/azure-storage-blob) | | **Client-side encryption v1 only** | Versions 12.12.0 and earlier | Versions 12.17.0 and earlier | Versions 12.12.0 and earlier |
-If your application is using client-side encryption with an earlier version of the .NET, Java, or Python client library, you must first upgrade your code to a version that supports client-side encryption v2. Next, you must decrypt and re-encrypt your data with client-side encryption v2. If necessary, you can use a version of the client library that supports client-side encryption v2 side-by-side with an earlier version of the client library while you are migrating your code. For code examples, see [Example: Encrypting and decrypting a blob with client-side encryption v2](#example-encrypting-and-decrypting-a-blob-with-client-side-encryption-v2).
+> [!NOTE]
+> Client-side encryption v2.1 is available in the Java SDK for versions 12.27.0 and later. This version allows you to configure the region length for authenticated encryption, from 16 bytes to 1 GiB. For more information, see the Java example at [Example: Encrypting and decrypting a blob with client-side encryption v2](#example-encrypting-and-decrypting-a-blob-with-client-side-encryption-v2).
+
+If your application is using client-side encryption with an earlier version of the .NET, Java, or Python client library, you must first upgrade your code to a version that supports client-side encryption v2. Next, you must decrypt and re-encrypt your data with client-side encryption v2. If necessary, you can use a version of the client library that supports client-side encryption v2 side-by-side with an earlier version of the client library while you're migrating your code. For code examples, see [Example: Encrypting and decrypting a blob with client-side encryption v2](#example-encrypting-and-decrypting-a-blob-with-client-side-encryption-v2).
## How client-side encryption works
Decryption via the envelope technique works as follows:
1. The Azure Storage client library assumes that the user is managing the KEK either locally or in an Azure Key Vault. The user doesn't need to know the specific key that was used for encryption. Instead, a key resolver that resolves different key identifiers to keys can be set up and used. 1. The client library downloads the encrypted data along with any encryption material that is stored in Azure Storage.
-1. The wrapped CEK) is then unwrapped (decrypted) using the KEK. The client library doesn't have access to the KEK during this process, but only invokes the unwrapping algorithm of the Azure Key Vault or other key store.
+1. The wrapped CEK is then unwrapped (decrypted) using the KEK. The client library doesn't have access to the KEK during this process, but only invokes the unwrapping algorithm of the Azure Key Vault or other key store.
1. The client library uses the CEK to decrypt the encrypted user data. ### Encryption/decryption on blob upload/download
-The Blob Storage client library supports encryption of whole blobs only on upload. For downloads, both complete and range downloads are supported. Client-side encryption v2 chunks data into 4MB buffered authenticated encryption blocks which can only be transformed whole. To adjust the chunk size, ensure you are using the most recent version of the SDK that supports client-side encryption v2.1. The region length is configurable from 16 bytes up to 1 GiB.
+The Blob Storage client library supports encryption of whole blobs only on upload. For downloads, both complete and range downloads are supported. Client-side encryption v2 chunks data into 4 MiB buffered authenticated encryption blocks which can only be transformed whole. To adjust the chunk size, make sure you're using the most recent version of the SDK that supports client-side encryption v2.1. The region length is configurable from 16 bytes up to 1 GiB.
During encryption, the client library generates a random initialization vector (IV) of 16 bytes and a random CEK of 32 bytes, and performs envelope encryption of the blob data using this information. The wrapped CEK and some additional encryption metadata are then stored as blob metadata along with the encrypted blob. When a client downloads an entire blob, the wrapped CEK is unwrapped and used together with the IV to return the decrypted data to the client.
-Downloading an arbitrary range in the encrypted blob involves adjusting the range provided by users in order to get a small amount of additional data that can be used to successfully decrypt the requested range.
+Downloading an arbitrary range in the encrypted blob involves adjusting the range provided by users to get a small amount of additional data that can be used to successfully decrypt the requested range.
All blob types (block blobs, page blobs, and append blobs) can be encrypted/decrypted using this scheme.
All blob types (block blobs, page blobs, and append blobs) can be encrypted/decr
The code example in this section shows how to use client-side encryption v2 to encrypt and decrypt a blob. > [!IMPORTANT]
-> If you have data that has been previously encrypted with client-side encryption v1, then you'll need to decrypt that data and reencrypt it with client-side encryption v2. See the guidance and sample for your client library below.
+> If you have data that has been previously encrypted with client-side encryption v1, then you'll need to decrypt that data and re-encrypt it with client-side encryption v2. See the guidance and sample for your client library below.
### [.NET](#tab/dotnet)
-To use client-side encryption from your .NET code, reference the [Blob Storage client library](/dotnet/api/overview/azure/storage.blobs-readme). Make sure that you are using version 12.13.0 or later. If you need to migrate from version 11.x to version 12.13.0, see the [Migration guide](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/storage/Azure.Storage.Blobs/AzureStorageNetMigrationV12.md).
+To use client-side encryption from your .NET code, reference the [Blob Storage client library](/dotnet/api/overview/azure/storage.blobs-readme). Make sure that you're using version 12.13.0 or later. If you need to migrate from version 11.x to version 12.13.0, see the [Migration guide](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/storage/Azure.Storage.Blobs/AzureStorageNetMigrationV12.md).
Two additional packages are required for Azure Key Vault integration for client-side encryption: - The **Azure.Core** package provides the `IKeyEncryptionKey` and `IKeyEncryptionKeyResolver` interfaces. The Blob Storage client library for .NET already defines this assembly as a dependency.-- The **Azure.Security.KeyVault.Keys** package (version 4.x and later) provides the Key Vault REST client and the cryptographic clients that are used with client-side encryption. You'll need to ensure that this package is referenced in your project if you're using Azure Key Vault as your key store.
+- The **Azure.Security.KeyVault.Keys** package (version 4.x and later) provides the Key Vault REST client and the cryptographic clients that are used with client-side encryption. Make sure that this package is referenced in your project if you're using Azure Key Vault as your key store.
Azure Key Vault is designed for high-value master keys, and throttling limits per key vault reflect this design. As of version 4.1.0 of Azure.Security.KeyVault.Keys, the `IKeyEncryptionKeyResolver` interface doesn't support key caching. Should caching be necessary due to throttling, you can use the approach demonstrated in [this sample](/samples/azure/azure-sdk-for-net/azure-key-vault-proxy/) to inject a caching layer into an `Azure.Security.KeyVault.Keys.Cryptography.KeyResolver` instance. Developers can provide a key, a key resolver, or both a key and a key resolver. Keys are identified using a key identifier that provides the logic for wrapping and unwrapping the CEK. A key resolver is used to resolve a key during the decryption process. The key resolver defines a resolve method that returns a key given a key identifier. The resolver provides users the ability to choose between multiple keys that are managed in multiple locations.
-On encryption, the key is used always and the absence of a key will result in an error.
+On encryption, the key is always used and the absence of a key results in an error.
On decryption, if the key is specified and its identifier matches the required key identifier, that key is used for decryption. Otherwise, the client library attempts to call the resolver. If there's no resolver specified, then the client library throws an error. If a resolver is specified, then the key resolver is invoked to get the key. If the resolver is specified but doesn't have a mapping for the key identifier, then the client library throws an error.
-To use client-side encryption, create a **ClientSideEncryptionOptions** object and set it on client creation with **SpecializedBlobClientOptions**. You can't set encryption options on a per-API basis. Everything else will be handled by the client library internally.
+To use client-side encryption, create a **ClientSideEncryptionOptions** object and set it on client creation with **SpecializedBlobClientOptions**. You can't set encryption options on a per-API basis. Everything else is handled by the client library internally.
```csharp // Your key and key resolver instances, either through Azure Key Vault SDK or an external implementation.
ClientSideEncryptionOptions encryptionOptions;
BlobClient clientSideEncryptionBlob = plaintextBlob.WithClientSideEncryptionOptions(encryptionOptions); ```
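For orientation, here's a minimal, hedged sketch of how those pieces can fit together with Azure Key Vault. The vault URI, key name, account URL, and container and blob names are placeholders, and `RSA-OAEP` is just one possible key wrap algorithm:

```csharp
using System;
using Azure.Core.Cryptography;
using Azure.Identity;
using Azure.Security.KeyVault.Keys;
using Azure.Security.KeyVault.Keys.Cryptography;
using Azure.Storage;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Specialized;

// Resolve a key encryption key (KEK) and a key resolver from Azure Key Vault.
var credential = new DefaultAzureCredential();
var keyClient = new KeyClient(new Uri("https://<key-vault-name>.vault.azure.net/"), credential);
KeyVaultKey key = keyClient.GetKey("<key-name>");

IKeyEncryptionKey keyEncryptionKey = new CryptographyClient(key.Id, credential);
IKeyEncryptionKeyResolver keyResolver = new KeyResolver(credential);

// Configure client-side encryption v2 and apply it at client creation time.
var encryptionOptions = new ClientSideEncryptionOptions(ClientSideEncryptionVersion.V2_0)
{
    KeyEncryptionKey = keyEncryptionKey,
    KeyResolver = keyResolver,
    KeyWrapAlgorithm = "RSA-OAEP"
};

var clientOptions = new SpecializedBlobClientOptions { ClientSideEncryption = encryptionOptions };

BlobClient blobClient = new BlobServiceClient(
        new Uri("https://<storage-account-name>.blob.core.windows.net/"), credential, clientOptions)
    .GetBlobContainerClient("<container-name>")
    .GetBlobClient("<blob-name>");

// The content is encrypted on the client before it leaves the process.
blobClient.Upload(BinaryData.FromString("sample data").ToStream(), overwrite: true);
```

Because the options are set on the service client, every blob client derived from it encrypts uploads and decrypts downloads transparently.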
-After you update your code to use client-side encryption v2, make sure that you deencrypt and reencrypt any existing encrypted data, as described in [Reencrypt previously encrypted data with client-side encryption v2](#reencrypt-previously-encrypted-data-with-client-side-encryption-v2).
+After you update your code to use client-side encryption v2, make sure that you decrypt and re-encrypt any existing encrypted data, as described in [Re-encrypt previously encrypted data with client-side encryption v2](#re-encrypt-previously-encrypted-data-with-client-side-encryption-v2).
### [Java](#tab/java)
-To use client-side encryption from your Java code, reference the [Blob Storage client library](/jav).
+To use client-side encryption from your Java code, reference the [Blob Storage client library](/jav).
+
+To use client-side encryption v2.1, include a dependency on `azure-storage-blob-cryptography` version 12.27.0 or later. Client-side encryption v2 has a fixed chunk size of 4 MiB, while v2.1 includes the ability to configure the region length for authenticated encryption. The region length is configurable from 16 bytes up to 1 GiB.
+
+To use client-side encryption v2.1, create a [BlobClientSideEncryptionOptions](/java/api/com.azure.storage.blob.specialized.cryptography.blobclientsideencryptionoptions) instance and optionally set the region length using the `setAuthenticatedRegionDataLengthInBytes` method. Then pass the encryption options to the [EncryptedBlobClientBuilder](/java/api/com.azure.storage.blob.specialized.cryptography.encryptedblobclientbuilder) constructor.
+
+Add the following `import` directives to your code:
+
+```java
+import com.azure.core.cryptography.*;
+import com.azure.storage.blob.specialized.cryptography.*;
+```
+
+The following code example shows how to use client-side encryption v2.1 to encrypt a blob for upload:
+
+```java
+// Your key instance, either through Azure Key Vault SDK or an external implementation
+AsyncKeyEncryptionKey keyEncryptionKey;
+AsyncKeyEncryptionKeyResolver keyResolver;
+String keyWrapAlgorithm = "algorithm name";
+
+// Sets the region length to 4 KiB
+BlobClientSideEncryptionOptions encryptionOptions = new BlobClientSideEncryptionOptions()
+ .setAuthenticatedRegionDataLengthInBytes(1024 * 4);
+
+EncryptedBlobClient ebc = new EncryptedBlobClientBuilder(EncryptionVersion.V2_1)
+ .blobClient(client)
+ .key(keyEncryptionKey, keyWrapAlgorithm)
+ .keyResolver(keyResolver)
+ .credential(new DefaultAzureCredentialBuilder().build())
+ .endpoint("https://<storage-account-name>.blob.core.windows.net/")
+ .clientSideEncryptionOptions(encryptionOptions)
+ .buildEncryptedBlobClient();
+
+ebc.upload(BinaryData.fromString("sample data"));
+```
-For sample code that shows how to use client-side encryption v2 from Java, see [ClientSideEncryptionV2Uploader.java](https://github.com/wastore/azure-storage-samples-for-java/blob/f1621c807a4b2be8b6e04e226cbf0a288468d7b4/ClientSideEncryptionMigration/src/main/java/ClientSideEncryptionV2Uploader.java).
+To learn more about the library used for client-side encryption, see [Azure Storage Blobs Cryptography client library for Java](/java/api/overview/azure/storage-blob-cryptography-readme).
-After you update your code to use client-side encryption v2, make sure that you deencrypt and reencrypt any existing encrypted data, as described in [Reencrypt previously encrypted data with client-side encryption v2](#reencrypt-previously-encrypted-data-with-client-side-encryption-v2).
+If you're migrating from client-side encryption v1, make sure that you decrypt and re-encrypt any existing encrypted data, as described in [Re-encrypt previously encrypted data with client-side encryption v2](#re-encrypt-previously-encrypted-data-with-client-side-encryption-v2).
### [Python](#tab/python)
-To use client-side encryption from your Python code, reference the [Blob Storage client library](/python/api/overview/azure/storage-blob-readme). Make sure that you are using version 12.13.0 or later. If you need to migrate from an earlier version of the Python client library, see the [Blob Storage migration guide for Python](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/storage/azure-storage-blob/migration_guide.md).
+To use client-side encryption from your Python code, reference the [Blob Storage client library](/python/api/overview/azure/storage-blob-readme). Make sure that you're using version 12.13.0 or later. If you need to migrate from an earlier version of the Python client library, see the [Blob Storage migration guide for Python](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/storage/azure-storage-blob/migration_guide.md).
The following example shows how to use client-side encryption v2 from Python:
blob_client = blob_service_client.get_blob_client(container=container_name, blob
blob_client.upload_blob(stream, overwrite=OVERWRITE_EXISTING) ```
-After you update your code to use client-side encryption v2, make sure that you deencrypt and reencrypt any existing encrypted data, as described in [Reencrypt previously encrypted data with client-side encryption v2](#reencrypt-previously-encrypted-data-with-client-side-encryption-v2).
+After you update your code to use client-side encryption v2, make sure that you decrypt and re-encrypt any existing encrypted data, as described in [Re-encrypt previously encrypted data with client-side encryption v2](#re-encrypt-previously-encrypted-data-with-client-side-encryption-v2).
-## Reencrypt previously encrypted data with client-side encryption v2
+## Re-encrypt previously encrypted data with client-side encryption v2
-Any data that was previously encrypted with client-side encryption v1 must be decrypted and then reencrypted with client-side encryption v2 to mitigate the security vulnerability. Decryption requires downloading the data and reencryption requires reuploading it to Blob Storage.
+Any data that was previously encrypted with client-side encryption v1 must be decrypted and then re-encrypted with client-side encryption v2 to mitigate the security vulnerability. Decryption requires downloading the data and re-encryption requires reuploading it to Blob Storage.
### [.NET](#tab/dotnet)
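As a rough illustration (not the full migration sample), the .NET round trip can look like the following sketch. It assumes the `encryptionOptions` instance configured for v2 in the previous section, that your SDK version can still decrypt v1 blobs from their stored encryption metadata, and that every blob in the container was client-side encrypted:

```csharp
using System;
using System.IO;
using Azure.Identity;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;
using Azure.Storage.Blobs.Specialized;

// encryptionOptions: a ClientSideEncryptionOptions configured for ClientSideEncryptionVersion.V2_0.
var clientOptions = new SpecializedBlobClientOptions { ClientSideEncryption = encryptionOptions };

var containerClient = new BlobContainerClient(
    new Uri("https://<storage-account-name>.blob.core.windows.net/<container-name>"),
    new DefaultAzureCredential(),
    clientOptions);

foreach (BlobItem blobItem in containerClient.GetBlobs())
{
    BlobClient blobClient = containerClient.GetBlobClient(blobItem.Name);

    using var buffer = new MemoryStream();
    blobClient.DownloadTo(buffer);               // decrypts using the blob's encryption metadata
    buffer.Position = 0;
    blobClient.Upload(buffer, overwrite: true);  // re-encrypts with client-side encryption v2
}
```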
storage Storage Quickstart Blobs Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-powershell.md
Upload as many files as you like before continuing.
## List the blobs in a container
-Get a list of blobs in the container by using [Get-AzStorageBlob](/powershell/module/az.storage/get-azstorageblob). This example shows just the names of the blobs uploaded.
+Get a list of blobs in the container by using [Get-AzStorageBlob](/powershell/module/az.storage/get-azstorageblob). This example lists the names of the blobs uploaded.
-```azurepowershell-intereactive
+```azurepowershell-interactive
Get-AzStorageBlob -Container $ContainerName -Context $Context | Select-Object -Property Name ```
storage Redundancy Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/redundancy-migration.md
As the conversion request is evaluated and processed, the status should progress
| In Progress<sup>1</sup> | The conversion is in progress. | | Completed<br>**- or -**</br>Failed<sup>2</sup> | The conversion is completed successfully.<br>**- or -**</br>The conversion failed. |
-<sup>1</sup> Once initiated, the conversion could take up to 72 hours to begin. If the conversion doesn't enter the "In Progress" status within 96 hours of initiating the request, submit a support request to Microsoft to determine why. For more information about the timing of a customer-initiated conversion, see [Timing and frequency](#timing-and-frequency).<br />
+<sup>1</sup> After initiation, a conversion typically begins within 72 hours but might take longer in some cases. For more information about the timing of a customer-initiated conversion, see [Timing and frequency](#timing-and-frequency).<br />
<sup>2</sup> If the conversion fails, submit a support request to Microsoft to determine the reason for the failure.<br /> > [!NOTE]
The following table provides an overview of redundancy options available for sto
| ZRS Classic<sup>4</sup><br /><sub>(available in standard general purpose v1 accounts)</sub> | &#x2705; | | | | |
-<sup>1</sup> Conversion for premium file shares is only available by [opening a support request](#support-initiated-conversion); [Customer-initiated conversion](#customer-initiated-conversion) isn't currently supported.<br />
+<sup>1</sup> Conversion for premium file shares is available by [opening a support request](#support-initiated-conversion). Customer-initiated conversion can be performed using either [PowerShell](redundancy-migration.md?tabs=powershell#customer-initiated-conversion) or the [Azure CLI](redundancy-migration.md?tabs=azure-cli#customer-initiated-conversion).<br />
<sup>2</sup> Managed disks are available for LRS and ZRS, though ZRS disks have some [limitations](/azure/virtual-machines/disks-redundancy#limitations). If an LRS disk is regional (no zone specified), it can be converted by [changing the SKU](/azure/virtual-machines/disks-convert-types). If an LRS disk is zonal, then it can only be manually migrated by following the process in [Migrate your managed disks](../../reliability/migrate-vm.md#migrate-your-managed-disks). You can store snapshots and images for standard SSD managed disks on standard HDD storage and [choose between LRS and ZRS options](https://azure.microsoft.com/pricing/details/managed-disks/). For information about integration with availability sets, see [Introduction to Azure managed disks](/azure/virtual-machines/managed-disks-overview#integration-with-availability-sets).<br /> <sup>3</sup> If your storage account is v1, you need to upgrade it to v2 before performing a conversion. To learn how to upgrade your v1 account, see [Upgrade to a general-purpose v2 storage account](storage-account-upgrade.md).<br /> <sup>4</sup> ZRS Classic storage accounts are deprecated. For information about converting ZRS Classic accounts, see [Converting ZRS Classic accounts](#converting-zrs-classic-accounts).<br />
You can't convert storage accounts to zone-redundancy (ZRS, GZRS or RA-GZRS) if
After an account failover to the secondary region, it's possible to initiate a failback from the new primary back to the new secondary with PowerShell or Azure CLI (version 2.30.0 or later). [Initiate the failover](storage-initiate-account-failover.md#initiate-the-failover).
-If you performed a customer-managed account failover to recover from an outage for your GRS or RA-GRS account, the account becomes locally redundant (LRS) in the new primary region after the failover. Conversion to ZRS or GZRS for an LRS account resulting from a failover isn't supported, even for so-called failback operations. For example, if you perform an account failover from RA-GRS to LRS in the secondary region, and then configure it again as RA-GRS, it remains LRS in the new secondary region (the original primary). If you then perform another account failover to failback to the original primary region, it remains LRS again in the original primary. In this case, you can't perform a conversion to ZRS, GZRS or RA-GZRS in the primary region. Instead, perform a manual migration to add zone-redundancy.
+If you performed a customer-managed account failover to recover from an outage for your GRS or RA-GRS account, the account becomes locally redundant (LRS) in the new primary region after the failover. Conversion to ZRS or GZRS for an LRS account resulting from a failover isn't supported. Instead, perform a manual migration to add zone-redundancy.
## Downtime requirements
If you choose to perform a manual migration, downtime is required but you have m
## Timing and frequency
-If you initiate a zone-redundancy [conversion](#customer-initiated-conversion) from the Azure portal, the conversion process could take up to 72 hours to begin. It could take longer to start if you [request a conversion by opening a support request](#support-initiated-conversion). If a customer-initiated conversion doesn't enter the "In Progress" status within 96 hours of initiating the request, submit a support request to Microsoft to determine why. To monitor the progress of a customer-initiated conversion, see [Monitoring customer-initiated conversion progress](#monitoring-customer-initiated-conversion-progress).
+If you initiate a zone-redundancy [conversion](#customer-initiated-conversion) from the Azure portal, the conversion process could take up to 72 hours to begin. It could take longer to start if you [request a conversion by opening a support request](#support-initiated-conversion). To monitor the progress of a customer-initiated conversion, see [Monitoring customer-initiated conversion progress](#monitoring-customer-initiated-conversion-progress).
> [!IMPORTANT] > There is no SLA for completion of a conversion. If you need more control over when a conversion begins and finishes, consider a [Manual migration](#manual-migration). Generally, the more data you have in your account, the longer it takes to replicate that data to other zones or regions.
storage Storage Disaster Recovery Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-disaster-recovery-guidance.md
Microsoft strives to ensure that Azure services are always available. However, u
- [Failover](#plan-for-failover) - [Designing applications for high availability](#design-for-high-availability)
-This article describes the options available for globally redundant storage accounts, and provides recommendations for developing highly available applications and testing your disaster recovery plan.
+This article describes the options available for geo-redundant storage accounts, and provides recommendations for developing highly available applications and testing your disaster recovery plan.
## Choose the right redundancy option
By comparison, zone-redundant storage (ZRS) retains a copy of a storage account
### Geo-redundant storage and failover
-Geo-redundant storage (GRS), geo-zone-redundant storage (GZRS), and read-access geo-zone-redundant storage (RA-GZRS) are examples of globally redundant storage options. When configured to use globally redundant storage (GRS, GZRS, and RA-GZRS), Azure copies your data asynchronously to a secondary geographic region. These regions are located hundreds, or even thousands of miles away. This level of redundancy allows you to recover your data if there's an outage throughout the entire primary region.
+Geo-redundant storage (GRS), geo-zone-redundant storage (GZRS), and read-access geo-zone-redundant storage (RA-GZRS) are examples of geographically redundant storage options. When configured to use geo-redundant storage (GRS, GZRS, and RA-GZRS), Azure copies your data asynchronously to a secondary geographic region. These regions are located hundreds, or even thousands of miles away. This level of redundancy allows you to recover your data if there's an outage throughout the entire primary region.
-Unlike LRS and ZRS, globally redundant storage also provides support for an unplanned failover to a secondary region if there's an outage in the primary region. During the failover process, DNS (Domain Name System) entries for your storage account service endpoints are automatically updated such that the secondary region's endpoints become the new primary endpoints. Once the unplanned failover is complete, clients can begin writing to the new primary endpoints.
+Unlike LRS and ZRS, geo-redundant storage also provides support for an unplanned failover to a secondary region if there's an outage in the primary region. During the failover process, DNS (Domain Name System) entries for your storage account service endpoints are automatically updated such that the secondary region's endpoints become the new primary endpoints. Once the unplanned failover is complete, clients can begin writing to the new primary endpoints.
Read-access geo-redundant storage (RA-GRS) and read-access geo-zone-redundant storage (RA-GZRS) also provide geo-redundant storage, but offer the added benefit of read access to the secondary endpoint. These options are ideal for applications designed for high availability business-critical applications. If the primary endpoint experiences an outage, applications configured for read access to the secondary region can continue to operate. Microsoft recommends RA-GZRS for maximum availability and durability of your storage accounts.
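For example, the Blob Storage client library can be configured to retry read operations against the secondary endpoint. A minimal sketch (the account name is a placeholder):

```csharp
using System;
using Azure.Identity;
using Azure.Storage.Blobs;

// Point the client at the primary endpoint, and register the RA-GRS/RA-GZRS
// secondary endpoint as a fallback for read requests.
var options = new BlobClientOptions
{
    GeoRedundantSecondaryUri = new Uri("https://<storage-account-name>-secondary.blob.core.windows.net")
};

var serviceClient = new BlobServiceClient(
    new Uri("https://<storage-account-name>.blob.core.windows.net"),
    new DefaultAzureCredential(),
    options);

// Reads (for example, listing containers) retry against the secondary endpoint
// if the primary region can't be reached.
foreach (var container in serviceClient.GetBlobContainers())
{
    Console.WriteLine(container.Name);
}
```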
The following features and services aren't supported for customer-managed failov
- Azure File Sync doesn't support customer-managed account failover. Storage accounts used as cloud endpoints for Azure File Sync shouldn't be failed over. Failover disrupts file sync and might cause unexpected data loss of newly tiered files. For more information, see [Best practices for disaster recovery with Azure File Sync](../file-sync/file-sync-disaster-recovery-best-practices.md#geo-redundancy).
- A storage account containing premium block blobs can't be failed over. Storage accounts that support premium block blobs don't currently support geo-redundancy.
- Customer-managed failover isn't supported for either the source or the destination account in an [object replication policy](../blobs/object-replication-overview.md).
-- Network File System (NFS) 3.0 (NFSv3) isn't supported for storage account failover. You can't create a storage account configured for global-redundancy with NFSv3 enabled.
+- Network File System (NFS) 3.0 (NFSv3) isn't supported for storage account failover. You can't create a storage account configured for geo-redundancy with NFSv3 enabled.
The following table can be used to reference feature support.
Microsoft also recommends that you design your application to prepare for the po
- [Tutorial: Build a highly available application with Blob storage](../blobs/storage-create-geo-redundant-storage.md)
- [Azure Storage redundancy](storage-redundancy.md)
- [How customer-managed planned failover (preview) works](storage-failover-customer-managed-planned.md)
-- [How customer-managed (unplanned) failover works](storage-failover-customer-managed-unplanned.md)
+- [How customer-managed (unplanned) failover works](storage-failover-customer-managed-unplanned.md)
storage Storage Failover Customer Managed Planned https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-failover-customer-managed-planned.md
The failback typically takes about an hour.
After the failback is complete, the storage account is restored to its original redundancy configuration. Users can resume writing data to the storage account in the original primary region (1) while replication to the original secondary (2) continues as before the failover: ## [GZRS/RA-GZRS](#tab/gzrs-ra-gzrs)
storage Storage Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-redundancy.md
The following diagram shows how your data is replicated with GRS or RA-GRS:
### Geo-zone-redundant storage
-Geo-zone-redundant storage (GZRS) combines the high availability provided by redundancy across availability zones with protection from regional outages provided by geo-replication. Data in a GZRS storage account is copied across three [Azure availability zones](../../availability-zones/az-overview.md) in the primary region. In addition, it also replicates to a secondary geographic region for protection from regional disasters. Microsoft recommends using GZRS for applications requiring maximum consistency, durability, and availability, excellent performance, and resilience for disaster recovery.
+Geo-zone-redundant storage (GZRS) combines the high availability provided by redundancy across availability zones with protection from regional outages provided by geo-replication. Data in a GZRS account is copied across three [Azure availability zones](../../availability-zones/az-overview.md) in the primary region. In addition, it also replicates to a secondary geographic region for protection from regional disasters. Microsoft recommends using GZRS for applications requiring maximum consistency, durability, and availability, excellent performance, and resilience for disaster recovery.
-With a GZRS storage account, you can continue to read and write data if an availability zone becomes unavailable or is unrecoverable. Additionally, your data also remains durable during a complete regional outage or a disaster in which the primary region isn't recoverable. GZRS is designed to provide at least 99.99999999999999% (16 9s) durability of objects over a given year.
+With a GZRS account, you can continue to read and write data if an availability zone becomes unavailable or is unrecoverable. Additionally, your data also remains durable during a complete regional outage or a disaster in which the primary region isn't recoverable. GZRS is designed to provide at least 99.99999999999999% (16 9s) durability of objects over a given year.
The following diagram shows how your data is replicated with GZRS or RA-GZRS:
Azure Storage regularly verifies the integrity of data stored using cyclic redun
- [Azure Files](https://azure.microsoft.com/pricing/details/storage/files/) - [Table Storage](https://azure.microsoft.com/pricing/details/storage/tables/) - [Queue Storage](https://azure.microsoft.com/pricing/details/storage/queues/)
- - [Azure Disks](https://azure.microsoft.com/pricing/details/managed-disks/)
+ - [Azure Disks](https://azure.microsoft.com/pricing/details/managed-disks/)
storage Transport Layer Security Configure Migrate To TLS2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/transport-layer-security-configure-migrate-to-TLS2.md
ms.devlang: csharp
# Migrate to TLS 1.2 for Azure Blob Storage
-On **Nov 1, 2024**, Azure Blob Storage will stop supporting versions 1.0 and 1.1 of Transport Layer Security (TLS). TLS 1.2 will become the new minimum TLS version. This change impacts all existing and new blob storage accounts, using TLS 1.0 and 1.1 in all clouds. Storage accounts already using TLS 1.2 aren't impacted by this change.
+On **Nov 1, 2025**, Azure Blob Storage will stop supporting versions 1.0 and 1.1 of Transport Layer Security (TLS). TLS 1.2 will become the new minimum TLS version. This change impacts all existing and new blob storage accounts that use TLS 1.0 and 1.1, in all clouds. Storage accounts already using TLS 1.2 aren't impacted by this change.
To avoid disruptions to applications that connect to your storage account, you must ensure that your account requires clients to send and receive data by using TLS **1.2**, and remove dependencies on TLS version 1.0 and 1.1.
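On the client side, most modern platforms negotiate TLS 1.2 or later by default; the main exception is older .NET Framework applications that pin the protocol. As a hedged example, a legacy .NET Framework client can be forced onto TLS 1.2 before any storage clients are created:

```csharp
using System.Net;

// For older .NET Framework apps only: force TLS 1.2 for all outgoing HTTPS calls,
// including requests to Azure Blob Storage. Modern .NET versions and current OS
// builds negotiate TLS 1.2+ automatically, so this line isn't needed there.
ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls12;
```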
storage Smb Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/smb-performance.md
description: Learn about ways to improve performance and throughput for premium
Previously updated : 09/09/2024 Last updated : 10/03/2024
The load was generated against a single 128 GiB file. With SMB Multichannel enab
## Metadata caching for premium SMB file shares
-Metadata caching is an enhancement for SMB Azure premium file shares aimed to reduce metadata latency, increase available IOPS, and boost network throughput. This preview feature improves the following metadata APIs and can be used from both Windows and Linux clients:
+Metadata caching is an enhancement for premium SMB Azure file shares that aims to:
+
+- Reduce metadata latency
+- Raise metadata scale limits
+- Improve latency consistency, increase available IOPS, and boost network throughput
+
+This preview feature improves the following metadata APIs and can be used from both Windows and Linux clients:
- Create
- Open
- Close
- Delete
-To onboard, [sign up for the public preview](https://aka.ms/PremiumFilesMetadataCachingPreview) and we'll provide you with additional details. Currently this preview feature is only available for premium SMB file shares (file shares in the FileStorage storage account kind). There are no additional costs associated with using this feature.
+Currently this preview feature is only available for premium SMB file shares (file shares in the FileStorage storage account kind). There are no additional costs associated with using this feature.
+
+### Register for the feature
+To get started, register for the feature using Azure portal or PowerShell.
+
+# [Azure portal](#tab/portal)
+
+1. Sign in to the [Azure portal](https://portal.azure.com?azure-portal=true).
+2. Search for and select **Preview features**.
+3. Select the **Type** filter and select **Microsoft.Storage**.
+4. Select **Azure Premium Files Metadata Cache Preview** and then select **Register**.
+
+# [Azure PowerShell](#tab/powershell)
+
+To register your subscription using Azure PowerShell, run the following commands. Replace `<your-subscription-id>` and `<your-tenant-id>` with your own values.
+
+```azurepowershell-interactive
+Connect-AzAccount -SubscriptionId <your-subscription-id> -TenantId <your-tenant-id>
+Register-AzProviderFeature -FeatureName AzurePremiumFilesMetadataCacheFeature -ProviderNamespace Microsoft.Storage
+```
+ ### Regional availability
-Currently the metadata caching preview is only available in the following Azure regions.
+Currently the metadata caching preview is only available in the following Azure regions. To request additional region support, [sign up for the public preview](https://aka.ms/PremiumFilesMetadataCachingPreview).
-- Australia East
-- Brazil South East
-- France South
-- Germany West Central
+- Australia Central
+- Jio India West
+- India South
+- Mexico Central
+- Norway East
+- Poland Central
+- Spain Central
+- Sweden Central
- Switzerland North
-- UAE Central
- UAE North
-- US West Central
+- US West 3
+
+> [!TIP]
+> As we extend region support for the Metadata Cache feature, premium file storage accounts in those regions will be automatically onboarded for all subscriptions registered with the Metadata Caching feature.
### Performance improvements with metadata caching
stream-analytics Stream Analytics With Azure Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-with-azure-functions.md
log.LogInformation($"Object put in database. Key is {key} and value is {data[i].
If a failure occurs while sending events to Azure Functions, Stream Analytics retries most operations. All http exceptions are retried until success except for http error 413 (entity too large). An entity too large error is treated as a data error that is subjected to the [retry or drop policy](stream-analytics-output-error-policy.md). > [!NOTE]
-> The timeout for HTTP requests from Stream Analytics to Azure Functions is set to 100 seconds. If your Azure Functions app takes more than 100 seconds to process a batch, Stream Analytics errors out and will rety for the batch.
+> The timeout for HTTP requests from Stream Analytics to Azure Functions is set to 100 seconds. If your Azure Functions app takes more than 100 seconds to process a batch, Stream Analytics errors out and will retry for the batch.
Retrying for timeouts might result in duplicate events written to the output sink. When Stream Analytics retries for a failed batch, it retries for all the events in the batch. For example, consider a batch of 20 events that are sent to Azure Functions from Stream Analytics. Assume that Azure Functions takes 100 seconds to process the first 10 events in that batch. After 100 seconds, Stream Analytics suspends the request since it hasn't received a positive response from Azure Functions, and another request is sent for the same batch. The first 10 events in the batch are processed again by Azure Functions, which causes a duplicate.
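One common mitigation, sketched below, is to make the function idempotent so that a retried batch overwrites earlier writes instead of duplicating them. This isn't the tutorial's code: it assumes each event carries a unique `EventId` and a `Payload` field, that `data` is the deserialized batch, and that the output is an Azure SQL table reachable through a `SqlConnectionString` app setting.

```csharp
using System;
using Microsoft.Data.SqlClient;

// Upsert keyed on EventId: re-processing the same event updates the existing row
// instead of inserting a duplicate.
const string upsert = @"
MERGE dbo.Events AS target
USING (SELECT @EventId AS EventId, @Payload AS Payload) AS source
    ON target.EventId = source.EventId
WHEN MATCHED THEN
    UPDATE SET Payload = source.Payload
WHEN NOT MATCHED THEN
    INSERT (EventId, Payload) VALUES (source.EventId, source.Payload);";

using var connection = new SqlConnection(Environment.GetEnvironmentVariable("SqlConnectionString"));
connection.Open();

foreach (var item in data) // data: the batch deserialized from the Stream Analytics request body
{
    using var command = new SqlCommand(upsert, connection);
    command.Parameters.AddWithValue("@EventId", item.EventId);
    command.Parameters.AddWithValue("@Payload", item.Payload);
    command.ExecuteNonQuery();
}
```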
In this tutorial, you've created a simple Stream Analytics job that runs an Azur
> [!div class="nextstepaction"] > [Update or merge records in Azure SQL Database with Azure Functions](sql-database-upsert.md)
-> [Run JavaScript user-defined functions within Stream Analytics jobs](stream-analytics-javascript-user-defined-functions.md)
+> [Run JavaScript user-defined functions within Stream Analytics jobs](stream-analytics-javascript-user-defined-functions.md)
virtual-desktop Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new.md
Previously updated : 09/19/2024 Last updated : 10/03/2024 # What's new in Azure Virtual Desktop?
Make sure to check back here often to keep up with new updates.
> [!TIP] > See [What's new in documentation](whats-new-documentation.md), where we highlight new and updated articles for Azure Virtual Desktop.
+## October 2024
+
+Here's what changed in October 2024:
+
+### Windows 11, version 24H2 images are now available in Azure Marketplace
+
+Windows 11 Enterprise and Windows 11 Enterprise multi-session are now available in the Azure Marketplace. The following updated images are available: Windows 11 + Windows 365 apps, and Windows 11. The portal will be updated later this month to allow convenient selection of 24H2 images.
+
+For information about configuring languages other than English, see [Install language packs on Windows 11 Enterprise VMs in Azure Virtual Desktop](windows-11-language-packs.md).
+ ## September 2024 Here's what changed in September 2024:
+### Relayed RDP Shortpath (TURN) for public networks is now available
+
+This enhancement allows UDP connections via relays using the Traversal Using Relays around NAT (TURN) protocol, extending the functionality of RDP Shortpath on public networks for everyone.
+
+For detailed configuration guidance, including prerequisites and default configurations, see [Configure RDP Shortpath for Azure Virtual Desktop](configure-rdp-shortpath.md).
### Windows App is now available

Windows App is now generally available on Windows, macOS, iOS, iPadOS, and web browsers, and in preview on Android. You can use it to connect to Azure Virtual Desktop, Windows 365, Microsoft Dev Box, Remote Desktop Services, and remote PCs, securely connecting you to Windows devices and apps. To learn more about what each platform supports, see [Compare Windows App features across platforms and devices](/windows-app/compare-platforms-features?toc=admins%2Ftoc.json&pivots=azure-virtual-desktop). Windows App is now available through the appropriate store for each client platform, ensuring a smooth update process.
virtual-network-manager Concept Ip Address Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/concept-ip-address-management.md
+
+ Title: What is IP address management (IPAM) in Azure Virtual Network Manager?
+description: Learn about IP address management (IPAM) in Azure Virtual Network Manager and how it can help you manage IP addresses in your virtual networks.
++++ Last updated : 10/2/2024+
+#customer intent: As a network administrator, I want to learn about IP address management (IPAM) in Azure Virtual Network Manager so that I can manage IP addresses in my virtual networks.
++
+# What is IP address management (IPAM) in Azure Virtual Network Manager?
++
+In this article, you learn about the IP address management (IPAM) feature in Azure Virtual Network Manager and how it can help you manage IP addresses in your virtual networks. With Azure Virtual Network Manager's IP Address Management (IPAM), you can create pools for IP address planning, automatically assign nonoverlapping classless inter-domain routing (CIDR) addresses to Azure resources, and prevent address space conflicts across on-premises and multicloud environments.
+
+## What is IP address management (IPAM)?
+
+In Azure Virtual Network Manager, IP address management (IPAM) helps you centrally manage IP addresses in your virtual networks using IP address pools. The following are some key features of IPAM in Azure Virtual Network Manager:
+
+- Create pools for IP address planning.
+
+- Automatically assign nonoverlapped CIDRs to Azure resources.
+
+- Reserve IPs for specific needs.
+
+- Prevent Azure address space from overlapping on-premises and cloud environments.
+
+- Monitor IP/CIDR usages and allocations in a pool.
+
+- Support for IPv4 and IPv6 address pools.
+
+## How does IPAM work in Azure Virtual Network Manager?
+
+The IPAM feature in Azure Virtual Network Manager works through the following key components:
+- Managing IP Address Pools
+- Allocating IP addresses to Azure resources
+- Delegating IP address management permissions
+- Simplifying resource creation
+
+### Manage IP address pools
+
+IPAM allows network administrators to plan and organize IP address usage by creating pools with address spaces and respective sizes. These pools act as containers for groups of CIDRs, enabling logical grouping for specific networking purposes. You can create a structured hierarchy of pools, dividing a larger pool into smaller, more manageable pools, aiding in more granular control and organization of your network's IP address space.
+
+There are two types of pools in IPAM:
+- Root pool: The first pool created in your instance is the root pool. This represents your entire IP address range.
+- Child pool: A child pool is a subset of the root pool or another child pool. You can create multiple child pools within a root pool or another child pool. You can have up to seven layers of pools.
+
+### Allocating IP addresses to Azure resources
+
+When it comes to allocation, you can assign Azure resources with CIDRs, such as virtual networks, to a specific pool. This helps in identifying which CIDRs are currently in use. There's also the option to allocate static CIDRs to a pool, useful for occupying CIDRs that are either not currently in use within Azure or are part of Azure resources not yet supported by the IPAM service. Allocated CIDRs are released back to the pool if the associated resource is removed or deleted, ensuring efficient utilization and management of the IP space.
+
+### Delegating permissions for IP address management
+
+With IPAM, you can delegate permission to other users to utilize the IPAM pools, ensuring controlled access and management while democratizing pool allocation. These permissions allow users to see the pools they have access to, aiding in choosing the right pool for their needs.
+
+Delegating permissions also allows others to view usage statistics and lists of resources associated with the pool. Within your network manager, complete usage statistics are available including:
+ - The total number of IPs in the pool.
+ - The percentage of allocated pool space.
+
+Additionally, it shows details for pools and resources associated with pools, giving a complete overview of the IP usages and aiding in better resource management and planning.
+
+### Simplifying resource creation
+
+When creating CIDR-supporting resources like virtual networks, CIDRs are automatically allocated from the selected pool, simplifying the resource creation process. The system ensures that the automatically allocated CIDRs don't overlap within the pool, maintaining network integrity and preventing conflicts.
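As a purely illustrative sketch (this is plain address arithmetic, not the IPAM service API), carving nonoverlapping child CIDRs out of a pool amounts to stepping through the pool's address range in fixed-size blocks:

```csharp
using System;
using System.Linq;

// Illustration only: sequentially carve nonoverlapping /24 child CIDRs from a 10.0.0.0/16 pool.
static uint ToUInt32(string ip) =>
    ip.Split('.').Aggregate(0u, (acc, octet) => (acc << 8) | uint.Parse(octet));

static string ToIp(uint value) =>
    string.Join(".", value >> 24, (value >> 16) & 255, (value >> 8) & 255, value & 255);

uint poolBase = ToUInt32("10.0.0.0");
const int childPrefix = 24;
uint blockSize = 1u << (32 - childPrefix);   // 256 addresses per /24

for (uint i = 0; i < 4; i++)
{
    uint childBase = poolBase + i * blockSize;
    Console.WriteLine($"{ToIp(childBase)}/{childPrefix}");
}
// Output: 10.0.0.0/24, 10.0.1.0/24, 10.0.2.0/24, 10.0.3.0/24
```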
+
+## Permission requirements for IPAM in Azure Virtual Network Manager
+
+When using IP address management, the **IPAM Pool User** role alone is sufficient for delegation. During the public preview, you also need to grant **Network Manager Read** access to ensure full discoverability of IP address pools and virtual networks across the Network Manager's scope. Without this role, users with only the **IPAM Pool User** role won't be able to see available pools and virtual networks.
+
+Learn more about [Azure role-based access control (Azure RBAC)](../role-based-access-control/overview.md).
+
+## Known issues
+
+- When virtual networks are associated with an IPAM pool, peering sync may show as out of sync, even though peering is functioning correctly.
+- When a VNet is moved to a different subscription, the references in IPAM aren't updated, leading to an inconsistent management status.
+- When multiple requests are made for the same VNet, duplicate allocation entries can result.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Learn how to manage IP addresses in Azure Virtual Network Manager](./how-to-manage-ip-addresses-network-manager.md)
virtual-network-manager How To Manage Ip Addresses Network Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/how-to-manage-ip-addresses-network-manager.md
+
+ Title: Manage IP addresses with Azure Virtual Network Manager
+description: Learn how to manage IP addresses with Azure Virtual Network Manager by creating and assigning IP address pools to your virtual networks.
++++ Last updated : 10/2/2024+
+#customer intent: As a network administrator, I want to learn how to manage IP addresses with Azure Virtual Network Manager so that I can create and assign IP address pools to my virtual networks.
++
+# Manage IP addresses with Azure Virtual Network Manager
++
+Azure Virtual Network Manager allows you to manage IP addresses by creating and assigning IP address pools to your virtual networks. This article shows you how to create and assign IP address pools to your virtual networks with IP address management (IPAM) in Azure Virtual Network Manager.
+
+## Prerequisites
+
+- An Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
+- An existing network manager instance. If you don't have a network manager instance, see [Create a network manager instance](create-virtual-network-manager-portal.md).
+- A virtual network that you want to associate with an IP address pool.
+- To manage IP addresses in your network manager, you need the **Network Contributor** role assigned with [role-based access control](../role-based-access-control/quickstart-assign-role-user-portal.md). Classic Admin/legacy authorization isn't supported.
+
+## Create an IP address pool
+
+In this step, you create an IP address pool for your virtual network.
+
+1. In the Azure portal, search for and select **Network managers**.
+2. Select your network manager instance.
+3. In the left menu, select **IP address pools (Preview)** under **IP address management (Preview)**.
+4. Select **+ Create** or **Create** to create a new IP address pool.
+5. In the **Create an IP address pool** window, enter the following information:
+
+ | Field | Description |
+ | | |
+ | **Name** | Enter a name for the IP address pool. |
+ | **Description** | Enter a description for the IP address pool. |
+ | **Parent pool** | For creating a **root pool**, leave default of **None**. For creating a **child pool**, select the parent pool. |
+
+ :::image type="content" source="media/how-to-manage-ip-addresses/create-root-pool.png" alt-text="Screenshot of Create an ip address pool settings for a root pool." :::
+
+6. Select **Next** or the **IP addresses** tab.
+7. Under **Starting address**, enter the IP address range for the pool.
+
+ :::image type="content" source="media/how-to-manage-ip-addresses/set-pool-ip-range-thumb.png" alt-text="Screenshot of IP address range settings for a root pool." lightbox="media/how-to-manage-ip-addresses/set-pool-ip-range.png":::
+
+8. Select **Review + create** and then **Create** to create the IP address pool.
+9. Repeat these steps for another root or child pool.
+
+## Associate a virtual network with an IP address pool
+
+In this step, you associate an existing virtual network with an IP address pool from the **Allocations** settings page in the IP address pool.
+
+1. Browse to your network manager instance and select your IP address pool.
+2. From the left menu, select **Allocations** under **Settings** or select **Allocate**.
+3. In the **Allocations** window, select **+ Create** > **Associate resources**. The **Associate resources** option allocates a CIDR to an existing virtual network.
+
+ :::image type="content" source="media/how-to-manage-ip-addresses/pool-allocation-settings-associate-resource-thumb.png" alt-text="Screenshot of allocations page for associating resources." lightbox="media/how-to-manage-ip-addresses/pool-allocation-settings-associate-resource.png":::
+
+4. In the **Select resources** window, select the virtual networks you want to associate with the IP address pool and then choose **Select**.
+
+ :::image type="content" source="media/how-to-manage-ip-addresses/associate-virtual-network-resources-thumb.png" alt-text="Screenshot of associate resources page with virtual networks selected." lightbox="media/how-to-manage-ip-addresses/associate-virtual-network-resources.png":::
+
+5. Verify the virtual network is listed.
+
+ :::image type="content" source="media/how-to-manage-ip-addresses/ip-address-pool-allocation-statistics.png" alt-text="Screenshot of IP address pool allocations and statistics.":::
+
+> [!Note]
+> In addition to associating resources, you can allocate address spaces to a child pool or a static CIDR block from a pool's Allocations page.
+
+## Create static CIDR blocks for a pool
+
+In this step, you create a static CIDR block for a pool. This is helpful for reserving address space that is used outside of Azure or by Azure resources that IPAM doesn't yet support. For example, you can allocate a CIDR in the pool to the address space of your on-premises environment. Likewise, you can use it for address space that is used by a Virtual WAN hub or Azure VMware Private Cloud.
+
+1. Browse to your IP address pool.
+2. Select **Allocate** or **Allocations** under **Settings**.
+3. In the **Allocations** window, select **+ Create** > **Allocate static CIDRs**.
+4. In the **Allocate static CIDRs from pool** window, enter the following information:
+
+ | Field | Description |
+ | | |
+ | **Name** | Enter a name for the static CIDR block.|
+ | **Description** | Enter a description for the static CIDR block. |
+ | **CIDR** | Enter the CIDR block. |
+
+ :::image type="content" source="media/how-to-manage-ip-addresses/create-static-cidr-reservation.png" alt-text="Screenshot of Allocate static CIDR from pool window with address range for CIDR reservation.":::
+
+5. Select **Allocate**.
+
+## Review allocation usage
+
+In this step, you review the allocation usage of the IP address pool. This helps you understand how the CIDRs are being used in the pool, along with the percentage of the pool that is allocated and the compliance status of the pool.
+
+1. Browse to your IP address pool.
+2. Select **Allocations** under **Settings**.
+3. In the **Allocations** window, you can review all of the statistics for the address pool including:
+
+ | Field | Description |
+ | | |
+ | **Pool address space** | The total address space that is allocated to the pool. |
+ | **Allocated address space** | The address space that is allocated to the pool. |
+ | **Available address space** | The address space that is available for allocation. |
+ | **Available address count** | The number of addresses that are available for allocation. |
+ | **IP allocation** | The set of IP addresses that are allocated from the pool for potential use. |
+
+ :::image type="content" source="media/how-to-manage-ip-addresses/review-ip-address-pool-allocations.png" alt-text="Screenshot of an IP address pool's allocations and statistics for the pool.":::
+
+4. For each allocation, you can review the following:
+
+ | Field | Description |
+ | | |
+ | **Name** | The name of the allocation. |
+ | **Address space** | The address space that is allocated to the pool. |
+ | **Address count** | The number of addresses that are allocated to the pool. |
+ | **IP allocation** | The set of IP addresses that are allocated from the pool for potential use. |
+ | **Status** | The status of the allocation to the pool. |
+
+ :::image type="content" source="media/how-to-manage-ip-addresses/review-ip-address-pool-allocations-by-resource.png" alt-text="Screenshot of ip address pool allocations highlighting individual resource information.":::
+
+## Delegating permissions for IP address management
+
+In this step, you delegate permissions to other users to manage IP address pools in your network manager using [Azure role-based access control (RBAC)](../role-based-access-control/check-access.md). This allows you to control access to the IP address pools and ensure that only authorized users can manage the pools.
+
+1. Browse to your IP address pool.
+2. In the left menu, select **Access control (IAM)**.
+3. In the **Access control (IAM)** window, select **+ Add** > **Add role assignment**.
+4. Under **Role**, select **IPAM Pool User** through the search bar under the **Job function roles** tab, and then select **Next**.
+5. On the **Members** tab, select how you wish to assign access to the role. You can assign access to a user, group, or service principal, or you can use a managed identity.
+
+ :::image type="content" source="media/how-to-manage-ip-addresses/delegate-ip-address-pool-permissions.png" alt-text="Screenshot of the Add role assignment window with IPAM Pool User selected.":::
+
+6. Choose **+ Select members** and then **Select** the user, group, service principal, or managed identity that you want to assign the role to.
+7. Select **Review + assign** and then **Assign** to delegate permissions to the user.
++
+## Create a virtual network with a nonoverlapping CIDR range
+
+In this step, you create a virtual network with a nonoverlapping CIDR range by allowing IPAM to automatically provide a nonoverlapping CIDR.
+
+1. In the Azure portal, search for and select **Virtual networks**.
+2. Select **+ Create**.
+3. On the **Basics** tab, enter the following information:
+
+ | Field | Description |
+ | | |
+ | **Subscription** | Select the subscription managed by a Network Manager management scope. |
+ | **Resource group** | Select the resource group for the virtual network. |
+ | **Name** | Enter a name for the virtual network. |
+ | **Region** | Select the region for the virtual network. IP address pools must be in the same region as your virtual network in order to be associated.|
+
+4. Select the **IP addresses** tab or **Next** > **Next**.
+5. On the **IP addresses** tab, select **Allocate using IP address pools** checkbox.
+
+ :::image type="content" source="media/how-to-manage-ip-addresses/create-virtual-network-ip-address-pool.png" alt-text="Screenshot of create virtual network window with Allocate using IP address setting.":::
+
+6. In the **Select an IP address pool** window, select the IP address pool that you want to associate with the virtual network and then choose **Save**. You can select at most one IPv4 pool and one IPv6 pool for association to a single virtual network.
+
+ :::image type="content" source="media/how-to-manage-ip-addresses/virtual-network-create-select-ip-address-pool-thumb.png" alt-text="Screenshot of Select an IP address pool with IP address pool selected." lightbox="media/how-to-manage-ip-addresses/virtual-network-create-select-ip-address-pool.png":::
+
+7. From the dropdown menu next to your IP address pool, select the size for the virtual network.
+
+ :::image type="content" source="media/how-to-manage-ip-addresses/virtual-network-create-select-address-space-size.png" alt-text="Screenshot of Create virtual network window with IP address size selection.":::
+
+8. Optionally, create subnets that refer to the selected pool.
+9. Select **Review + create** and then **Create** to create the virtual network.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [What is IP address management in Azure Virtual Network Manager](./concept-ip-address-management.md)
+
virtual-network Setup Dpdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/setup-dpdk.md
Previously updated : 04/24/2023 Last updated : 10/02/2024 # Set up DPDK in a Linux virtual machine
-> [!CAUTION]
-> This article references CentOS, a Linux distribution that is End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](/azure/virtual-machines/workloads/centos/centos-end-of-life).
Data Plane Development Kit (DPDK) on Azure offers a faster user-space packet processing framework for performance-intensive applications. This framework bypasses the virtual machine's kernel network stack. In typical packet processing that uses the kernel network stack, the process is interrupt-driven. When the network interface receives incoming packets, there's a kernel interrupt to process the packet and a context switch from the kernel space to the user space. DPDK eliminates context switching and the interrupt-driven method in favor of a user-space implementation that uses poll mode drivers for fast packet processing.
The following distributions from the Azure Marketplace are supported:
| Ubuntu 18.04 | 4.15.0-1014-azure+ | | SLES 15 SP1 | 4.12.14-8.19-azure+ | | RHEL 7.5 | 3.10.0-862.11.6.el7.x86_64+ |
-| CentOS 7.5 | 3.10.0-862.11.6.el7.x86_64+ |
| Debian 10 | 4.19.0-1-cloud+ | The noted versions are the minimum requirements. Newer versions are supported too.
DPDK installation instructions for MANA VMs are available here: [Microsoft Azure
### Install build dependencies
-# [RHEL, CentOS](#tab/redhat)
+# [RHEL](#tab/redhat)
-#### RHEL7.5/CentOS 7.5
+#### RHEL7.5
```bash yum -y groupinstall "Infiniband Support"
When you're running the previous commands on a virtual machine, change *IP_SRC_A
## Install DPDK via system package (not recommended)
-# [RHEL, CentOS](#tab/redhat)
+# [RHEL](#tab/redhat)
```bash sudo yum install -y dpdk