Updates from: 10/26/2024 01:05:12
Service Microsoft Docs article Related commit history on GitHub Change details
api-management Api Management Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-subscriptions.md
Previously updated : 08/02/2023 Last updated : 09/03/2024
After the subscription requirement is disabled, the selected API or APIs can be
When API Management receives an API request from a client with a subscription key, it handles the request according to these rules:
-1. Check if it's a valid key associated with an active subscription, either:
+1. Check first if it's a valid key associated with an active subscription, either:
* A subscription scoped to the API * A subscription scoped to a product that's assigned to the API
When API Management receives an API request from a client with a subscription ke
If a valid key for an active subscription at an appropriate scope is provided, access is allowed. Policies are applied depending on the configuration of the policy definition at that scope.
+1. If the key isn't valid but a product exists that includes the API but doesn't require a subscription (an *open* product), ignore the key and handle as an API request without a subscription key (see below).
+ 1. Otherwise, access is denied (401 Access denied error). ### API request without a subscription key When API Management receives an API request from a client without a subscription key, it handles the request according to these rules:
-1. Check first for the existence of a product that includes the API but doesn't require a subscription (an *open* product). If the open product exists, handle the request in the context of the APIs, policies, and access rules configured for the product. An API can be associated with at most one open product.
+1. Check first for the existence of a product that includes the API but doesn't require a subscription (an *open* product). If the open product exists, handle the request in the context of the APIs, policies, and access rules configured for the open product. An API can be associated with at most one open product.
1. If an open product including the API isn't found, check whether the API requires a subscription. If a subscription isn't required, handle the request in the context of that API and operation. 1. If no configured product or API is found, then access is denied (401 Access denied error).
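For illustration, a client typically passes the key in API Management's default `Ocp-Apim-Subscription-Key` header (or the `subscription-key` query parameter). The gateway hostname, API path, and key in this sketch are placeholders:

```bash
# Call an API with a subscription key in the default header (placeholder hostname, path, and key)
curl -i "https://contoso.azure-api.net/echo/resource" \
  -H "Ocp-Apim-Subscription-Key: <subscription-key>"

# The same call without a key succeeds only if an open product includes the API
# or the API itself doesn't require a subscription; otherwise the gateway returns 401
curl -i "https://contoso.azure-api.net/echo/resource"
```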
The following table summarizes how the gateway handles API requests with or with
|All products assigned to API require subscription |API requires subscription |API call with subscription key |API call without subscription key | Typical scenarios |
|---|---|---|---|---|
|✔️ | ✔️ | Access allowed:<br/><br/>• Product-scoped key<br/>• API-scoped key<br/>• All APIs-scoped key<br/>• Service-scoped key<br/><br/>Access denied:<br/><br/>• Other key not scoped to applicable product or API | Access denied | Protected API access using product-scoped or API-scoped subscription |
-|✔️ | ❌ | Access allowed:<br/><br/>• Product-scoped key<br/>• API-scoped key<br/>• All APIs-scoped key<br/>• Service-scoped key<br/><br/>Access denied:<br/><br/>• Other key not scoped to applicable product or API | Access allowed (API context) | • Protected API access with product-scoped subscription<br/><br/>• Anonymous access to API. If anonymous access isn’t intended, configure API-level policies to enforce authentication and authorization. |
+|✔️ | ❌ | Access allowed:<br/><br/>• Product-scoped key<br/>• API-scoped key<br/>• All APIs-scoped key<br/>• Service-scoped key<br/>• Other key not scoped to applicable product or API | Access allowed (API context) | • Protected API access with product-scoped subscription<br/><br/>• Anonymous access to API. If anonymous access isn’t intended, configure API-level policies to enforce authentication and authorization. |
|❌<sup>1</sup> | ✔️ | Access allowed:<br/><br/>• Product-scoped key<br/>• API-scoped key<br/>• All APIs-scoped key<br/>• Service-scoped key<br/><br/>Access denied:<br/><br/>• Other key not scoped to applicable product or API | Access allowed (open product context) | • Protected API access with API-scoped subscription<br/><br/>• Anonymous access to API. If anonymous access isn’t intended, configure with product policies to enforce authentication and authorization |
-|❌<sup>1</sup> | ❌ | Access allowed:<br/><br/>• Product-scoped key<br/>• API-scoped key<br/>• All APIs-scoped key<br/>• Service-scoped key<br/><br/>Access denied:<br/><br/>• Other key not scoped to applicable product or API | Access allowed (open product context) | Anonymous access to API. If anonymous access isn’t intended, configure with product policies to enforce authentication and authorization |
+|❌<sup>1</sup> | ❌ | Access allowed:<br/><br/>• Product-scoped key<br/>• API-scoped key<br/>• All APIs-scoped key<br/>• Service-scoped key<br/>• Other key not scoped to applicable product or API | Access allowed (open product context) | Anonymous access to API. If anonymous access isn’t intended, configure with product policies to enforce authentication and authorization |
<sup>1</sup> An open product exists that's associated with the API. ### Considerations - API access in a product context is the same, whether the product is published or not. Unpublishing the product hides it from the developer portal, but it doesn't invalidate new or existing subscription keys.-- Even if a product or API doesn't require a subscription, a valid key from an active subscription that enables access to the product or API can still be used.
+- If an API doesn't require subscription authentication, any API request that includes a subscription key is treated the same as a request without a subscription key. The subscription key is ignored.
- API access "context" means the policies and access controls that are applied at a particular scope (for example, API or product). ## Next steps
app-service Configure Error Pages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-error-pages.md
+
+ Title: Configure error pages on App Service
+description: Learn how to configure a custom error page on App Service
+++ Last updated : 10/14/2024+++
+# Configure error pages on App Service (preview)
+
+This article explains how to configure custom error pages on your web app. With App Service, you can configure an error page for specific errors that's presented to users instead of the default error page.
+
+### Prerequisites
+In this tutorial, we're adding a custom 403 error page to our web app hosted on App Service and testing it with an IP restriction. To do so, you need the following:
+- a web app hosted on App Service with a Premium SKU
+- an HTML file under 10 KB in size
+
+## Upload an error page
+For this example, we're uploading and testing a 403 error page to present to the user. Name your HTML file to match the error code (for example, `403.html`). Once you have your HTML file prepared, you can upload it to your web app. In the configuration blade, you should see an **Error pages (preview)** tab. Click on this tab to view the error page options. If the options are greyed out, you need to upgrade to at least a Premium SKU to use this feature.
+
+Select the error code that you'd like to upload an error page for and click **Edit**. On the next screen, click the folder icon to select your HTML file. The file must be in HTML format and within the 10 KB size limit. Find your .html file and click the **Upload** button at the bottom of the screen. Notice the Status in the table updates from Not Configured to Configured. Then click **Save** to complete the upload.
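Before uploading, you can confirm the file stays under the 10 KB limit; for example, assuming a local file named `403.html`:

```bash
# Print the size of the error page in bytes; it must stay under the 10 KB limit
wc -c < 403.html
```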
+
+## Confirm error page
+Once the custom error page is uploaded and saved, we can trigger and view the page. In this example, we can trigger the 403 error by using an IP restriction.
+
+To set an IP restriction, go to the **Networking** blade and click the **Enabled with access restrictions** link under **Inbound traffic configuration**.
+
+Under the **Site access and rules** section, select the **+Add** button to create an IP restriction.
+
+In the form that follows, change the Action to **Deny** and fill out the **Priority** and **IP Address Block**. In this example, we use the **Inbound address** found on the Networking blade and append a /0 suffix (for example, `12.123.12.123/0`). This blocks all public access to the site.
+
+Once the Add rule form is filled out, select the **Add rule** button. Then click **Save**.
+
+Once saved, you need to restart the site for the changes to take effect. Go to your overview page and select **browse**. You should now see your custom error page load.
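If you prefer the command line, the following Azure CLI sketch adds a roughly equivalent deny rule and restarts the app; the resource group, app name, rule name, and IP address are placeholders.

```bash
# Deny all public traffic by adding a Deny rule for the inbound address with a /0 suffix
az webapp config access-restriction add \
  --resource-group <resource-group> \
  --name <app-name> \
  --rule-name deny-public \
  --action Deny \
  --ip-address 12.123.12.123/0 \
  --priority 100

# Restart the app so the restriction takes effect, then browse to the site
az webapp restart --resource-group <resource-group> --name <app-name>
```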
+
+## Error codes
+App Service currently supports customizing error pages for the following three error codes:
+
+| Error code | Description |
+| - | - |
+| 403 | Access restrictions |
+| 502 | Gateway errors |
+| 503 | Service unavailable |
+
+## FAQ
+1. I've uploaded my error page. Why doesn't it show when the error is triggered?
+
+Currently, error pages are only triggered when the error is coming from the front end. Errors that get triggered at the app level should still be handled through the app.
+
+2. Why is the error page feature greyed out?
+
+Error pages are currently a Premium feature. You need to use at least a Premium SKU to enable the feature.
application-gateway Application Gateway Backend Health Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-backend-health-troubleshooting.md
This behavior can occur for one or more of the following reasons:
Next hop: Azure Firewall private IP address > [!NOTE]
-> If the application gateway is not able to access the CRL endpoints, it marks the backend health status as "unknown" and cause fast update failures. To prevent these issues, check that your application gateway subnet is able to access `crl.microsoft.com` and `crl3.digicert.com`. This can be done by configuring your Network Security Groups to send traffic to the CRL endpoints.
+> If the application gateway is not able to access the CRL endpoints, it might mark the backend health status as "unknown". To prevent these issues, check that your application gateway subnet is able to access `crl.microsoft.com` and `crl3.digicert.com`. You can do this by configuring your network security groups to allow outbound traffic to the CRL endpoints.
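Because NSG rules filter by IP address or service tag rather than FQDN, one common approach is an outbound rule that allows HTTP from the Application Gateway subnet to the internet. The following Azure CLI sketch uses placeholder resource names and an arbitrary priority.

```bash
# Allow outbound HTTP from the Application Gateway subnet so CRL downloads can succeed
az network nsg rule create \
  --resource-group <resource-group> \
  --nsg-name <appgw-subnet-nsg> \
  --name AllowCrlOutbound \
  --priority 200 \
  --direction Outbound \
  --access Allow \
  --protocol Tcp \
  --destination-address-prefixes Internet \
  --destination-port-ranges 80
```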
## Next steps
application-gateway Application Gateway Secure Flag Session Affinity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-secure-flag-session-affinity.md
+
+ Title: Setting HTTPOnly or Secure flag for Session Affinity cookie
+
+description: Learn how to set HTTPOnly or Secure flag for Session Affinity cookie
++++ Last updated : 10/22/2024+++
+# Setting HTTPOnly or Secure flag for Session Affinity cookie
+In this guide, you learn how to create a rewrite set for your Application Gateway and configure the Secure and HttpOnly flags on the [ApplicationGatewayAffinity cookie](configuration-http-settings.md#cookie-based-affinity).
++
+## Prerequisites
+* You must have an Azure subscription. You can create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+* An existing Application Gateway resource configured with at least one Listener, Rule, Backend Setting and Backend Pool configuration. If you don't have one, you can create one by following the [QuickStart guide](quick-create-portal.md).
+
+## Creating a Rewrite set
+
+1. Sign in to the Azure portal.
+1. Navigate to the required Application Gateway resource.
+1. Select Rewrites in the left pane.
+1. Select Rewrite set.
+1. Under the Name and Association tab
+ 1. Specify a name for this new rewrite set.
+ 1. Select the routing rules for which you wish to rewrite the ApplicationGatewayAffinity cookie's flag.
+ 1. Select Next.
+1. Select "Add rewrite rule"
+ 1. Enter a name for the rewrite rule.
+ 1. Enter a numeric value for Rule Sequence field.
+1. Select "Add condition"
+1. Now open the "If" condition box and use the following details.
+ 1. Type of variable to check - HTTP header
+ 1. Header type - Response header
+ 1. Header name - Common header
+ 1. Common header - Set-Cookie
+ 1. Case-sensitive - No
+ 1. Operator - equal (=)
+ 1. Pattern to match - (.*)
+ 1. To save these details, select **OK**.
+1. Go to the **Then** box to specify action details.
+ 1. Rewrite type - Response header
+ 1. Action type - Set
+ 1. Header name - Common header
+ 1. Common header - Set-Cookie
+ 1. Header value - {http_resp_Set-Cookie_1}; HttpOnly; Secure
+ 1. Select **OK**
+1. Select Update to save the rewrite set configurations.
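If you prefer scripting, the same rewrite set can be sketched with the Azure CLI as shown below. The resource group, gateway name, rewrite set name, and rule name are placeholders, and the parameters are worth verifying against your CLI version.

```bash
# Create the rewrite set on an existing Application Gateway (placeholder names)
az network application-gateway rewrite-rule set create \
  --resource-group <resource-group> \
  --gateway-name <app-gateway-name> \
  --name AffinityCookieRewrites

# Add a rule that appends HttpOnly and Secure to the Set-Cookie response header
az network application-gateway rewrite-rule create \
  --resource-group <resource-group> \
  --gateway-name <app-gateway-name> \
  --rule-set-name AffinityCookieRewrites \
  --name secure-affinity-cookie \
  --sequence 100 \
  --response-headers "Set-Cookie={http_resp_Set-Cookie_1}; HttpOnly; Secure"

# Rewrite only when a Set-Cookie response header is present, capturing its value
az network application-gateway rewrite-rule condition create \
  --resource-group <resource-group> \
  --gateway-name <app-gateway-name> \
  --rule-set-name AffinityCookieRewrites \
  --rule-name secure-affinity-cookie \
  --variable "http_resp_Set-Cookie" \
  --pattern "(.*)" \
  --ignore-case true \
  --negate false
```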
++
+## Next steps
+[Visit other configurations of a Backend Setting](configuration-http-settings.md)
application-gateway Rewrite Http Headers Url https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/rewrite-http-headers-url.md
description: This article provides an overview of rewriting HTTP headers and URL
Previously updated : 09/30/2024 Last updated : 10/22/2024
Application Gateway allows you to rewrite selected content of requests and responses. With this feature, you can translate URLs, query string parameters and modify request and response headers. It also allows you to add conditions to ensure that the URL or the specified headers are rewritten only when certain conditions are met. These conditions are based on the request and response information.
-> [!NOTE]
-> HTTP header and URL rewrite features are only available for the [Application Gateway v2 SKU](application-gateway-autoscaling-zone-redundant.md)
-
-## Rewrite types supported
+The HTTP header and URL rewrite features are only available for the [**Application Gateway v2 SKU**](application-gateway-autoscaling-zone-redundant.md).
### Request and response headers
-HTTP headers allow a client and server to pass additional information with a request or response. By rewriting these headers, you can accomplish important tasks, such as adding security-related header fields like HSTS/ X-XSS-Protection, removing response header fields that might reveal sensitive information, and removing port information from X-Forwarded-For headers.
-
-Application Gateway allows you to add, remove, or update HTTP request and response headers while the request and response packets move between the client and backend pools.
+Application Gateway allows you to add, remove, or update HTTP request and response headers while the request and response packets move between the client and backend pools. HTTP headers allow a client and server to pass additional information with a request or response. By rewriting these headers, you can accomplish important tasks, such as adding security-related header fields like HSTS/ X-XSS-Protection, removing response header fields that might reveal sensitive information, and removing port information from X-Forwarded-For headers.
-To learn how to rewrite request and response headers with Application Gateway using Azure portal, see [here](rewrite-http-headers-portal.md).
+You can rewrite all headers in requests and responses, except for the `Connection`, and `Upgrade` headers. You can also use the application gateway to **create custom headers** and add them to the requests and responses being routed through it. To learn how to rewrite request and response headers with Application Gateway using Azure portal, see [here](rewrite-http-headers-portal.md).
![A diagram showing headers in request and response packets.](./media/rewrite-http-headers-url/header-rewrite-overview.png)
-**Supported headers**
-
-You can rewrite all headers in requests and responses, except for the Connection, and Upgrade headers. You can also use the application gateway to create custom headers and add them to the requests and responses being routed through it.
- ### URL path and query string
To learn how to rewrite URL with Application Gateway using Azure portal, see [he
![Diagram that describes the process for rewriting a URL with Application Gateway.](./media/rewrite-http-headers-url/url-rewrite-overview.png)
-## Rewrite actions
-Rewrite actions are used to specify the URL. Request headers or response headers that you want to rewrite and the new URL destination value. The value of a URL or a new or existing header can be set to the following types of values:
+## Understanding rewrites in Application Gateway
-* Text
-* Request header. To specify a request header, you need to use the syntax {http_req_*headerName*}
-* Response header. To specify a response header, you need to use the syntax {http_resp_*headerName*}
-* Server variable. To specify a server variable, you need to use the syntax {var_*serverVariable*}. See the list of supported server variables
-* A combination of text, a request header, a response header, and a server variable.
+A rewrite set is a collection of a routing rule association, a condition, and an action.
-## Rewrite Conditions
+* **Request routing rule association:** The rewrite configuration is associated to a source listener via its routing rule. When you use a routing rule of the type Basic, the rewrite configuration is associated with its listener and works as a global rewrite. When you use a Path-based routing rule, the rewrite configuration is defined as per the URL path map. In the latter case, it applies only to a specific path area of a site. You can apply a rewrite set to multiple routing rules but a routing rule can have only one rewrite associated with it.
+* **Rewrite Condition:** This is an optional configuration. Based on the conditions that you define, the Application Gateway will evaluate the contents of the HTTP(S) requests and responses. The subsequent "rewrite action" will occur if the HTTP(S) request or response matches this condition. If you associate more than one condition with an action, the action occurs only when all the conditions are met. In other words, it is a logical AND operation.
You can use rewrite conditions to evaluate the content of HTTP(S) requests and responses. This optional configuration enables you to perform a rewrite only when one or more conditions are met. The application gateway uses these types of variables to evaluate the content of requests and responses:
-* HTTP headers in the request
-* HTTP headers in the response
-* Application Gateway server variables
-
-You can use a condition to evaluate whether a specified variable is present, whether a specified variable matches a specific value, or whether a specified variable matches a specific pattern.
+ You can use the following types of variables in a condition:
+ * HTTP header (Request and Response)
+ * Supported [Server variables](#server-variables)
+ A condition lets you evaluate whether a specified header or variable exists by matching its value against text or a regex pattern. For advanced rewrite configurations, you can also capture the value of a header or server variable for later use under Rewrite Action. Learn more about [pattern matching and capturing](#pattern-matching-and-capturing).
-### Pattern Matching
+* **Rewrite Action:** A rewrite action lets you rewrite headers (request or response) or URL components.
-Application Gateway uses regular expressions for pattern matching in the condition. You should use Regular Expression 2 (RE2) compatible expressions when writing your conditions. If you're running an Application Gateway Web Application Firewall (WAF) with Core Rule Set 3.1 or earlier, you might have issues when using [Perl Compatible Regular Expressions (PCRE)](https://www.pcre.org/). Issues can happen when using lookahead and lookbehind (negative or positive) assertions.
+ An action can have the following value types or their combinations:
+ * Text.
+ * Request header's value - To use a captured request header value, specify the syntax as `{http_req_headerName}`.
+ * Response header's value - To use a captured response header value from the preceding Condition, specify the syntax as `{http_resp_headerName}`. You can use `{capt_header_value_matcher}` when the value is captured from the Action Set's "Set-Cookie" response header. Learn more about [capture under Action set](#syntax-for-capturing).
+ * Server variable - To use a server variable, specify the syntax as `{var_serverVariable}`. [List of supported Server variables](#server-variables).
+ When using an Action to rewrite a URL, the following operations are supported:
+ * URL path: The new value to be set as the path.
+ * URL Query String: The new value to which the query string must be rewritten.
+ * Re-evaluate path map: Specify if the URL path map must be re-evaluated after rewrite. If kept unchecked, the original URL path will be used to match the path-pattern in the URL path map. If set to true, the URL path map will be re-evaluated to check the match with the rewritten path. Enabling this switch helps in routing the request to a different backend pool post rewrite.
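As an illustration of a URL rewrite action, the following Azure CLI sketch sets a new path and query string on an existing rewrite rule set. The resource group, gateway name, rewrite set name, rule name, and rewritten values are placeholders; verify the parameter names against your CLI version.

```bash
# Rewrite the URL path and query string on an existing rewrite rule set (placeholder names)
az network application-gateway rewrite-rule create \
  --resource-group <resource-group> \
  --gateway-name <app-gateway-name> \
  --rule-set-name <rewrite-set-name> \
  --name rewrite-url \
  --sequence 100 \
  --modified-path "/updates" \
  --modified-query-string "source=rewrite"
```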
-### Capturing
-To capture a substring for later use, put parentheses around the subpattern that matches it in the condition regex definition. The first pair of parentheses stores its substring in 1, the second pair in 2, and so on. You may use as many parentheses as you like; Perl just keeps defining more numbered variables for you to represent these captured strings. Some examples from [ref](https://docstore.mik.ua/orelly/perl/prog3/ch05_07.htm):
+## Pattern matching and capturing
-* (\d)(\d) # Match two digits, capturing them into groups 1 and 2
+Pattern matching and capturing are supported under Condition and Action (under Action, it's supported only for a specific header).
-* (\d+) # Match one or more digits, capturing them all into group 1
+### Pattern matching
+Application Gateway uses regular expressions for pattern matching. You should use Regular Expression 2 (RE2) compatible expressions when writing your pattern matching syntax.
-* (\d)+ # Match a digit one or more times, capturing the last into group 1
+You can use pattern matching under both Condition and Action.
+* **Condition**: This is used to match the values for a Header or Server Variable. To match a pattern under "Conditions" use the "pattern" property.
+* **Action**: Pattern matching under Action Set is only available for the response header "Set-Cookie". To match a pattern for Set-Cookie under an action, use the "HeaderValueMatcher" property. If captured, its value can be used as {capt_header_value_matcher}. As there can be multiple Set-Cookie headers, pattern matching here lets you target a specific cookie. Example: for a certain version of user-agent, you want to rewrite the Set-Cookie response header for "cookie2" with max-age=3600 (one hour). In this case, you can use:
+ * Condition - Type: Request header, Header name: user-agent, Pattern to match: *2.0
+ * Action - Rewrite type: Response header, Action type: Set, Header name: Set-Cookie, Header Value Matcher: cookie2=(.*), Header value: cookie2={capt_header_value_matcher_1};Max-Age=3600
> [!Note]
-> Use of */* to prefix and suffix the pattern shouldn't be specified in the pattern to match value. For example, (\d)(\d) matches two digits. /(\d)(\d)/ won't match two digits.
+> If you are running an Application Gateway Web Application Firewall (WAF) with Core Rule Set 3.1 or earlier, you may run into issues when using Perl Compatible Regular Expressions (PCRE) while doing lookahead and lookbehind (negative or positive) assertions.
-Once captured, you can reference them in the action set using the following format:
+### Syntax for capturing
+Patterns can also be used to capture a sub-string for later use. Put parentheses around a sub-pattern in the regex definition. The first pair of parentheses stores its substring in 1, the second pair in 2, and so on. You may use as many parentheses as you like; Perl just keeps defining more numbered variables for you to represent these captured strings. You can find some examples in this [Perl programming guidance](https://docstore.mik.ua/orelly/perl/prog3/ch05_07.htm).
+* (\d)(\d) # Match two digits, capturing them into groups 1 and 2
+* (\d+) # Match one or more digits, capturing them all into group 1
+* (\d)+ # Match a digit one or more times, capturing the last into group 1
+
+Once captured, you can use them in the Action Set value using the following format:
* For a request header capture, you must use {http_req_headerName_groupNumber}. For example, {http_req_User-Agent_1} or {http_req_User-Agent_2}
-* For a response header capture, you must use {http_resp_headerName_groupNumber}. For example, {http_resp_Location_1} or {http_resp_Location_2}
+* For a response header capture, you must use {http_resp_headerName_groupNumber}. For example, {http_resp_Location_1} or {http_resp_Location_2}. Whereas for a response header Set-Cookie captured through "HeaderValueMatcher" property, you must use {capt_header_value_matcher_groupNumber}. For example, {capt_header_value_matcher_1} or {capt_header_value_matcher_2}.
* For a server variable, you must use {var_serverVariableName_groupNumber}. For example, {var_uri_path_1} or {var_uri_path_2} > [!Note]
-> The case of the condition variable needs to match case of the capture variable. For example, if the condition variable is defined as user-agent, the capture variable must be for user-agent ({http_req_user-agent_2}).
-
-If you want to use the whole value, you shouldn't mention the number. Simply use the format {http_req_headerName}, etc. without the groupNumber.
+> * Use of / to prefix and suffix the pattern should not be specified in the pattern to match value. For example, (\d)(\d) will match two digits. /(\d)(\d)/ won't match two digits.
+> * The case of the condition variable needs to match the case of the capture variable. For example, if the condition variable is User-Agent, the capture variable must be for User-Agent (that is, {http_req_User-Agent_2}). If the condition variable is defined as user-agent, the capture variable must be for user-agent (that is, {http_req_user-agent_2}).
+> * If you want to use the whole value, you should not mention the number. Simply use the format {http_req_headerName}, etc. without the groupNumber.
## Server variables
Application Gateway supports the following server variables for mutual authentic
| client_certificate_subject| The "subject DN" string of the client certificate for an established SSL connection. | | client_certificate_verification| The result of the client certificate verification: *SUCCESS*, *FAILED:\<reason\>*, or *NONE* if a certificate was not present. |
-## Rewrite configuration
-
-To configure a rewrite rule, you need to create a rewrite rule set and add the rewrite rule configuration in it.
-
-A rewrite rule set contains:
-
-* **Request routing rule association:** The rewrite configuration is associated to the source listener via the routing rule. When you use a basic routing rule, the rewrite configuration is associated with a source listener and is a global header rewrite. When you use a path-based routing rule, the rewrite configuration is defined on the URL path map. In that case, it applies only to the specific path area of a site. You can create multiple rewrite sets and apply each rewrite set to multiple listeners. But you can apply only one rewrite set to a specific listener.
-* **Rewrite Condition**: This configuration is optional. Rewrite conditions evaluate the content of the HTTP(S) requests and responses. The rewrite action occurs if the HTTP(S) request or response matches the rewrite condition. If you associate more than one condition with an action, the action occurs only when all the conditions are met. In other words, the operation is a logical AND operation.
-
-* **Rewrite type**: There are 3 types of rewrites available:
- * Rewriting request headers
- * Rewriting response headers
- * Rewriting URL components
- * **URL path**: The value to which the path is to be rewritten.
- * **URL Query String**: The value to which the query string is to be rewritten.
- * **Reevaluate path map**: Used to determine whether the URL path map is to be reevaluated or not. If kept unchecked, the original URL path is used to match the path-pattern in the URL path map. If set to true, the URL path map is reevaluated to check the match with the rewritten path. Enabling this switch helps in routing the request to a different backend pool post rewrite.
-
-## Rewrite configuration common pitfalls
-
-* Enabling 'Reevaluate path map' isn't allowed for basic request routing rules. This is to prevent infinite evaluation loop for a basic routing rule.
-
-* There needs to be at least 1 conditional rewrite rule or 1 rewrite rule which doesn't have 'Reevaluate path map' enabled for path-based routing rules to prevent infinite evaluation loop for a path-based routing rule.
-
-* Incoming requests would be terminated with a 500 error code in case a loop is created dynamically based on client inputs. The Application Gateway continues to serve other requests without any degradation in such a scenario.
-
-### Using URL rewrite or Host header rewrite with Web Application Firewall (WAF_v2 SKU)
-
-When you configure URL rewrite or host header rewrite, the WAF evaluation happens after the modification to the request header or URL parameters (post-rewrite). And when you remove the URL rewrite or host header rewrite configuration on your Application Gateway, the WAF evaluation is done before the header rewrite (pre-rewrite). This order ensures that WAF rules are applied to the final request that would be received by your backend pool.
-
-For example, say you have the following header rewrite rule for the header `"Accept" : "text/html"` - if the value of header `"Accept"` is equal to `"text/html"`, then rewrite the value to `"image/png"`.
-
-Here, with only header rewrite configured, the WAF evaluation is done on `"Accept" : "text/html"`. But when you configure URL rewrite or host header rewrite, then the WAF evaluation is done on `"Accept" : "image/png"`.
-
-### Common scenarios for header rewrite
+## Common scenarios for header rewrite
#### Remove port information from the X-Forwarded-For header
Application Gateway inserts an X-Forwarded-For header into all requests before i
![A screenshot showing a remove port action.](./media/rewrite-http-headers-url/remove-port.png)
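One way to sketch this with the Azure CLI is a request-header action that overwrites the header with the `add_x_forwarded_for_proxy` server variable, which holds the X-Forwarded-For client IP without the port. The resource group, gateway name, rewrite set name, and rule name below are placeholders.

```bash
# Overwrite X-Forwarded-For with the client IP (no port) using a server variable
az network application-gateway rewrite-rule create \
  --resource-group <resource-group> \
  --gateway-name <app-gateway-name> \
  --rule-set-name <rewrite-set-name> \
  --name strip-xff-port \
  --sequence 100 \
  --request-headers "X-Forwarded-For={var_add_x_forwarded_for_proxy}"
```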
-#### Modify a redirection URL
+### Modify a redirection URL
Modification of a redirect URL can be useful under certain circumstances. For example: clients were originally redirected to a path like "/blog" but now should be sent to "/updates" due to a change in content structure.
Here are the steps for replacing the hostname:
![A screenshot of the modify location header action.](./media/rewrite-http-headers-url/app-service-redirection.png)
-#### Implement security HTTP headers to prevent vulnerabilities
+### Implement security HTTP headers to prevent vulnerabilities
You can fix several security vulnerabilities by implementing necessary headers in the application response. These security headers include X-XSS-Protection, Strict-Transport-Security, and Content-Security-Policy. You can use Application Gateway to set these headers for all responses.
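For example, a single rewrite rule can set several of these response headers at once. The following Azure CLI sketch uses placeholder resource and rule names and illustrative header values; adjust the policies to your application.

```bash
# Add common security headers to responses routed through the gateway (illustrative values)
az network application-gateway rewrite-rule create \
  --resource-group <resource-group> \
  --gateway-name <app-gateway-name> \
  --rule-set-name <rewrite-set-name> \
  --name add-security-headers \
  --sequence 100 \
  --response-headers "Strict-Transport-Security=max-age=31536000; includeSubDomains" \
    "X-XSS-Protection=1; mode=block" \
    "Content-Security-Policy=default-src 'self'"
```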
You might want to remove headers that reveal sensitive information from an HTTP
It isn't possible to create a rewrite rule to delete the host header. If you attempt to create a rewrite rule with the action type set to delete and the header set to host, it results in an error.
-#### Check for the presence of a header
+### Check for the presence of a header
You can evaluate an HTTP request or response header for the presence of a header or server variable. This evaluation is useful when you want to perform a header rewrite only when a certain header is present. ![A screenshow showing the check presence of a header action.](./media/rewrite-http-headers-url/check-presence.png)
-### Common scenarios for URL rewrite
+## Common scenarios for URL rewrite
-#### Parameter based path selection
+### Parameter based path selection
To accomplish scenarios where you want to choose the backend pool based on the value of a header, part of the URL, or query string in the request, you can use a combination of URL Rewrite capability and path-based routing.
Thus, the rewrite set allows users to check for a specific parameter and assign
For a use case example using query strings, see [Route traffic using parameter based path selection in portal](parameter-based-path-selection-portal.md). -
-#### Rewrite query string parameters based on the URL
+### Rewrite query string parameters based on the URL
Consider a scenario of a shopping website where the user visible link should be simple and legible, but the backend server needs the query string parameters to show the right content.
In that case, Application Gateway can capture parameters from the URL and add qu
For a step-by-step guide to achieve the scenario described above, see [Rewrite URL with Application Gateway using Azure portal](rewrite-url-portal.md)
-### URL rewrite vs URL redirect
+## Rewrite configuration common pitfalls
+
+* Enabling 'Reevaluate path map' isn't allowed for basic request routing rules. This is to prevent infinite evaluation loop for a basic routing rule.
+
+* For path-based routing rules, there must be at least one conditional rewrite rule or one rewrite rule that doesn't have 'Reevaluate path map' enabled, to prevent an infinite evaluation loop.
+
+* Incoming requests are terminated with a 500 error code if a loop is created dynamically based on client inputs. The Application Gateway continues to serve other requests without any degradation in such a scenario.
+
+### Using URL rewrite or Host header rewrite with Web Application Firewall (WAF_v2 SKU)
+
+When you configure URL rewrite or host header rewrite, the WAF evaluation happens after the modification to the request header or URL parameters (post-rewrite). And when you remove the URL rewrite or host header rewrite configuration on your Application Gateway, the WAF evaluation is done before the header rewrite (pre-rewrite). This order ensures that WAF rules are applied to the final request that would be received by your backend pool.
+
+For example, say you have the following header rewrite rule for the header `"Accept" : "text/html"` - if the value of header `"Accept"` is equal to `"text/html"`, then rewrite the value to `"image/png"`.
+
+Here, with only header rewrite configured, the WAF evaluation is done on `"Accept" : "text/html"`. But when you configure URL rewrite or host header rewrite, then the WAF evaluation is done on `"Accept" : "image/png"`.
+
+## URL rewrite vs URL redirect
For a URL rewrite, Application Gateway rewrites the URL before the request is sent to the backend. This won't change what users see in the browser because the changes are hidden from the user.
For a URL redirect, Application Gateway sends a redirect response to the client
## Limitations -- If a response has more than one header with the same name, rewriting the value of one of those headers results in dropping the other headers in the response. This can happen with Set-Cookie header since you can have more than one Set-Cookie header in a response. One such scenario is when you're using an app service with an application gateway and have configured cookie-based session affinity on the application gateway. In this case the response contains two Set-Cookie headers. For example: one used by the app service, `Set-Cookie: ARRAffinity=ba127f1caf6ac822b2347cc18bba0364d699ca1ad44d20e0ec01ea80cda2a735;Path=/;HttpOnly;Domain=sitename.azurewebsites.net` and another for application gateway affinity, `Set-Cookie: ApplicationGatewayAffinity=c1a2bd51lfd396387f96bl9cc3d2c516; Path=/`. Rewriting one of the Set-Cookie headers in this scenario can result in removing the other Set-Cookie header from the response. - Rewrites aren't supported when the application gateway is configured to redirect the requests or to show a custom error page.-- Request header names can contain alphanumeric characters and hyphens. Header names containing other characters are discarded when a request is sent to the backend target.
+- Request header names can contain alphanumeric characters and hyphens. Header names containing other characters will be discarded when a request is sent to the backend target.
- Response header names can contain any alphanumeric characters and specific symbols as defined in [RFC 7230](https://tools.ietf.org/html/rfc7230#page-27). - Connection and upgrade headers cannot be rewritten - Rewrites aren't supported for 4xx and 5xx responses generated directly from Application Gateway
application-gateway Rewrite Url Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/rewrite-url-portal.md
Previously updated : 4/05/2021 Last updated : 10/22/2024
In the below example whenever the request URL contains */article*, the URL path
f. Enter a regular expression pattern. In this example, we'll use the pattern `.*article/(.*)/(.*)`
- ( ) is used to capture the substring for later use in composing the expression for rewriting the URL path. For more information, see [here](rewrite-http-headers-url.md#capturing).
+ ( ) is used to capture the substring for later use in composing the expression for rewriting the URL path. For more information, see [Pattern matching and capturing](rewrite-http-headers-url.md#pattern-matching-and-capturing).
g. Select **OK**.
azure-app-configuration Use Key Vault References Spring Boot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/use-key-vault-references-spring-boot.md
To add a secret to the vault, you need to take just a few additional steps. In t
```json {
- "clientId": "00000000-0000-0000-0000-000000000000",
- "clientSecret": "00000000-0000-0000-0000-000000000000",
- "subscriptionId": "00000000-0000-0000-0000-000000000000",
- "tenantId": "00000000-0000-0000-0000-000000000000",
+ "clientId": "00001111-aaaa-2222-bbbb-3333cccc4444",
+ "clientSecret": "aaaaaaaa-0b0b-1c1c-2d2d-333333333333",
+ "subscriptionId": "aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e",
+ "tenantId": "aaaabbbb-0000-cccc-1111-dddd2222eeee",
"activeDirectoryEndpointUrl": "https://login.microsoftonline.com", "resourceManagerEndpointUrl": "https://management.azure.com/", "sqlManagementEndpointUrl": "https://management.core.windows.net:8443/",
azure-functions Flex Consumption Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/flex-consumption-plan.md
Keep these other considerations in mind when using Flex Consumption plan during
+ **Diagnostic settings**: Diagnostic settings are not currently supported. + **Certificates**: Loading certificates with the WEBSITE_LOAD_CERTIFICATES app setting is currently not supported. + **Key Vault References**: Key Vault references in app settings do not work when Key Vault is network access restricted, even if the function app has Virtual Network integration. The current workaround is to directly reference the Key Vault in code and read the required secrets.++ **Azure Files file share mount**: [Mounting an Azure Files file share](./scripts/functions-cli-mount-files-storage-linux.md) does not work when the function app has Virtual Network integration. ## Related articles
azure-government Azure Secure Isolation Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/azure-secure-isolation-guidance.md
You can also use the [Azure Key Vault solution in Azure Monitor](/azure/key-vaul
#### Vault **[Vaults](/azure/key-vault/general/overview)** provide a multi-tenant, low-cost, easy to deploy, zone-resilient (where available), and highly available key management solution suitable for most common cloud application scenarios. Vaults can store and safeguard [secrets, keys, and certificates](/azure/key-vault/general/about-keys-secrets-certificates). They can be either software-protected (standard tier) or HSM-protected (premium tier). For a comparison between the standard and premium tiers, see the [Azure Key Vault pricing page](https://azure.microsoft.com/pricing/details/key-vault/). Software-protected secrets, keys, and certificates are safeguarded by Azure, using industry-standard algorithms and key lengths. If you require extra assurances, you can choose to safeguard your secrets, keys, and certificates in vaults protected by multi-tenant HSMs. The corresponding HSMs are validated according to the [FIPS 140 standard](/azure/compliance/offerings/offering-fips-140-2), and have an overall Security Level 2 rating, which includes requirements for physical tamper evidence and role-based authentication.
-Vaults enable support for [customer-managed keys](../security/fundamentals/encryption-models.md) (CMK) where you can control your own keys in HSMs, and use them to encrypt data at rest for [many Azure services](../security/fundamentals/encryption-models.md#supporting-services). As mentioned previously, you can [import or generate encryption keys](/azure/key-vault/keys/hsm-protected-keys) in HSMs ensuring that keys never leave the HSM boundary to support *bring your own key (BYOK)* scenarios.
+Vaults enable support for [customer-managed keys](../security/fundamentals/encryption-models.md) (CMK) where you can control your own keys in HSMs, and use them to encrypt data at rest for [many Azure services](../security/fundamentals/encryption-models.md#services-supporting-customer-managed-keys-cmks). As mentioned previously, you can [import or generate encryption keys](/azure/key-vault/keys/hsm-protected-keys) in HSMs ensuring that keys never leave the HSM boundary to support *bring your own key (BYOK)* scenarios.
Key Vault can handle requesting and renewing certificates in vaults, including Transport Layer Security (TLS) certificates, enabling you to enroll and automatically renew certificates from supported public Certificate Authorities. Key Vault certificates support provides for the management of your X.509 certificates, which are built on top of keys and provide an automated renewal feature. Certificate owner can [create a certificate](/azure/key-vault/certificates/create-certificate) through Azure Key Vault or by importing an existing certificate. Both self-signed and Certificate Authority generated certificates are supported. Moreover, the Key Vault certificate owner can implement secure storage and management of X.509 certificates without interaction with private keys.
When a managed HSM is created, the requestor also provides a list of data plane
> [!IMPORTANT] > Unlike with key vaults, granting your users management plane access to a managed HSM doesn't grant them any access to data plane to access keys or data plane role assignments managed HSM local RBAC. This isolation is implemented by design to prevent inadvertent expansion of privileges affecting access to keys stored in managed HSMs.
-As mentioned previously, managed HSM supports [importing keys generated](/azure/key-vault/managed-hsm/hsm-protected-keys-byok) in your on-premises HSMs, ensuring the keys never leave the HSM protection boundary, also known as *bring your own key (BYOK)* scenario. Managed HSM supports integration with Azure services such as [Azure Storage](../storage/common/customer-managed-keys-overview.md), [Azure SQL Database](/azure/azure-sql/database/transparent-data-encryption-byok-overview), [Azure Information Protection](/azure/information-protection/byok-price-restrictions), and others. For a more complete list of Azure services that work with Managed HSM, see [Data encryption models](../security/fundamentals/encryption-models.md#supporting-services).
+As mentioned previously, managed HSM supports [importing keys generated](/azure/key-vault/managed-hsm/hsm-protected-keys-byok) in your on-premises HSMs, ensuring the keys never leave the HSM protection boundary, also known as *bring your own key (BYOK)* scenario. Managed HSM supports integration with Azure services such as [Azure Storage](../storage/common/customer-managed-keys-overview.md), [Azure SQL Database](/azure/azure-sql/database/transparent-data-encryption-byok-overview), [Azure Information Protection](/azure/information-protection/byok-price-restrictions), and others. For a more complete list of Azure services that work with Managed HSM, see [Data encryption models](../security/fundamentals/encryption-models.md#services-supporting-customer-managed-keys-cmks).
Managed HSM enables you to use the established Azure Key Vault API and management interfaces. You can use the same application development and deployment patterns for all your applications irrespective of the key management solution: multi-tenant vault or single-tenant managed HSM.
azure-netapp-files Data Plane Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/data-plane-security.md
Previously updated : 09/30/2024 Last updated : 10/25/2024
For more information on data encryption at rest, see [Understand data encryption
The data plane manages the encryption keys used to encrypt and decrypt data. These keys can be either platform-managed or customer-managed: - **Platform-managed keys** are automatically managed by Azure, ensuring secure storage and rotation of keys.-- **Customer-managed keys** are stored in Azure Key Vault, allowing you to manage the lifecycle, usage permissions, and auditing of your encryption keys.
+- [**Customer-managed keys**](configure-customer-managed-keys.md) are stored in Azure Key Vault, allowing you to manage the lifecycle, usage permissions, and auditing of your encryption keys.
+- [**Customer-managed keys with managed Hardware Security Module (HSM)**](configure-customer-managed-keys-hardware.md) is an extension of the customer-managed keys feature for Azure NetApp Files volume encryption. This HSM extension allows you to store your encryption keys in a more secure FIPS 140-2 Level 3 HSM instead of the FIPS 140-2 Level 1 or Level 2 service used by Azure Key Vault (AKV).
-For more information about Azure NetApp Files key management, see [How are encryption keys managed](faq-security.md#how-are-encryption-keys-managed) or [Configure customer-managed keys](configure-customer-managed-keys.md).
+For more information about Azure NetApp Files key management, see [How are encryption keys managed](faq-security.md#how-are-encryption-keys-managed), [Configure customer-managed keys](configure-customer-managed-keys.md), or [customer-managed keys with managed HSM](configure-customer-managed-keys-hardware.md).
## Lightweight directory access protocol (LDAP) encryption
azure-netapp-files Faq Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/faq-security.md
Previously updated : 08/07/2024 Last updated : 10/24/2024 # Security FAQs for Azure NetApp Files
Azure NetApp Files cross-region and cross-zone replication uses TLS 1.2 AES-256
By default key management for Azure NetApp Files is handled by the service, using [platform-managed keys](../security/fundamentals/key-management.md). A unique XTS-AES-256 data encryption key is generated for each volume. An encryption key hierarchy is used to encrypt and protect all volume keys. These encryption keys are never displayed or reported in an unencrypted format. When you delete a volume, Azure NetApp Files immediately deletes the volume's encryption keys.
-Alternatively, [customer-managed keys for Azure NetApp Files volume encryption](configure-customer-managed-keys.md) can be used where keys are stored in [Azure Key Vault](/azure/key-vault/general/basic-concepts). With customer-managed keys, you can fully manage the relationship between a key's life cycle, key usage permissions, and auditing operations on keys. The feature is generally available (GA) in [supported regions](configure-customer-managed-keys.md#supported-regions).
+Alternatively, [customer-managed keys for Azure NetApp Files volume encryption](configure-customer-managed-keys.md) can be used where keys are stored in [Azure Key Vault](/azure/key-vault/general/basic-concepts). With customer-managed keys, you can fully manage the relationship between a key's life cycle, key usage permissions, and auditing operations on keys. The feature is generally available (GA) in [supported regions](configure-customer-managed-keys.md#supported-regions). [Azure NetApp Files volume encryption with customer-managed keys with the managed Hardware Security Module](configure-customer-managed-keys-hardware.md) is an extension to this feature, allowing you to store your encryption keys in a more secure FIPS 140-2 Level 3 HSM instead of the FIPS 140-2 Level 1 or Level 2 service used by Azure Key Vault.
Azure NetApp Files supports the ability to move existing volumes using platform-managed keys to customer-managed keys. Once you complete the transition, you cannot revert back to platform-managed keys. For additional information, see [Transition an Azure NetApp Files volume to customer-managed keys](configure-customer-managed-keys.md#transition).
-<!-- Also, customer-managed keys using Azure Dedicated HSM is supported on a controlled basis. Support is currently available in the East US, South Central US, West US 2, and US Gov Virginia regions. You can request access [with the Azure NetApp Files feedback form](https://aka.ms/ANFFeedback). As capacity becomes available, requests will be approved. -->
- ## Can I configure the NFS export policy rules to control access to the Azure NetApp Files service mount target? Yes, you can configure up to five rules in a single NFS export policy.
azure-netapp-files Performance Considerations Cool Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/performance-considerations-cool-access.md
Previously updated : 09/05/2024 Last updated : 10/24/2024 # Performance considerations for Azure NetApp Files storage with cool access
When the default cool access retrieval policy is selected, sequential I/O reads
In a recent test performed using Standard storage with cool access for Azure NetApp Files, the following results were obtained.
+>[!NOTE]
+>All results published are for reference purposes only. Results are not guaranteed as performance in production workloads can vary due to numerous factors.
+ ## 100% sequential reads on hot/cool tier (single job) In the following scenario, a single job on one D32_V5 virtual machine (VM) was used on a 50-TiB Azure NetApp Files volume using the Ultra performance tier. Different block sizes were used to test performance on hot and cool tiers.
This graph shows a side-by-side comparison of cool and hot tier performance with
:::image type="content" source="./media/performance-considerations-cool-access/throughput-graph.png" alt-text="Chart of throughput at varying `iodepths` with one job." lightbox="./media/performance-considerations-cool-access/throughput-graph.png":::
-## 100% sequential reads on hot/cool tier (multiple jobs)
-
-For this scenario, the test was conducted with 16 job using a 256=KB block size on a single D32_V5 VM on a 50-TiB Azure NetApp Files volume using the Ultra performance tier.
-
->[!NOTE]
->The maximum for the Ultra service level is 128 MiB/s per tebibyte of allocated capacity. An Azure NetApp Files regular volume can manage a throughput of up to approximately 5,000 MiB/s.
-
-It's possible to push for more throughput for the hot and cool tiers using a single VM when running multiple jobs. The performance difference between hot and cool tiers is less drastic when running multiple jobs. The following graph displays results for hot and cool tiers when running 16 jobs with 16 threads at a 256-KB block size.
---- Throughput improved by nearly three times for the hot tier.-- Throughput improved by 6.5 times for the cool tier.-- The performance difference for the hot and cool tier decreased from 2.9x to just 1.3x.-
-## Maximum viable job scale for cool tier ΓÇô 100% sequential reads
-
-The cool tier has a limit of how many jobs can be pushed to a single Azure NetApp Files volume before latency starts to spike to levels that are generally unusable for most workloads.
-
-In the case of cool tiering, that limit is around 16 jobs with a queue depth of no more than 15. The following graph shows that latency spikes from approximately 23 milliseconds (ms) with 16 jobs/15 queue depth with slightly less throughput than with a queue depth of 14. Latency spikes as high as about 63 ms when pushing 32 jobs and throughput drops by roughly 14%.
-- ## What causes latency in hot and cool tiers? Latency in the hot tier is a factor of the storage system itself, where system resources are exhausted when more I/O is sent to the service than can be handled at any given time. As a result, operations need to queue until previously sent operations can be complete. Latency in the cool tier is generally seen with the cloud retrieval operations: either requests over the network for I/O to the object store (sequential workloads) or cool block rehydration into the hot tier (random workloads).
-## Mixed workload: sequential and random
-
-A mixed workload contains both random and sequential I/O patterns. In mixed workloads, performance profiles for hot and cool tiers can have drastically different results compared to a purely sequential I/O workload but are very similar to a workload that's 100% random.
-
-The following graph shows the results using 16 jobs on a single VM with a queue depth of one and varying random/sequential ratios.
--
-The impact on performance when mixing workloads can also be observed when looking at the latency as the workload mix changes. The graphs show how latency impact for cool and hot tiers as the workload mix goes from 100% sequential to 100% random. Latency starts to spike for the cool tier at around a 60/40 sequential/random mix (greater than 12 ms), while latency remains the same (under 2 ms) for the hot tier.
## Results summary - When a workload is 100% sequential, the cool tier's throughput decreases by roughly 47% versus the hot tier (3330 MiB/s compared to 1742 MiB/s). - When a workload is 100% random, the cool tier's throughput decreases by roughly 88% versus the hot tier (2,479 MiB/s compared to 280 MiB/s). - The performance drop for hot tier when doing 100% sequential (3,330 MiB/s) and 100% random (2,479 MiB/s) workloads was roughly 25%. The performance drop for the cool tier when doing 100% sequential (1,742 MiB/s) and 100% random (280 MiB/s) workloads was roughly 88%.-- Hot tier throughput maintains about 2,300 MiB/s regardless of the workload mix. - When a workload contains any percentage of random I/O, overall throughput for the cool tier is closer to 100% random than 100% sequential. - Reads from cool tier dropped by about 50% when moving from 100% sequential to an 80/20 sequential/random mix. - Sequential I/O can take advantage of a `readahead` cache in Azure NetApp Files that random I/O doesn't. This benefit to sequential I/O helps reduce the overall performance differences between the hot and cool tiers.
-## General recommendations
-
-To avoid worst-case scenario performance with cool access in Azure NetApp Files, follow these recommendations:
+## Considerations and recommendations
- If your workload frequently changes access patterns in an unpredictable manner, cool access may not be ideal due to the performance differences between hot and cool tiers. - If your workload contains any percentage of random I/O, performance expectations when accessing data on the cool tier should be adjusted accordingly. - Configure the coolness window and cool access retrieval settings to match your workload patterns and to minimize the amount of cool tier retrieval.
+- Performance from cool access can vary depending on the dataset and system load where the application is running. It's recommended to conduct relevant tests with your dataset to understand and account for performance variability from cool access.
## Next steps * [Azure NetApp Files storage with cool access](cool-access-introduction.md)
azure-netapp-files Performance Large Volumes Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/performance-large-volumes-linux.md
na Previously updated : 10/16/2024 Last updated : 10/25/2024 # Azure NetApp Files large volume performance benchmarks for Linux
This article describes the tested performance capabilities of a single [Azure Ne
The Ultra service level was used in these tests.
-* Sequential writes: 100% sequential writes maxed out at 8,500 MiB/second in these benchmarks. (A single large volumeΓÇÖs maximum throughput is capped at 12,800 MiB/second by the service.)
-* Sequential reads: 100% sequential reads maxed out at 10,000 MiB/second in these benchmarks. (At the time of these benchmarks, this limit was the maximum allowed throughput. The limit has increased to 12,800 MiB/second.)
+* Sequential writes: 100% sequential writes maxed out at ~8,500 MiB/second in these benchmarks. (A single large volume's maximum throughput is capped at 12,800 MiB/second by the service, so more potential throughput is possible.)
+* Sequential reads: 100% sequential reads maxed out at ~12,761 MiB/second in these benchmarks. (A single large volume's throughput is capped at 12,800 MiB/second. This result is near the maximum achievable throughput at this time.)
* Random I/O: The same single large volume delivers over 700,000 operations per second.
Tests observed performance thresholds of a single large volume on scale-out and
### 256-KiB sequential workloads (MiB/s)
-The graph represents a 256 KiB sequential workload and a 1 TiB working set. It shows that a single Azure NetApp Files large volume can handle between approximately 8,518 MiB/s pure sequential writes and 9,970 MiB/s pure sequential reads.
+The graph represents a 256-KiB sequential workload using 12 virtual machines reading and writing to a single large volume with a 1-TiB working set. The graph shows that a single Azure NetApp Files large volume can handle between approximately 8,518 MiB/s pure sequential writes and 12,761 MiB/s pure sequential reads.
:::image type="content" source="./media/performance-large-volumes-linux/256-kib-sequential-reads.png" alt-text="Bar chart of a 256-KiB sequential workload on a large volume." lightbox="./media/performance-large-volumes-linux/256-kib-sequential-reads.png":::
The following graphs compare the advantages of `nconnect` with an NFS-mounted vo
### Linux read throughput
-The following graphs show 256-KiB sequential reads of ~10,000MiB/s withΓÇ»`nconnect`, which is roughly ten times the throughput achieved without `nconnect`.
+The following graphs show 256-KiB sequential reads of approximately 10,000 MiB/s with `nconnect`, which is roughly ten times the throughput achieved without `nconnect`.
-Note that 10,000 MiB/s bandwidth is offered by a large volume in the Ultra service level.
+Note that 10,000 MiB/s is roughly the line rate of the 100 Gbps network interface card attached to the E104id_v5.
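As a rough back-of-the-envelope check (an editorial illustration, not part of the original benchmark), converting the NIC's nominal 100 Gbps bandwidth to MiB/s shows why ~10,000 MiB/s sits near line rate once real-world overhead is accounted for:

```js
// Convert a nominal 100 Gbps link speed into MiB/s (1 MiB = 2^20 bytes).
const gigabitsPerSecond = 100;
const bytesPerSecond = (gigabitsPerSecond * 1e9) / 8;     // 12.5 GB/s
const mebibytesPerSecond = bytesPerSecond / (1024 ** 2);  // theoretical maximum

console.log(Math.round(mebibytesPerSecond)); // 11921; overhead brings the achievable figure
                                             // closer to the ~10,000 MiB/s observed here
```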
:::image type="content" source="./media/performance-large-volumes-linux/throughput-comparison-nconnect.png" alt-text="Bar chart comparison of read throughput with and without nconnect." lightbox="./media/performance-large-volumes-linux/throughput-comparison-nconnect.png":::
azure-signalr Howto Enable Geo Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/howto-enable-geo-replication.md
Specifically, if your application typically broadcasts to larger groups (size >1
To ensure effective failover management, it is recommended to set each replica's unit size to handle all traffic. Alternatively, you could enable [autoscaling](signalr-howto-scale-autoscale.md) to manage this. For more performance evaluation, refer to [Performance](signalr-concept-performance.md).+
+## Non-Inherited and Inherited Configurations
+Replicas inherit most configurations from the primary resource; however, some settings must be configured directly on the replicas. Below is the list of those configurations:
+
+1. **SKU**: Each replica has its own SKU name and unit size. The autoscaling rules for replicas must be configured separately based on their individual metrics.
+2. **Shared private endpoints**: While shared private endpoints are automatically replicated to replicas, separate approvals are required on target private link resources. To add or remove shared private endpoints, manage them on the primary resource. **Do not** enable the replica until its shared private endpoint has been approved.
+3. **Log Destination Settings**. If not configured on the replicas, only logs from the primary resource will be transferred.
+4. **Alerts**.
+
+All other configurations are inherited from the primary resource. For example, access keys, identity, application firewall, custom domains, private endpoints, and access control.
azure-web-pubsub Howto Enable Geo Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-enable-geo-replication.md
To ensure effective failover management, it is recommended to set each replica's
For more performance evaluation, refer to [Performance](concept-performance.md).
+## Non-Inherited and Inherited Configurations
+Replicas inherit most configurations from the primary resource; however, some settings must be configured directly on the replicas. Below is the list of those configurations:
+
+1. **SKU**: Each replica has its own SKU name and unit size. The autoscaling rules for replicas must be configured separately based on their individual metrics.
+2. **Shared private endpoints**: While shared private endpoints are automatically replicated to replicas, separate approvals are required on target private link resources. To add or remove shared private endpoints, manage them on the primary resource. **Do not** enable the replica until its shared private endpoint has been approved.
+3. **Log Destination Settings**. If not configured on the replicas, only logs from the primary resource will be transferred.
+4. **Alerts**.
+
+All other configurations are inherited from the primary resource. For example, access keys, identity, application firewall, custom domains, private endpoints, and access control.
+
communication-services Call Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-automation/call-automation.md
The following features are currently available in the Azure Communication Servic
| | Stop continuous DTMF recognition | ✔️ | ✔️ | ✔️ | ✔️ | | | Send DTMF | ✔️ | ✔️ | ✔️ | ✔️ | | | Mute participant | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Start/Stop audio streaming (public preview) | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Start/Stop real-time transcription (public preview)| ✔️ | ✔️ | ✔️ | ✔️ |
| | Remove one or more endpoints from an existing call| ✔️ | ✔️ | ✔️ | ✔️ | | | Blind Transfer a 1:1 call to another endpoint | ✔️ | ✔️ | ✔️ | ✔️ | | | Blind Transfer a participant from group call to another endpoint| ✔️ | ✔️ | ✔️ | ✔️ |
Your application can perform these actions on calls that are answered or placed
**Cancel media operations** - Based on business logic, your application might need to cancel ongoing and queued media operations. Depending on the media operation canceled and the ones in queue, your application might receive a webhook event indicating that the action was canceled.
+**Start/Stop audio streaming (public preview)** - Audio streaming allows you to subscribe to real-time audio streams from an ongoing call. For more detailed guidance on how to get started with audio streaming and information about audio streaming callback events, see our [concept](audio-streaming-concept.md) and our [quickstart](../../how-tos/call-automation/audio-streaming-quickstart.md).
+
+**Start/Stop real-time transcription (public preview)** - Real-time transcription allows you to access live transcriptions for the audio of an ongoing call. For more detailed guidance on how to get started with real-time transcription and information about real-time transcription callback events, see our [concept](real-time-transcription.md) and our [quickstart](../../how-tos/call-automation/real-time-transcription-tutorial.md).
++ ### Query scenarios **List participants** - Returns a list of all the participants in a call. Recording and transcription bots are omitted from this list.
communication-services Room Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/rooms/room-concept.md
Rooms are created and managed via rooms APIs or SDKs. Use the rooms API/SDKs in
| Virtual Rooms SDKs | 2023-06-14 | Generally Available - Fully supported | | Virtual Rooms SDKs | 2023-10-30 | Public Preview - Fully supported | | Virtual Rooms SDKs | 2023-03-31 | Public Preview - retired |
-| Virtual Rooms SDKs | 2022-02-01 | Will be retired on April 30, 2024 |
+| Virtual Rooms SDKs | 2022-02-01 | Public Preview - retired |
| Virtual Rooms SDKs | 2021-04-07 | Public Preview - retired | ## Predefined participant roles and permissions in Virtual Rooms calls
communication-services Control Mid Call Media Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/call-automation/control-mid-call-media-actions.md
call_connection_client.unhold(target_participant=PhoneNumberIdentifier(TARGET_PH
''' ``` --+
+### Audio streaming (public preview)
+Audio streaming allows you to subscribe to real-time audio streams from an ongoing call. For more detailed guidance on how to get started with audio streaming and information about audio streaming callback events, see [this page](audio-streaming-quickstart.md).
+
+### Real-time transcription (public preview)
+Real-time transcription allows you to access live transcriptions for the audio of an ongoing call. For more detailed guidance on how to get started with real-time transcription and information about real-time transcription callback events, see [this page](real-time-transcription-tutorial.md).
communication-services Reactions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/calling-sdk/reactions.md
# Reactions
-In this article, you learn how to implement the reactions capability with Azure Communication Services Calling SDKs. This capability allows users in a group call or meeting to send and receive reactions with participants in Azure Communication Services and Microsoft Teams. Reactions for users in Microsoft Teams are controlled by the configuration and policy settings in Teams. Additional information is available in [Manage reactions in Teams meetings and webinars](/microsoftteams/manage-reactions-meetings) and [Meeting options in Microsoft Teams](https://support.microsoft.com/office/meeting-options-in-microsoft-teams-53261366-dbd5-45f9-aae9-a70e6354f88e)
+
+This article describes how to implement reactions for Azure Communication Services Calling SDKs. This capability enables participants in a group call or meeting to send and receive reactions with participants in Azure Communication Services and Microsoft Teams.
+
+The configuration and policy settings in Microsoft Teams control reactions for users in Teams meetings. For more information, see [Manage reactions in Teams meetings and webinars](/microsoftteams/manage-reactions-meetings) and [Meeting options in Microsoft Teams](https://support.microsoft.com/office/meeting-options-in-microsoft-teams-53261366-dbd5-45f9-aae9-a70e6354f88e).
## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - A deployed Communication Services resource. [Create a Communication Services resource](../../quickstarts/create-communication-resource.md). - A user access token to enable the calling client. For more information, see [Create and manage access tokens](../../quickstarts/identity/access-tokens.md).-- Optional: Complete the quickstart to [add voice calling to your application](../../quickstarts/voice-video-calling/getting-started-with-calling.md)-
-## Reaction in different call types
-Reactions are supported by Azure Communication SDK in these types of calls:
-
-Reactions are not supported for 1:1 call.
+- Optional: Complete the quickstart to [add voice calling to your application](../../quickstarts/voice-video-calling/getting-started-with-calling.md).
## Limits on reactions
-Reactions are pulling by batches with same interval. Current batch limitation is 20k reactions with pulling time 3 seconds.
-If the number of reactions exceeds the limit - they will be sent in second batch.
+
+The system pulls reactions in batches at regular intervals. The current batch limit is 20,000 reactions pulled every 3 seconds.
+
+If the number of reactions exceeds the limit, leftover reactions are sent in the next batch.
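For example, a burst that exceeds one batch is spread across subsequent pull intervals. The following minimal JavaScript sketch is illustrative only and simply applies the limits stated above:

```js
// Illustrative only: estimate how a burst of reactions spreads across pull intervals,
// based on the documented limits (20,000 reactions per batch, pulled every 3 seconds).
const BATCH_LIMIT = 20000;
const PULL_INTERVAL_SECONDS = 3;

function estimateDeliverySpread(totalReactions) {
  const batches = Math.ceil(totalReactions / BATCH_LIMIT);
  return { batches, approxSeconds: batches * PULL_INTERVAL_SECONDS };
}

console.log(estimateDeliverySpread(25000)); // { batches: 2, approxSeconds: 6 } - the leftover 5,000 arrive in the next batch
```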
+
+## Support
+
+The following tables define support for reactions in Azure Communication Services.
+
+Teams meeting support is based on [Teams policy](/microsoftteams/manage-reactions-meetings).
+
+### Identities and call types
+
+The following table shows support for reactions in different call and identity types.
+
+| Identities | Teams interop meeting | Room | 1:1 call | Group call | Teams interop Group Call |
+| | | | | | |
+| Communication Services user | ✔️ | ✔️ | | ✔️ | ✔️ |
+| Microsoft 365 user | ✔️ | | | ✔️ | ✔️ |
+
+### Operations
+
+The following table shows support for reactions in Calling SDK to individual identity types.
+
+| Operations | Communication Services user | Microsoft 365 user |
+| | | |
+| Send specific reactions (like, love, laugh, applause, surprised) | ✔️ | ✔️ |
+| Receive specific reactions (like, love, laugh, applause, surprised) | ✔️ | ✔️ |
+
+### SDKs
+
+The following table shows support for the reactions feature in individual Azure Communication Services SDKs.
+
+| Platforms | Web | Web UI | iOS | iOS UI | Android | Android UI | Windows |
+| | | | | | | | |
+| Is Supported | ✔️ | ✔️ | | | | | |
+ [!INCLUDE [Reactions JavaScript](./includes/reactions/reactions-web.md)] ## Next steps+ - [Learn how to manage calls](./manage-calls.md) - [Learn how to manage video](./manage-video.md)
communication-services Subscribe To Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/events/subscribe-to-events.md
ms.devlang: azurecli
# Quickstart: Subscribe to Azure Communication Services events
-In this quickstart, you learn how to subscribe to events from Azure Communication Services through the portal, Azure CLI, PowerShell and .NET SDK.
+This article explains how to subscribe to events from Azure Communication Services through the portal, Azure CLI, PowerShell, and .NET SDK.
-You can set up event subscriptions for Communication Services resources through the [Azure portal](https://portal.azure.com) or Azure CLI, PowerShell or with the Azure [Event Grid Management SDK](https://www.nuget.org/packages/Azure.ResourceManager.EventGrid/).
+You can set up event subscriptions for Communication Services resources through the [Azure portal](https://portal.azure.com), Azure CLI, PowerShell, or with the Azure [Event Grid Management SDK](https://www.nuget.org/packages/Azure.ResourceManager.EventGrid/).
-For this Quickstart, we walk through the process of setting up webhook as a subscriber for SMS events from Azure Communication Services. For a full list of events, see this [page](/azure/event-grid/event-schema-communication-services).
+This quickstart describes the process of setting up a webhook as a subscriber for SMS events from Azure Communication Services. For a full list of events, see [Azure Communication Services as an Azure Event Grid source](/azure/event-grid/event-schema-communication-services).
::: zone pivot="platform-azp" [!INCLUDE [Azure portal](./includes/create-event-subscription-azp.md)]
communication-services Diagnostic Options Tag https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/voice-video-calling/diagnostic-options-tag.md
+
+ Title: Tutorial on how to attach custom tags to your client telemetry
+
+description: Assign a custom attribute tag to participant telemetry using the calling SDK.
++++ Last updated : 10/24/2024+++++
+# Tutorial on adding custom tags to your client telemetry
+This tutorial shows you how to add a custom data attribute, called the **Diagnostic Option** tag, to the telemetry data that your WebJS client sends to Azure Monitor. This telemetry can be used for post-call analysis.
+
+## Why A/B testing matters
+A/B testing is an essential technique for making data-informed decisions in product development. By examining two variations of an application's output, developers can identify which version excels based on specific metrics that track call reliability and quality. This method enables companies to test different designs, content, and functionalities within a controlled setting, ensuring that any modifications result in measurable enhancements. Additionally, A/B testing reduces the risks tied to introducing new features or strategies by offering evidence-based insights before a full-scale launch.
+
+Another key benefit of A/B testing is its capacity to reveal user preferences and behaviors that might not be evident through traditional testing techniques. Analyzing the outcomes of these tests gives developers a deeper understanding of how two different versions of an application affect end-user calling reliability and quality. This iterative cycle of testing and optimization cultivates a culture of continual enhancement, helping developers remain competitive and adaptable to evolving market trends.
+
+## Benefits of the Diagnostic Option tag
+Consider the possibility that specific segments of your user base are encountering issues, and you want to better identify and understand these problems. For instance, imagine that all your customers using Azure Communication Services WebJS in one particular location face difficulty. To pinpoint the users experiencing issues, you can add a Diagnostic Options tag on clients initiating a call in that location. This tagging allows you to filter and examine calling logs effectively. By applying a targeted tag, you can segregate and analyze this data more efficiently. Monitoring tools such as ACS Calling Insights and Call Diagnostic Center (CDC) can help track these tags and identify recurring issues or patterns. Through ongoing analysis of these tagged sessions, you gain valuable insights into user problems, enabling you to proactively address them and enhance the overall user experience.
+
+## How to add a Diagnostic Option tag to your JavaScript code
+There are three optional fields that you can use to add various levels of telemetry tracking for your needs:
+- `appName`
+- `appVersion`
+- `tags`
+
+Each value can have a maximum length of 64 characters, with support for only letters [aA, bB, cC, etc.], numbers [0-9], and basic symbols (dash "-", underscore "_", period ".", colon ":", number sign "#").
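The following minimal JavaScript sketch (an editorial illustration, not part of the Calling SDK) checks a candidate value against these constraints before you pass it to the client:

```js
// Illustrative helper (not part of the Calling SDK): check that a candidate value
// stays within 64 characters and uses only letters, numbers, and - _ . : #
const DIAGNOSTIC_VALUE_PATTERN = /^[A-Za-z0-9\-_.:#]{1,64}$/;

function isValidDiagnosticValue(value) {
  return DIAGNOSTIC_VALUE_PATTERN.test(value);
}

console.log(isValidDiagnosticValue('contoso_virtual_visits'));      // true
console.log(isValidDiagnosticValue('#clientTag:participant0001'));  // true
console.log(isValidDiagnosticValue('value with spaces'));           // false
```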
+
+Here is an example of how to use the **Diagnostic Options** parameters from within your WebJS application:
+```js
+this.callClient = new CallClient({
+  diagnostics: {
+    appName: 'contoso-healthcare-calling-services',
+    appVersion: '2.1',
+    tags: ["contoso_virtual_visits", `#clientTag:participant0001`]
+  }
+});
+```
+
+## How to view the tag
+Once you add the values to your client SDK, they're populated and appear in your telemetry and metrics when you make calls. These values appear as key-value pairs appended to the user agent field within the [call client log schema](../../concepts/analytics/logs/voice-and-video-logs.md#call-client-operations-log-schema).
+
+**contoso-healthcare-calling-services**/**2.1** azsdk-js-communication-calling/1.27.1-rc.10 (javascript_calling_sdk;**#clientTag:contoso_virtual_visits**,**#clientTag:participant0001**). Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/129.0.0.0 Safari/537.36 Edg/129.0.0.0
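When you analyze exported call client logs, you can recover the application name, version, and tag values from the `UserAgent` string. The following minimal JavaScript sketch is illustrative only and assumes the user agent format shown above:

```js
// Illustrative only: extract appName/appVersion and #clientTag values from a UserAgent
// string in the format shown above.
const userAgent =
  'contoso-healthcare-calling-services/2.1 azsdk-js-communication-calling/1.27.1-rc.10 ' +
  '(javascript_calling_sdk;#clientTag:contoso_virtual_visits,#clientTag:participant0001). ' +
  'Mozilla/5.0 (Windows NT 10.0; Win64; x64) ...';

const [appName, appVersion] = userAgent.split(' ')[0].split('/');
const clientTags = [...userAgent.matchAll(/#clientTag:([^,)\s]+)/g)].map(m => m[1]);

console.log(appName, appVersion); // contoso-healthcare-calling-services 2.1
console.log(clientTags);          // [ 'contoso_virtual_visits', 'participant0001' ]
```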
+
+> [!NOTE]
+> If you don't set a value for `appName` and `appVersion` from within the client API, the default value of default/0.0.0 appears within the `UserAgent` field.
+
+## Next steps
+- Learn more about Azure Communication Services Call Diagnostic Center [here](../../concepts/voice-video-calling/call-diagnostics.md)
+- Learn more about Voice and Video calling Insights [here](../../concepts/analytics/insights/voice-and-video-insights.md)
+- Learn more about how to enable Azure Communication Services logs [here](../../concepts/analytics/enable-logging.md)
cost-management-billing Assign Roles Azure Service Principals https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/assign-roles-azure-service-principals.md
Later in this article, you give permission to the Microsoft Entra app to act by
| Role | Actions allowed | Role definition ID | | | | |
-| EnrollmentReader | Enrollment readers can view data at the enrollment, department, and account scopes. The data contains charges for all of the subscriptions under the scopes, including across tenants. Can view the Azure Prepayment (previously called monetary commitment) balance associated with the enrollment. | aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e |
-| EA purchaser | Purchase reservation orders and view reservation transactions. It has all the permissions of EnrollmentReader, which have all the permissions of DepartmentReader. It can view usage and charges across all accounts and subscriptions. Can view the Azure Prepayment (previously called monetary commitment) balance associated with the enrollment. | bbbb1b1b-cc2c-dd3d-ee4e-ffffff5f5f5f |
+| EnrollmentReader | Enrollment readers can view data at the enrollment, department, and account scopes. The data contains charges for all of the subscriptions under the scopes, including across tenants. Can view the Azure Prepayment (previously called monetary commitment) balance associated with the enrollment. | 24f8edb6-1668-4659-b5e2-40bb5f3a7d7e |
+| EA purchaser | Purchase reservation orders and view reservation transactions. It has all the permissions of EnrollmentReader, which have all the permissions of DepartmentReader. It can view usage and charges across all accounts and subscriptions. Can view the Azure Prepayment (previously called monetary commitment) balance associated with the enrollment. | da6647fb-7651-49ee-be91-c43c4877f0c4 |
| DepartmentReader | Download the usage details for the department they administer. Can view the usage and charges associated with their department. | db609904-a47f-4794-9be8-9bd86fbffd8a |
-| SubscriptionCreator | Create new subscriptions in the given scope of Account. | cccc2c2c-dd3d-ee4e-ff5f-aaaaaa6a6a6a |
+| SubscriptionCreator | Create new subscriptions in the given scope of Account. | a0bcee42-bf30-4d1b-926a-48d21664ef71 |
- An EnrollmentReader role can be assigned to a service principal only by a user who has an enrollment writer role. The EnrollmentReader role assigned to a service principal isn't shown in the Azure portal. It gets created by programmatic means and is only for programmatic use. - A DepartmentReader role can be assigned to a service principal only by a user who has an enrollment writer or department writer role.
A service principal can have only one role.
| | | | `properties.principalId` | It's the value of Object ID. See [Find your service principal and tenant IDs](#find-your-service-principal-and-tenant-ids). | | `properties.principalTenantId` | See [Find your service principal and tenant IDs](#find-your-service-principal-and-tenant-ids). |
- | `properties.roleDefinitionId` | `/providers/Microsoft.Billing/billingAccounts/{BillingAccountName}/billingRoleDefinitions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e` |
+ | `properties.roleDefinitionId` | `/providers/Microsoft.Billing/billingAccounts/{BillingAccountName}/billingRoleDefinitions/24f8edb6-1668-4659-b5e2-40bb5f3a7d7e` |
The billing account name is the same parameter that you used in the API parameters. It's the enrollment ID that you see in the Azure portal.
- Notice that `aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e` is a billing role definition ID for an EnrollmentReader.
+ Notice that `24f8edb6-1668-4659-b5e2-40bb5f3a7d7e` is a billing role definition ID for an EnrollmentReader.
1. Select **Run** to start the command.
Now you can use the service principal to automatically access EA APIs. The servi
For the EA purchaser role, use the same steps for the enrollment reader. Specify the `roleDefinitionId`, using the following example:
-`"/providers/Microsoft.Billing/billingAccounts/1111111/billingRoleDefinitions/ bbbb1b1b-cc2c-dd3d-ee4e-ffffff5f5f5f"`
+`"/providers/Microsoft.Billing/billingAccounts/1111111/billingRoleDefinitions/ da6647fb-7651-49ee-be91-c43c4877f0c4"`
## Assign the department reader role to the service principal
Now you can use the service principal to automatically access EA APIs. The servi
| | | | `properties.principalId` | It's the value of Object ID. See [Find your service principal and tenant IDs](#find-your-service-principal-and-tenant-ids). | | `properties.principalTenantId` | See [Find your service principal and tenant IDs](#find-your-service-principal-and-tenant-ids). |
- | `properties.roleDefinitionId` | `/providers/Microsoft.Billing/billingAccounts/{BillingAccountID}/enrollmentAccounts/{enrollmentAccountID}/billingRoleDefinitions/cccc2c2c-dd3d-ee4e-ff5f-aaaaaa6a6a6a` |
+ | `properties.roleDefinitionId` | `/providers/Microsoft.Billing/billingAccounts/{BillingAccountID}/enrollmentAccounts/{enrollmentAccountID}/billingRoleDefinitions/a0bcee42-bf30-4d1b-926a-48d21664ef71` |
The billing account name is the same parameter that you used in the API parameters. It's the enrollment ID that you see in the Azure portal.
- The billing role definition ID of `cccc2c2c-dd3d-ee4e-ff5f-aaaaaa6a6a6a` is for the subscription creator role.
+ The billing role definition ID of `a0bcee42-bf30-4d1b-926a-48d21664ef71` is for the subscription creator role.
1. Select **Run** to start the command.
data-factory Connector Amazon Redshift https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-amazon-redshift.md
Previously updated : 01/05/2024 Last updated : 09/12/2024 # Copy data from Amazon Redshift using Azure Data Factory or Synapse Analytics
For a list of data stores that are supported as sources or sinks by the copy act
Specifically, this Amazon Redshift connector supports retrieving data from Redshift using query or built-in Redshift UNLOAD support.
+The connector supports the Windows versions in this [article](create-self-hosted-integration-runtime.md#prerequisites).
+ > [!TIP] > To achieve the best performance when copying large amounts of data from Redshift, consider using the built-in Redshift UNLOAD through Amazon S3. See [Use UNLOAD to copy data from Amazon Redshift](#use-unload-to-copy-data-from-amazon-redshift) section for details.
data-factory Connector Concur https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-concur.md
Previously updated : 09/04/2024 Last updated : 09/12/2024 # Copy data from Concur using Azure Data Factory or Synapse Analytics(Preview)
This Concur connector is supported for the following capabilities:
For a list of data stores that are supported as sources/sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table.
+The connector supports the Windows versions in this [article](create-self-hosted-integration-runtime.md#prerequisites).
+ > [!NOTE] > Partner account is currently not supported.
data-factory Connector Couchbase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-couchbase.md
For a list of data stores that are supported as sources/sinks, see the [Supporte
The service provides a built-in driver to enable connectivity, therefore you don't need to manually install any driver using this connector.
+The connector supports Couchbase versions higher than 6.0.
+
+The connector now uses the following precision. The previous precision remains compatible.
+ - Double values use 17 significant digits (previously 15 significant digits)
+ - Float values use 9 significant digits (previously 7 significant digits)
+ ## Prerequisites [!INCLUDE [data-factory-v2-integration-runtime-requirements](includes/data-factory-v2-integration-runtime-requirements.md)]
data-factory Connector Google Bigquery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-google-bigquery.md
For a list of data stores that are supported as sources or sinks by the copy act
The service provides a built-in driver to enable connectivity. Therefore, you don't need to manually install a driver to use this connector.
+The connector supports the Windows versions in this [article](create-self-hosted-integration-runtime.md#prerequisites).
+
+The connector no longer supports P12 keyfiles. If you rely on service accounts, it's recommended that you use JSON keyfiles instead. The P12CustomPwd property used to support the P12 keyfile was also deprecated. For more information, see this [article](https://cloud.google.com/sdk/docs/release-notes#bigquery_6).
+ >[!NOTE] >This Google BigQuery connector is built on top of the BigQuery APIs. Be aware that BigQuery limits the maximum rate of incoming requests and enforces appropriate quotas on a per-project basis, refer to [Quotas & Limits - API requests](https://cloud.google.com/bigquery/quotas#api_requests). Make sure you do not trigger too many concurrent requests to the account.
data-factory Connector Hive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-hive.md
Previously updated : 05/15/2024 Last updated : 09/12/2024
For a list of data stores that are supported as sources/sinks by the copy activi
The service provides a built-in driver to enable connectivity, therefore you don't need to manually install any driver using this connector.
+The connector supports the Windows versions in this [article](create-self-hosted-integration-runtime.md#prerequisites).
+ ## Prerequisites [!INCLUDE [data-factory-v2-integration-runtime-requirements](includes/data-factory-v2-integration-runtime-requirements.md)]
data-factory Connector Hubspot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-hubspot.md
Previously updated : 10/20/2023 Last updated : 09/12/2024 # Copy data from HubSpot using Azure Data Factory or Synapse Analytics
For a list of data stores that are supported as sources/sinks , see the [Support
The service provides a built-in driver to enable connectivity, therefore you don't need to manually install any driver using this connector.
+The connector supports the Windows versions in this [article](create-self-hosted-integration-runtime.md#prerequisites).
+ ## Getting started [!INCLUDE [data-factory-v2-connector-get-started](includes/data-factory-v2-connector-get-started.md)]
data-factory Connector Mariadb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-mariadb.md
Previously updated : 10/09/2024 Last updated : 10/24/2024
If you use the recommended driver version, the following properties are supporte
| database | Your MariaDB database name. | Yes | | username | Your user name. | Yes | | password | The password for the user name. Mark this field as SecureString to store it securely. Or, you can [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). | Yes |
+| sslMode | This option specifies whether the driver uses TLS encryption and verification when connecting to MariaDB. E.g., `SSLMode=<0/1/2/3/4>`.<br/>Options: DISABLED (0) / PREFERRED (1) / REQUIRED (2) / VERIFY_CA (3) / VERIFY_IDENTITY (4) **(Default)** | Yes |
+| useSystemTrustStore | This option specifies whether to use a CA certificate from the system trust store, or from a specified PEM file. E.g. `UseSystemTrustStore=<0/1>`;<br/>Options: Enabled (1) / Disabled (0) **(Default)** | No |
| connectVia | The [Integration Runtime](concepts-integration-runtime.md) to be used to connect to the data store. Learn more from [Prerequisites](#prerequisites) section. If not specified, it uses the default Azure Integration Runtime. |No | **Example:**
If you use the recommended driver version, the following properties are supporte
"type": "SecureString", "value": "<password>" },
- "driverVersion": "v2"
+ "driverVersion": "v2",
+ "sslMode": <sslmode>,
+ "useSystemTrustStore": <UseSystemTrustStore>
}, "connectVia": { "referenceName": "<name of Integration Runtime>",
If you use the recommended driver version, the following properties are supporte
}, "secretName": "<secretName>" },
- "driverVersion": "v2"
+ "driverVersion": "v2",
+ "sslMode": <sslmode>,
+ "useSystemTrustStore": <UseSystemTrustStore>
}, "connectVia": { "referenceName": "<name of Integration Runtime>",
data-factory Connector Shopify https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-shopify.md
Previously updated : 10/20/2023 Last updated : 09/12/2024 # Copy data from Shopify using Azure Data Factory or Synapse Analytics (Preview)
For a list of data stores that are supported as sources/sinks, see the [Supporte
The service provides a built-in driver to enable connectivity, therefore you don't need to manually install any driver using this connector.
+The connector supports the Windows versions in this [article](create-self-hosted-integration-runtime.md#prerequisites).
+
+The billing_on column property was removed from the following tables. For more information, see this [article](https://shopify.dev/docs/api/admin-rest/2024-07/resources/usagecharge).
+ - Recurring_Application_Charges
+ - UsageCharge
+ ## Getting started [!INCLUDE [data-factory-v2-connector-get-started](includes/data-factory-v2-connector-get-started.md)]
data-factory Connector Square https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-square.md
Previously updated : 01/05/2024 Last updated : 09/12/2024 # Copy data from Square using Azure Data Factory or Synapse Analytics (Preview)
For a list of data stores that are supported as sources/sinks, see the [Supporte
The service provides a built-in driver to enable connectivity, therefore you don't need to manually install any driver using this connector.
+The connector supports the Windows versions in this [article](create-self-hosted-integration-runtime.md#prerequisites).
+ ## Getting started [!INCLUDE [data-factory-v2-connector-get-started](includes/data-factory-v2-connector-get-started.md)]
data-factory Connector Xero https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-xero.md
Previously updated : 10/20/2023 Last updated : 09/12/2024 # Copy data from Xero using Azure Data Factory or Synapse Analytics
For a list of data stores that are supported as sources/sinks, see the [Supporte
Specifically, this Xero connector supports: -- OAuth 2.0 and OAuth 1.0 authentication. For OAuth 1.0, the connector supports Xero [private application](https://developer.xero.com/documentation/getting-started/getting-started-guide) but not public application.
+- OAuth 2.0 authentication.
- All Xero tables (API endpoints) except "Reports".
+- Windows versions in this [article](create-self-hosted-integration-runtime.md#prerequisites).
## Getting started
data-factory Quickstart Create Data Factory Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/quickstart-create-data-factory-rest-api.md
$runId = ($response.content | ConvertFrom-Json).runId
Here is the sample output: ```json
-{"runId":"ffc9c2a8-d86a-46d5-9208-28b3551007d8"}
+{"runId":"aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e"}
```
Here is the sample output:
```json {
- "id": "/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.DataFactory/factories/<factoryName>/pipelineruns/ffc9c2a8-d86a-46d5-9208-28b3551007d8",
- "runId": "ffc9c2a8-d86a-46d5-9208-28b3551007d8",
+ "id": "/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.DataFactory/factories/<factoryName>/pipelineruns/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e",
+ "runId": "aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e",
"debugRunId": null,
- "runGroupId": "ffc9c2a8-d86a-46d5-9208-28b3551007d8",
+ "runGroupId": "aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e",
"pipelineName": "Adfv2QuickStartParamPipeline", "parameters": { "strParamInputFileName": "emp2.txt",
Here is the sample output:
"target": "CopyFromBlobToBlob", "details": "" },
- "activityRunId": "40bab243-9bbf-4538-9336-b797a2f98e2b",
+ "activityRunId": "bbbb1b1b-cc2c-dd3d-ee4e-ffffff5f5f5f",
"iterationHash": "", "input": { "source": {
Here is the sample output:
}, "userProperties": {}, "pipelineName": "Adfv2QuickStartParamPipeline",
- "pipelineRunId": "ffc9c2a8-d86a-46d5-9208-28b3551007d8",
+ "pipelineRunId": "aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e",
"status": "Succeeded", "recoveryStatus": "None", "integrationRuntimeNames": [
Here is the sample output:
"@{name=DefaultIntegrationRuntime; type=Managed; location=East US; nodes=}" ] },
- "id": "/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.DataFactory/factories/<factoryName>/pipelineruns/ffc9c2a8-d86a-46d5-9208-28b3551007d8/activityruns/40bab243-9bbf-4538-9336-b797a2f98e2b"
+ "id": "/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.DataFactory/factories/<factoryName>/pipelineruns/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/activityruns/bbbb1b1b-cc2c-dd3d-ee4e-ffffff5f5f5f"
} ``` ## Verify the output
data-factory Rest Apis For Airflow Integrated Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/rest-apis-for-airflow-integrated-runtime.md
Sample request:
```rest HTTP
-PUT https://management.azure.com/subscriptions/222f1459-6ebd-4896-82ab-652d5f6883cf/resourcegroups/abnarain-rg/providers/Microsoft.DataFactory/factories/ambika-df/integrationruntimes/sample-2?api-version=2018-06-01
+PUT https://management.azure.com/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourcegroups/abnarain-rg/providers/Microsoft.DataFactory/factories/ambika-df/integrationruntimes/sample-2?api-version=2018-06-01
``` Sample body:
data-factory Update Machine Learning Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/update-machine-learning-models.md
The following JSON snippet defines an Azure Machine Learning Studio (classic) li
"typeProperties": { "mlEndpoint": "https://ussouthcentral.services.azureml.net/workspaces/00000000eb0abe4d6bbb1d7886062747d7/services/00000000026734a5889e02fbb1f65cefd/jobs?api-version=2.0", "apiKey": "sooooooooooh3WvG1hBfKS2BNNcfwSO7hhY6dY98noLfOdqQydYDIXyf2KoIaN3JpALu/AKtflHWMOCuicm/Q==",
- "updateResourceEndpoint": "https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/Default-MachineLearning-SouthCentralUS/providers/Microsoft.MachineLearning/webServices/myWebService?api-version=2016-05-01-preview",
+ "updateResourceEndpoint": "https://management.azure.com/subscriptions/ffffffff-eeee-dddd-cccc-bbbbbbbbbbb0/resourceGroups/Default-MachineLearning-SouthCentralUS/providers/Microsoft.MachineLearning/webServices/myWebService?api-version=2016-05-01-preview",
"servicePrincipalId": "fe200044-c008-4008-a005-94000000731", "servicePrincipalKey": "zWa0000000000Tp6FjtZOspK/WMA2tQ08c8U+gZRBlw=", "tenant": "mycompany.com"
data-share Subscribe To Data Share https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-share/subscribe-to-data-share.md
Use these commands to configure where you want to receive data.
```azurecli az role assignment create --role "Contributor" \
-    --assignee-object-id 6789abcd-ef01-2345-6789-abcdef012345
+    --assignee-object-id aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb \
    --assignee-principal-type ServicePrincipal --scope "your\storage\account\id\path" ```
Use these commands to configure where you want to receive data.
1. Use the data set ID from the first step, then run the [New-AzDataShareDataSetMapping](/powershell/module/az.datashare/new-azdatasharedatasetmapping) command to create the dataset mapping: ```azurepowershell
- New-AzDataShareDataSetMapping -ResourceGroupName "share-rg" -AccountName "FabrikamDataShareAccount" -ShareSubscriptionName "fabrikamsolutions" -Name "Fabrikam Solutions" -StorageAccountResourceId "6789abcd-ef01-2345-6789-abcdef012345" -DataSetId "0123abcd-ef01-2345-6789-abcdef012345" -Container "StorageContainer"
+ New-AzDataShareDataSetMapping -ResourceGroupName "share-rg" -AccountName "FabrikamDataShareAccount" -ShareSubscriptionName "fabrikamsolutions" -Name "Fabrikam Solutions" -StorageAccountResourceId "aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb" -DataSetId "0123abcd-ef01-2345-6789-abcdef012345" -Container "StorageContainer"
``` 1. Run the [Start-AzDataShareSubscriptionSynchronization](/powershell/module/az.datashare/start-azdatasharesubscriptionsynchronization) command to start dataset synchronization.
event-grid Event Schema Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/event-schema-blob-storage.md
Title: Azure Blob Storage as Event Grid source description: Describes the properties that are provided for blob storage events with Azure Event Grid Previously updated : 05/31/2024 Last updated : 10/25/2024 # Azure Blob Storage as an Event Grid source
When an event is triggered, the Event Grid service sends data about that event t
"successCount": 0, "errorList": "" },
+ "tierToColdSummary": {
+ "totalObjectsCount": 0,
+ "successCount": 0,
+ "errorList": ""
+ },
"tierToArchiveSummary": { "totalObjectsCount": 0, "successCount": 0,
When an event is triggered, the Event Grid service sends data about that event t
"successCount": 0, "errorList": "" },
+ "tierToColdSummary": {
+ "totalObjectsCount": 0,
+ "successCount": 0,
+ "errorList": ""
+ },
"tierToArchiveSummary": { "totalObjectsCount": 0, "successCount": 0,
event-grid Handler Event Grid Namespace Topic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/handler-event-grid-namespace-topic.md
Title: How to send events to Event Grid namespace topics description: This article describes how to deliver events to Event Grid namespace topics.--
- - ignite-2023
- - build-2024
+ Last updated 11/15/2023
+# Customer intent: I want to learn how to send forward events from an Event Grid topic to an Event Grid namespace topic
# How to send events from Event Grid basic to Event Grid namespace topics
-This article describes how to forward events from event subscriptions created in resources like custom topics, system topics, domains, and partner topics to Event Grid namespaces.
+This article describes how to forward events from event subscriptions created in resources like topics, system topics, domains, and partner topics to Event Grid namespaces.
## Overview-
-Namespace topic as a destination in Event Grid basic event subscriptions that helps you to transition to Event Grid namespaces without modifying your existing workflow.
+Event Grid basic supports **Event Grid Namespace topic** as the **endpoint type**. When creating an event subscription to an Event Grid topic, system topic, domain, or partner topic, you can select an Event Grid namespace topic as the endpoint for handling events.
:::image type="content" source="media/handler-event-grid-namespace-topic/namespace-topic-handler-destination.png" alt-text="Image that shows events forwarded from Event Grid basic to Event Grid namespace topic." border="false" lightbox="media/handler-event-grid-namespace-topic/namespace-topic-handler-destination.png":::
-Event Grid namespaces provides new, and interesting capabilities that you might be interested to use in your solutions. If you're currently using Event Grid basic resources like topics, system topics, domains, and partner topics you only need to create a new event subscription in your current topic and select Event Grid namespace topic as a handler destination.
+Using a namespace topic as a destination in Event Grid basic event subscriptions helps you transition to Event Grid namespaces without modifying your existing workflow. Event Grid namespaces provide new capabilities that you might want to use in your solutions. If you're currently using Event Grid basic resources like topics, system topics, domains, and partner topics, you only need to create a new event subscription in your current topic and select an Event Grid namespace topic as the handler destination.
-## How to forward events to a new Event Grid namespace
+This article covers an example scenario where you forward Azure Storage events to an Event Grid namespace. Here are the high-level steps:
-Scenario: Subscribe to a storage account system topic and forward storage events to a new Event Grid namespace.
+1. Create a system topic for the Azure storage account and enable managed identity for the system topic.
+1. Assign the system topic's managed identity to the Event Grid Data Sender role on the destination Event Grid namespace.
+1. Create an event subscription to the system topic with the Event Grid namespace as the event handler, and use the managed identity for event delivery.
-### Prerequisites
+## Prerequisites
1. Create an Event Grid namespace resource by following instructions from [Create, view, and manage namespaces](create-view-manage-namespaces.md). 1. Create an Event Grid namespace topic by following instructions from [Create, view, and manage namespace topics](create-view-manage-namespace-topics.md).
-1. Create an Event Grid event subscription in a namespace topic by following instructions from [Create, view, and manage event subscriptions in namespace topics](create-view-manage-event-subscriptions.md).
+1. Create an Event Grid event subscription in a namespace topic by following instructions from [Create, view, and manage event subscriptions in namespace topics](create-view-manage-event-subscriptions.md). This step is optional, but it's useful for testing the scenario.
1. Create an Azure storage account by following instructions from [create a storage account](blob-event-quickstart-portal.md#create-a-storage-account).
-### Create and configure the event subscription
--
-> [!NOTE]
-> For **Event Schema**, select the event schema as **Cloud Events Schema v1.0**. It's the only schema type that the Event Grid Namespace Topic destination supports.
-
-Once the subscription is configured with the basic information, select the **Event Grid Namespace Topic** endpoint type in the endpoint details section and select **Configure an endpoint** to configure the endpoint.
-
-You might want to use this article as a reference to explore how to [subscribe to the blob storage](blob-event-quickstart-portal.md#subscribe-to-the-blob-storage).
-
-Steps to configure the endpoint:
-
-1. On the **Select Event Grid Namespace Topic** page, follow these steps.
- 1. Select the **subscription**.
- 1. Select the **resource group**.
- 1. Select the **Event Grid namespace** resource previously created.
- 1. Select the **Event Grid namespace topic** where you want to forward the events.
- 1. Select **Confirm Selection**.
-
- :::image type="content" source="media/handler-event-grid-namespace-topic/namespace-topic-endpoint-configuration.png" alt-text="Screenshot that shows the Select Event Grid Namespace topic page to configure the endpoint to forward events from Event Grid basic to Event Grid namespace topic." border="false" lightbox="media/handler-event-grid-namespace-topic/namespace-topic-endpoint-configuration.png":::
-1. Now, on the **Create Event Subscription** page, select **Create** to create the event subscription.
-
-## Next steps
+## Create a system topic and enable managed identity for the storage account
+If you have an existing system topic for the storage account, navigate to the system topic page. If you don't have one, create a system topic. Then, enable managed identity for the storage account.
+
+1. Navigate to [Azure portal](https://portal.azure.com).
+1. In the search bar, search for **Event Grid System Topics**, and select it from the search results.
+1. On the **Event Grid System Topics** page, select **+ Create**.
+
+ :::image type="content" source="./media/handler-event-grid-namespace-topic/system-topics-page.png" alt-text="Screenshot that shows the System Topics page with the Create button selected." lightbox="./media/handler-event-grid-namespace-topic/system-topics-page.png":::
+1. On the **Create Event Grid System Topic** page, follow these steps:
+ 1. For **Topic Types**, select **Storage Accounts**.
+ 1. For **Subscription**, select the Azure subscription where you want to create the system topic.
+ 1. For **Resource Group**, select the resource group for the system topic.
+ 1. For **Resource**, select the Azure storage resource for which you want to create the system topic.
+ 1. In the **System Topic Details** section, for **Name**, enter a name for the topic.
+ 1. Select **Review + create** at the bottom of the page.
+
+ :::image type="content" source="./media/handler-event-grid-namespace-topic/create-system-topic-page.png" alt-text="Screenshot that shows the Create Event Grid System Topic page." lightbox="./media/handler-event-grid-namespace-topic/create-system-topic-page.png":::
+1. On the **Review + create** page, review settings, and select **Create**.
+
+ :::image type="content" source="./media/handler-event-grid-namespace-topic/create-system-topic-review-create.png" alt-text="Screenshot that shows the Create Event Grid System Topic - Review and create page." lightbox="./media/handler-event-grid-namespace-topic/create-system-topic-review-create.png":::
+1. After the deployment is successful, select **Go to resource** to navigate to the **Event Grid System Topic** page for the system topic you created.
+
+### Enable managed identity for the system topic
+Now, enable managed identity for the system topic you created. For this example, let's create a system-assigned managed identity for the system topic.
+
+1. On the **Event Grid System Topic** page, select **Identity** under **Settings** on the left navigation menu.
+1. On the **Identity** page, select **On** for **Status**.
+1. Select **Save** on the command bar.
+
+ :::image type="content" source="./media/handler-event-grid-namespace-topic/identity-page.png" alt-text="Screenshot that shows the Identity page for the system topic." lightbox="./media/handler-event-grid-namespace-topic/identity-page.png":::
+1. On the confirmation pop-up window, select **Yes** to confirm the creation of the managed identity.
+1. After the managed identity is created, you see the object (principal) ID for the identity.
+
+ Keep the **System Topic** page open in the current tab of your web browser.
+
+## Grant the identity permission to send events to the namespace
+In the last step, you created a system-assigned managed identity for your storage account's system topic. In this step, you grant the identity the permission to send events to the target or destination namespace.
+
+1. Launch a new tab or a window of the web browser. Navigate to your Event Grid namespace in the Azure portal.
+1. Select **Access control (IAM)** on the left menu.
+1. Select **Add** and then select **Add role assignment**.
+
+ :::image type="content" source="./media/handler-event-grid-namespace-topic/namespace-access-control-add.png" alt-text="Screenshot that shows the Access control page for the Event Grid namespace." lightbox="./media/handler-event-grid-namespace-topic/namespace-access-control-add.png":::
+1. On the **Role** page, search for and select **Event Grid Data Sender** role, and then select **Next**.
+
+ :::image type="content" source="./media/handler-event-grid-namespace-topic/role-page.png" alt-text="Screenshot that shows the Access control page with Event Grid Data Sender role selected." lightbox="./media/handler-event-grid-namespace-topic/role-page.png":::
+1. On the **Members** page, for **Assign access to**, select **Managed identity**, and then choose **+ Select members**.
+
+ :::image type="content" source="./media/handler-event-grid-namespace-topic/members-page.png" alt-text="Screenshot that shows the Members page." lightbox="./media/handler-event-grid-namespace-topic/members-page.png":::
+1. On the **Select managed identities** page, follow these steps:
+ 1. For **Subscription**, select the Azure subscription where the managed identity is created.
+ 1. For **Managed identity**, select **Event Grid System Topic**.
+ 1. For **Select**, type the name of your system topic.
+    1. In the search results, select the managed identity. The managed identity's name is the same as the system topic's name.
+
+ :::image type="content" source="./media/handler-event-grid-namespace-topic/select-identity.png" alt-text="Screenshot that shows the selection of a managed identity." lightbox="./media/handler-event-grid-namespace-topic/select-identity.png":::
+1. On the **Members** page, select **Next**.
+1. On the **Review + assign** page, review settings, and select **Review + assign** at the bottom of the page.
++
+## Create an event subscription to the storage system topic
+Now, you're ready to create an event subscription to the system topic for the source storage account using the namespace as an endpoint.
+
+1. On the **System Topic** page for the system topic, select **Overview** on the left menu if it's not already selected.
+1. Select **+ Event Subscription** on the command bar.
+
+ :::image type="content" source="media/handler-event-grid-namespace-topic/create-event-subscription-button.png" alt-text="Screenshot that shows the Event Grid System Topic page with the Event Subscription button selected." border="false" lightbox="media/handler-event-grid-namespace-topic/create-event-subscription-button.png":::
+1. On the **Create Event Subscription** page, follow these steps:
+ 1. For **Name**, enter the name for an event subscription.
+ 1. For **Event Schema**, select the event schema as **Cloud Events Schema v1.0**. It's the only schema type that the Event Grid Namespace Topic destination supports.
+    1. For **Filter to Event Types**, select the types of events you want to subscribe to.
+ 1. For **Endpoint type**, select **Event Grid Namespace Topic**.
+ 1. Select **Configure an endpoint**.
+
+ :::image type="content" source="media/handler-event-grid-namespace-topic/select-configure-endpoint.png" alt-text="Screenshot that shows the Create Event Subscription page with Configure an endpoint selected." border="false" lightbox="media/handler-event-grid-namespace-topic/select-configure-endpoint.png":::
+1. On the **Select Event Grid Namespace Topic** page, follow these steps:
+ 1. For **Subscription**, select the Azure subscription, resource group, and the namespace that has the namespace topic.
+ 1. For **Event Grid namespace topic**, select the namespace topic.
+ 1. Select **Confirm selection** at the bottom of the page.
+1. Now, on the **Create Event Subscription** page, for **Managed identity type**, select **System assigned**.
+1. Select **Create** at the bottom of the page.
+
+ :::image type="content" source="media/handler-event-grid-namespace-topic/namespace-topic-subscription.png" alt-text="Screenshot that shows how to create a subscription to forward events from Event Grid basic to Event Grid namespace topic." border="false" lightbox="media/handler-event-grid-namespace-topic/namespace-topic-subscription.png":::
+
+ To test the scenario, create a container in the Azure blob storage and upload a file to it. Verify that the event handler or endpoint for your namespace topic receives the blob created event.
+
+ When you upload a blob to a container in the Azure storage, here's what happens:
+
+ 1. Azure Blob Storage sends a **Blob Created** event to your blob storage's system topic.
+ 1. The event is forwarded to your namespace topic as it's the event handler or endpoint for the system topic.
+ 1. The endpoint for the subscription to the namespace topic receives the forwarded event.
+
+
+## Related content
See the following articles: - [Pull delivery overview](pull-delivery-overview.md) - [Push delivery overview](push-delivery-overview.md)-- [Concepts](concepts.md) - Quickstart: [Publish and subscribe to app events using namespace topics](publish-events-using-namespace-topics.md)
firmware-analysis Firmware Analysis Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firmware-analysis/firmware-analysis-faq.md
+
+ Title: Frequently asked questions about Firmware analysis
+description: Find answers to some of the common questions about Firmware Analysis. This article includes the file systems that are supported by Firmware Analysis, and links to the Azure CLI and Azure PowerShell commands.
++++ Last updated : 01/10/2024++
+# Frequently asked questions about Firmware analysis
+This article addresses frequent questions about Firmware analysis.
+
+[Firmware analysis](./overview-firmware-analysis.md) is a tool that analyzes firmware images and provides an understanding of security vulnerabilities in the firmware images.
+
+## What types of firmware images does Firmware analysis support?
+Firmware analysis supports unencrypted images that contain file systems with embedded Linux operating systems. Firmware analysis supports the following file system formats:
+
+* Android sparse image
+* bzip2 compressed data
+* CPIO ASCII archive, with CRC
+* CPIO ASCII archive, no CRC
+* CramFS filesystem
+* Flattened device tree blob (DTB)
+* EFI GUID partition table
+* EXT file system
+* POSIX tarball archive (GNU)
+* GPG signed data
+* gzip compressed data
+* ISO-9660 primary volume
+* JFFS2 filesystem, big endian
+* JFFS2 filesystem, little endian
+* LZ4 compressed data
+* LZMA compressed data
+* LZOP compressed file
+* DOS master boot record
+* RomFS filesystem
+* SquashFSv4 file system, little endian
+* POSIX tarball archive
+* UBI erase count header
+* UBI file system superblock node
+* xz compressed data
+* YAFFS filesystem, big endian
+* YAFFS filesystem, little endian
+* ZStandard compressed data
+* Zip archive
+
+## Where are the Firmware analysis Azure CLI/PowerShell docs?
+You can find the documentation for our Azure CLI commands [here](/cli/azure/firmwareanalysis/firmware) and the documentation for our Azure PowerShell commands [here](/powershell/module/az.firmwareanalysis/?#firmwareanalysis).
+
+You can also find the Quickstart for our Azure CLI [here](./quickstart-upload-firmware-using-azure-command-line-interface.md) and the Quickstart for our Azure PowerShell [here](./quickstart-upload-firmware-using-powershell.md). To run a Python script using the SDK to upload and analyze firmware images, visit [Quickstart: Upload firmware using Python](./quickstart-upload-firmware-using-python.md).
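+
+If you want to try the CLI commands locally, here's a minimal sketch, assuming you already have the Azure CLI installed and are signed in: add the `firmwareanalysis` extension and then browse its command groups.
+
+```azurecli
+# Add the Firmware analysis CLI extension.
+az extension add --name firmwareanalysis
+
+# Browse the available firmware analysis command groups.
+az firmwareanalysis --help
+```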
firmware-analysis Firmware Analysis Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firmware-analysis/firmware-analysis-rbac.md
+
+ Title: Azure Role-Based Access Control for Firmware analysis
+description: Learn about how to use Azure Role-Based Access Control for Firmware Analysis.
+++ Last updated : 01/10/2024++
+# Overview of Azure Role-Based Access Control for Firmware analysis
+As a user of Firmware analysis, you may want to manage access to your firmware image analysis results. Azure Role-Based Access Control (RBAC) is an authorization system that enables you to control who has access to your analysis results, what permissions they have, and at what level of the resource hierarchy. This article explains how to store firmware analysis results in Azure, manage access permissions, and use RBAC to share these results within your organization and with third parties. To learn more about Azure RBAC, visit [What is Azure Role-Based Access Control (Azure RBAC)?](./../role-based-access-control/overview.md).
+
+## Roles
+Roles are a collection of permissions packaged together. There are two types of roles:
+
+* **Job function roles** give users permission to perform specific job functions or tasks, such as **Key Vault Contributor** or **Azure Kubernetes Service Cluster Monitoring User**.
+* **Privileged administrator roles** give elevated access privileges, such as **Owner**, **Contributor**, or **User Access Administrator**. To learn more about roles, visit [Azure built-in roles](./../role-based-access-control/built-in-roles.md).
+
+In Firmware analysis, the most common roles are Owner, Contributor, Security Admin, and Firmware Analysis Admin. Learn more about [which roles you need for different permissions](./firmware-analysis-rbac.md#firmware-analysis-roles-scopes-and-capabilities), such as uploading firmware images or sharing firmware analysis results.
+
+## Understanding the Representation of Firmware Images in the Azure Resource Hierarchy
+Azure organizes resources into resource hierarchies with a top-down structure, and you can assign roles at each level of the hierarchy. The level at which you assign a role is the "scope," and lower scopes may inherit roles assigned at higher scopes. Learn more about the [levels of hierarchy and how to organize your resources in the hierarchy](/azure/cloud-adoption-framework/ready/azure-setup-guide/organize-resources).
+
+When you onboard your subscription to Firmware analysis and select your resource group, the action automatically creates the **default** resource within your resource group.
+
+Navigate to your resource group and select **Show hidden types** to show the **default** resource. The **default** resource has the **Microsoft.IoTFirmwareDefense.workspaces** type.
+
+
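+You can also list the hidden **default** resource from the command line. Here's a minimal sketch, assuming the resource type is `Microsoft.IoTFirmwareDefense/workspaces` as referenced above; `myResourceGroup` is a placeholder:
+
+```azurecli
+# List the hidden default workspace resource in your resource group (placeholder names).
+az resource list --resource-group myResourceGroup --resource-type "Microsoft.IoTFirmwareDefense/workspaces" --output table
+```
+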
+Although the **default** workspace resource isn't something that you'll regularly interact with, each firmware image that you upload will be represented as a resource and stored here.
+
+You can use RBAC at each level of the hierarchy, including at the hidden **default Firmware Analysis Workspace** resource level.
+
+Here's the resource hierarchy of Firmware Analysis:
++
+## Apply Azure RBAC
+
+> [!Note]
+> To begin using Firmware analysis, the user that onboards the subscription onto Firmware analysis ***must be*** an Owner, Contributor, Firmware Analysis Admin, or Security Admin at the subscription level. Follow the tutorial at [Analyze a firmware image with Firmware analysis](./tutorial-analyze-firmware.md#onboard-your-subscription-to-use-firmware-analysis) to onboard your subscription. Once you've onboarded your subscription, a user only needs to be a Firmware Analysis Admin to use Firmware Analysis.
+>
+
+As a user of Firmware analysis, you may need to perform certain actions for your organization, such as uploading firmware images or sharing analysis results.
+
+Actions like these involve Azure role-based access control (RBAC). To use RBAC effectively for Firmware analysis, you must know your role assignment and its scope. This information tells you what permissions you have and, therefore, whether you can complete certain actions. To check your role assignment, refer to [Check access for a user to a single Azure resource - Azure RBAC](./../role-based-access-control/check-access.md), or use the command-line sketch that follows. Then see the following tables to check which roles and scopes are necessary for certain actions.
+
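+Here's a minimal Azure CLI sketch for checking role assignments at the resource group scope; the user principal name and resource group name are placeholders:
+
+```azurecli
+# List role assignments for a user at the resource group scope, including inherited assignments (placeholder names).
+az role assignment list --assignee "user@contoso.com" --resource-group myResourceGroup --include-inherited --output table
+```
+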
+### Common roles in Firmware analysis
+
+This table categorizes each role and provides a brief description of their permissions:
+
+**Role** | **Category** | **Description**
+||
+**Owner** | Privileged administrator role | Grants full access to manage all resources, including the ability to assign roles in Azure RBAC.
+**Contributor** | Privileged administrator role | Grants full access to manage all resources, but doesn't allow you to assign roles in Azure RBAC, manage assignments in Azure Blueprints, or share image galleries.
+**Security Admin** | Job function role | Allows the user to upload and analyze firmware images, add/assign security initiatives, and edit the security policy. [Learn more](/azure/defender-for-cloud/permissions).
+**Firmware Analysis Admin** | Job function role | Allows the user to upload and analyze firmware images. The user has no access beyond firmware analysis (can't access other resources in the subscription, create or delete resources, or invite other users).
+
+## Firmware analysis roles, scopes, and capabilities
+
+The following table summarizes what roles you need to perform certain actions. These roles and permissions apply at the Subscription and Resource Group levels, unless otherwise stated.
+
+**Action** | **Role required**
+:|:
+Analyze firmware | Owner, Contributor, Security Admin, or Firmware Analysis Admin
+Invite third party users to see firmware analysis results | Owner
+Invite users to the Subscription | Owner at the **Subscription** level (Owner at the Resource Group level **cannot** invite users to the Subscription)
+
+## Uploading Firmware images
+To upload firmware images:
+
+* Confirm that you have sufficient permission in [Firmware Analysis Roles, Scopes, and Capabilities](#firmware-analysis-roles-scopes-and-capabilities).
+* [Upload a firmware image for analysis](./tutorial-analyze-firmware.md#upload-a-firmware-image-for-analysis).
+
+## Invite third parties to interact with your firmware analysis results
+You might want to invite someone to interact solely with your firmware analysis results, without allowing access to other parts of your organization (like other resource groups within your subscription). To allow this type of access, invite the user as a Firmware Analysis Admin at the Resource Group level.
+
+To invite a third party, follow the [Assign Azure roles to external guest users using the Azure portal](./../role-based-access-control/role-assignments-external-users.md#invite-an-external-user-to-your-directory) tutorial.
+
+* In step 3, navigate to your resource group.
+* In step 7, select the **Firmware Analysis Admin** role. (A command-line alternative is sketched after these steps.)
+
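+After the guest user accepts the invitation, you can alternatively assign the role from the command line. Here's a minimal sketch, assuming the built-in role is named **Firmware Analysis Admin** as shown above; the user principal name, subscription ID, and resource group name are placeholders:
+
+```azurecli
+# Assign the Firmware Analysis Admin role at the resource group scope (placeholder names).
+az role assignment create --assignee "guestuser@partner.com" --role "Firmware Analysis Admin" --scope "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup"
+```
+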
+> [!Note]
+> If you received an email to join an organization, be sure to check your Junk folder for the invitation email if you don't see it in your inbox.
+>
firmware-analysis Overview Firmware Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firmware-analysis/overview-firmware-analysis.md
+
+ Title: Firmware analysis overview
+description: Learn how firmware analysis helps device builders and operators to evaluate the security of IoT, OT and network devices.
+ Last updated : 06/15/2023++
+#Customer intent: As a device builder, I want to understand how firmware analysis can help secure my IoT/OT devices and products.
++
+# Firmware analysis
+
+Just like computers have operating systems, IoT devices have firmware, and it's the firmware that runs and controls IoT devices. For IoT device builders, security is a near-universal concern as IoT devices have traditionally lacked basic security measures.
+
+For example, IoT attack vectors typically use easily exploitable--but easily correctable--weaknesses such as hardcoded user accounts, outdated and vulnerable open-source packages, or a manufacturer's private cryptographic signing key.
+
+Use the Firmware analysis service to identify embedded security threats, vulnerabilities, and common weaknesses that may be otherwise undetectable.
+
+> [!NOTE]
+> The **Firmware analysis** page is in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include other legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+>
+
+## How to be sure your firmware is secure
+
+Firmware analysis can analyze your firmware for common weaknesses and vulnerabilities, and provide insight into your firmware security. This analysis is useful whether you build the firmware in-house or receive firmware from your supply chain.
+
+- **Software bill of materials (SBOM)**: Receive a detailed listing of open-source packages used during the firmware's build process. See the package version and what license governs the use of the open-source package.
+
+- **CVE analysis**: See which firmware components have publicly known security vulnerabilities and exposures.
+
+- **Binary hardening analysis**: Identify binaries that weren't compiled with specific security protections, such as buffer overflow protection, position-independent executables, and other common hardening techniques.
+
+- **SSL certificate analysis**: Reveal expired and revoked TLS/SSL certificates.
+
+- **Public and private key analysis**: Verify that the public and private cryptographic keys discovered in the firmware are necessary and not accidental.
+
+- **Password hash extraction**: Ensure that user account password hashes use secure cryptographic algorithms.
++
+## Next steps
+
+- [Analyze a firmware image](./tutorial-analyze-firmware.md)
+- [Understand Role-Based Access Control for Firmware Images](./firmware-analysis-rbac.md)
+- [Frequently asked questions about Firmware analysis](./firmware-analysis-faq.md)
firmware-analysis Quickstart Upload Firmware Using Azure Command Line Interface https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firmware-analysis/quickstart-upload-firmware-using-azure-command-line-interface.md
+
+ Title: "Quickstart: Upload firmware images to Firmware analysis using Azure CLI"
+description: "Learn how to upload firmware images for analysis using the Azure command line interface."
++++ Last updated : 01/29/2024++
+# Quickstart: Upload firmware images to Firmware Analysis using Azure CLI
+
+This article explains how to use the Azure CLI to upload firmware images to Firmware analysis.
+
+[Firmware analysis](./overview-firmware-analysis.md) is a tool that analyzes firmware images and provides an understanding of security vulnerabilities in the firmware images.
+
+## Prerequisites
+
+This quickstart assumes a basic understanding of Firmware analysis. For more information, see [Firmware analysis for device builders](./overview-firmware-analysis.md). For a list of the file systems that are supported, see [Frequently asked Questions about Firmware analysis](./firmware-analysis-faq.md#what-types-of-firmware-images-does-firmware-analysis-support).
+
+### Prepare your environment for the Azure CLI
+
+* [Install](/cli/azure/install-azure-cli) the Azure CLI to run CLI commands locally. If you're running on Windows or macOS, consider running Azure CLI in a Docker container. For more information, see [How to run the Azure CLI in a Docker container](/cli/azure/run-azure-cli-docker).
+
+* Sign in to the Azure CLI by using the [az login](/cli/azure/reference-index?#az-login) command. Follow the steps displayed in your terminal to finish the authentication process. For other sign-in options, see [Sign in with the Azure CLI](/cli/azure/authenticate-azure-cli).
+
+* When you're prompted, install the Azure CLI extension on first use. For more information about extensions, see [Use extensions with the Azure CLI](/cli/azure/azure-cli-extensions-overview).
+ * Install the Firmware analysis extension by running the following command:
+ ```azurecli
+ az extension add --name firmwareanalysis
+ ```
+
+* To find the version and dependent libraries that are installed, run the command [az version](/cli/azure/reference-index?#az-version). To upgrade to the latest version, run the command [az upgrade](/cli/azure/reference-index?#az-upgrade).
+
+* [Onboard](./tutorial-analyze-firmware.md#onboard-your-subscription-to-use-firmware-analysis) your subscription to Firmware analysis.
+
+* Select the appropriate subscription ID where you'd like to upload your firmware images by running the command [az account set](/cli/azure/account?#az-account-set).
+
+## Upload a firmware image to the workspace
+
+1. Create a firmware image to be uploaded. Insert your resource group name, subscription ID, and workspace name into the respective parameters.
+
+ ```azurecli
+ az firmwareanalysis firmware create --resource-group myResourceGroup --subscription 123e4567-e89b-12d3-a456-426614174000 --workspace-name default
+ ```
+
+The output of this command includes a `name` property, which is your firmware ID. **Save this ID for the next command.**
+
+2. Generate a SAS URL, which you'll use in the next step to send your firmware image to Azure Storage. Replace `sampleFirmwareID` with the firmware ID that you saved from the previous step. You can store the SAS URL in a variable for easier access in future commands:
+
+ ```azurecli
+ $sasURL = $(az firmwareanalysis workspace generate-upload-url --resource-group myResourceGroup --subscription 123e4567-e89b-12d3-a456-426614174000 --workspace-name default --firmware-id sampleFirmwareID --query "url")
+ ```
+
+3. Upload your firmware image to Azure Storage. Replace `pathToFile` with the path to your firmware image on your local machine.
+
+ ```azurecli
+ az storage blob upload -f pathToFile --blob-url $sasURL
+ ```
+
+Here's an example workflow of how you could use these commands to create and upload a firmware image. To learn more about using variables in CLI commands, visit [How to use variables in Azure CLI commands](/cli/azure/azure-cli-variables?tabs=bash):
+
+```azurecli
+$filePath='/path/to/image'
+$resourceGroup='myResourceGroup'
+$workspace='default'
+
+$fileName='file1'
+$vendor='vendor1'
+$model='model'
+$version='test'
+
+$FWID=$(az firmwareanalysis firmware create --resource-group $resourceGroup --workspace-name $workspace --file-name $fileName --vendor $vendor --model $model --version $version --query "name")
+
+$URL=$(az firmwareanalysis workspace generate-upload-url --resource-group $resourceGroup --workspace-name $workspace --firmware-id $FWID --query "url")
+
+$OUTPUT=(az storage blob upload -f $filePath --blob-url $URL)
+```
+
+## Retrieve firmware analysis results
+
+To retrieve firmware analysis results, you must make sure that the status of the analysis is "Ready":
+
+```azurecli
+az firmwareanalysis firmware show --firmware-id sampleFirmwareID --resource-group myResourceGroup --workspace-name default
+```
+
+Look for the "status" field to display "Ready", then run the following commands to retrieve your firmware analysis results.
+
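+For example, you can query just the status field (this sketch assumes the status is surfaced as `status` in the command output, as described above):
+
+```azurecli
+az firmwareanalysis firmware show --firmware-id sampleFirmwareID --resource-group myResourceGroup --workspace-name default --query "status" --output tsv
+```
+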
+If you would like to automate the process of checking your analysis's status, you can use the [`az resource wait`](/cli/azure/resource?#az-resource-wait) command.
+
+The `az resource wait` command has a `--timeout` parameter: the maximum time, in seconds, that the command waits for "status" to reach "Ready" before it gives up. The default timeout is 3600 seconds, which is one hour. Large images may take longer to analyze, so set the timeout according to your needs. Here's an example of how you can use `az resource wait` with the `--timeout` parameter to automate checking your analysis's status, assuming that you've already created a firmware image and stored its firmware ID in a variable named `$FWID`:
+
+```azurecli
+$ID=$(az firmwareanalysis firmware show --resource-group $resourceGroup --workspace-name $workspace --firmware-id $FWID --query "id")
+
+Write-Host ('Successfully created a firmware image with the firmware ID of ' + $FWID + ', recognized in Azure by this resource ID: ' + $ID + '.')
+
+$WAIT=$(az resource wait --ids $ID --custom "properties.status=='Ready'" --timeout 10800)
+
+$STATUS=$(az resource show --ids $ID --query 'properties.status')
+
+Write-Host ('Firmware analysis completed with status: ' + $STATUS)
+```
+
+Once you've confirmed that your analysis status is "Ready", you can run commands to pull the results.
+
+### SBOM
+
+The following command retrieves the SBOM in your firmware image. Replace each argument with the appropriate value for your resource group, subscription, workspace name, and firmware ID.
+
+```azurecli
+az firmwareanalysis firmware sbom-component --resource-group myResourceGroup --subscription 123e4567-e89b-12d3-a456-426614174000 --workspace-name default --firmware-id sampleFirmwareID
+```
+
+### Weaknesses
+
+The following command retrieves CVEs found in your firmware image. Replace each argument with the appropriate value for your resource group, subscription, workspace name, and firmware ID.
+
+```azurecli
+az firmwareanalysis firmware cve --resource-group myResourceGroup --subscription 123e4567-e89b-12d3-a456-426614174000 --workspace-name default --firmware-id sampleFirmwareID
+```
+
+### Binary hardening
+
+The following command retrieves analysis results on binary hardening in your firmware image. Replace each argument with the appropriate value for your resource group, subscription, workspace name, and firmware ID.
+
+```azurecli
+az firmwareanalysis firmware binary-hardening --resource-group myResourceGroup --subscription 123e4567-e89b-12d3-a456-426614174000 --workspace-name default --firmware-id sampleFirmwareID
+```
+
+### Password hashes
+
+The following command retrieves password hashes in your firmware image. Replace each argument with the appropriate value for your resource group, subscription, workspace name, and firmware ID.
+
+```azurecli
+az firmwareanalysis firmware password-hash --resource-group myResourceGroup --subscription 123e4567-e89b-12d3-a456-426614174000 --workspace-name default --firmware-id sampleFirmwareID
+```
+
+### Certificates
+
+The following command retrieves vulnerable crypto certificates that were found in your firmware image. Replace each argument with the appropriate value for your resource group, subscription, workspace name, and firmware ID.
+
+```azurecli
+az firmwareanalysis firmware crypto-certificate --resource-group myResourceGroup --subscription 123e4567-e89b-12d3-a456-426614174000 --workspace-name default --firmware-id sampleFirmwareID
+```
+
+### Keys
+
+The following command retrieves vulnerable crypto keys that were found in your firmware image. Replace each argument with the appropriate value for your resource group, subscription, workspace name, and firmware ID.
+
+```azurecli
+az firmwareanalysis firmware crypto-key --resource-group myResourceGroup --subscription 123e4567-e89b-12d3-a456-426614174000 --workspace-name default --firmware-id sampleFirmwareID
+```
firmware-analysis Quickstart Upload Firmware Using Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firmware-analysis/quickstart-upload-firmware-using-powershell.md
+
+ Title: "Quickstart: Upload firmware images to Firmware analysis using Azure PowerShell"
+description: "Learn how to upload firmware images for analysis using the Azure PowerShell."
++++ Last updated : 01/29/2024++
+# Quickstart: Upload firmware images to Firmware analysis using Azure PowerShell
+
+This article explains how to use Azure PowerShell to upload firmware images to Firmware analysis.
+
+[Firmware analysis](./overview-firmware-analysis.md) is a tool that analyzes firmware images and provides an understanding of security vulnerabilities in the firmware images.
+
+## Prerequisites
+
+This quickstart assumes a basic understanding of Firmware analysis. For more information, see [Firmware analysis for device builders](./overview-firmware-analysis.md). For a list of the file systems that are supported, see [Frequently asked Questions about Firmware analysis](./firmware-analysis-faq.md#what-types-of-firmware-images-does-firmware-analysis-support).
+
+### Prepare your environment for Azure PowerShell
+
+1. [Install Azure PowerShell](/powershell/azure/install-azure-powershell) or [Use Azure Cloud Shell](/azure/cloud-shell/get-started/classic).
+
+2. Sign in to Azure PowerShell by running the command [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount). Skip this step if you're using Cloud Shell.
+
+3. If this is your first time using the Firmware analysis Azure PowerShell commands, install the `Az.FirmwareAnalysis` module:
+
+ ```powershell
+ Find-Module -Name Az.FirmwareAnalysis | Install-Module
+ ```
+
+4. [Onboard](tutorial-analyze-firmware.md#onboard-your-subscription-to-use-firmware-analysis) your subscription to Firmware analysis.
+
+5. Run [Set-AzContext](/powershell/module/az.accounts/set-azcontext) to set your subscription to use in the current session. Select the subscription where you would like to upload your firmware images.
+
+## Upload a firmware image to the workspace
+
+1. Create a firmware image to be uploaded. Insert your resource group name, workspace name, and any additional details about your firmware image that you'd like to include in the respective parameters, such as a `Description`, `FileName`, `Vendor`, `Model`, or `Version`.
+
+ ```powershell
+ New-AzFirmwareAnalysisFirmware -ResourceGroupName myResourceGroup -WorkspaceName default -Description 'sample description' -FileName file -Vendor vendor -Model model -Version version
+ ```
+
+The output of this command includes a `Name` property, which is your firmware ID. **Save this ID for the next command.**
+
+2. Generate a SAS URL that you'll use in the next step to send your firmware image to Azure Storage. Replace `sampleFirmwareID` with the firmware ID that you saved from the previous step. You can store the SAS URL in a variable for easier access in future commands:
+
+ ```powershell
+ $sasUrl = New-AzFirmwareAnalysisWorkspaceUploadUrl -FirmwareId sampleFirmwareID -ResourceGroupName myResourceGroup -WorkspaceName default
+ ```
+
+3. Use the following script to upload your firmware image to Azure Storage. Replace '`pathToFile`' with the path to your firmware image on your local machine. Wrap the path in quotation marks.
+
+ ```powershell
+ $uri = [System.Uri] $sasURL.Url
+ $storageAccountName = $uri.DnsSafeHost.Split(".")[0]
+ $container = $uri.LocalPath.Substring(1)
+ $containerName, $blob = $container -split '/', 2
+ $sasToken = $uri.Query
+ $filePath = 'pathToFile'
+ $storageContext = New-AzStorageContext -StorageAccountName $storageAccountName -SasToken $sasToken
+ Set-AzStorageBlobContent -File $filePath -Container $containerName -Context $storageContext -Blob $blob -Force
+ ```
+
+Here's an end-to-end example workflow of how you could use the Azure PowerShell commands to create and upload a firmware image. Replace the values of the variables set at the beginning to reflect your environment.
+
+```powershell
+$filePath='/path/to/image'
+$resourceGroup='myResourceGroup'
+$workspace='default'
+
+$fileName='file1'
+$vendor='vendor1'
+$model='model'
+$version='test'
+
+$FWID = (New-AzFirmwareAnalysisFirmware -ResourceGroupName $resourceGroup -WorkspaceName $workspace -FileName $fileName -Vendor $vendor -Model $model -Version $version).Name
+
+$sasUrl = New-AzFirmwareAnalysisWorkspaceUploadUrl -FirmwareId $FWID -ResourceGroupName $resourceGroup -WorkspaceName $workspace
+
+$uri = [System.Uri] $sasURL.Url
+$storageAccountName = $uri.DnsSafeHost.Split(".")[0]
+$container = $uri.LocalPath.Substring(1)
+$containerName, $blob = $container -split '/', 2
+$sasToken = $uri.Query
+$storageContext = New-AzStorageContext -StorageAccountName $storageAccountName -SasToken $sasToken
+Set-AzStorageBlobContent -File $filePath -Container $containerName -Context $storageContext -Blob $blob -Force
+```
+
+## Retrieve firmware analysis results
+
+To retrieve firmware analysis results, you must make sure that the status of the analysis is "Ready". Replace `sampleFirmwareID` with your firmware ID, `myResourceGroup` with your resource group name, and `default` with your workspace name:
+
+```powershell
+Get-AzFirmwareAnalysisFirmware -FirmwareId sampleFirmwareID -ResourceGroupName myResourceGroup -WorkspaceName default
+```
+
+Look for the "status" field to display "Ready", then run the respective commands to retrieve your firmware analysis results.
+
+If you would like to automate the process of checking your analysis's status, you can use the following script to check the resource status periodically until it reaches "Ready". Set the `$timeoutInSeconds` variable based on the size of your image; larger images may take longer to analyze, so adjust this value according to your needs.
+
+```powershell
+$ID = Get-AzFirmwareAnalysisFirmware -ResourceGroupName $resourceGroup -WorkspaceName default -FirmwareId $FWID | Select-Object -ExpandProperty Id
+
+Write-Host "Successfully created a firmware image, recognized in Azure by this resource id: $ID."
+
+$timeoutInSeconds = 10800
+$startTime = Get-Date
+
+while ($true) {
+ $resource = Get-AzResource -ResourceId $ID
+ $status = $resource.Properties.Status
+
+ if ($status -eq 'ready') {
+ Write-Host "Firmware analysis completed with status: $status"
+ break
+ }
+
+ $elapsedTime = (Get-Date) - $startTime
+ if ($elapsedTime.TotalSeconds -ge $timeoutInSeconds) {
+ Write-Host "Timeout reached. Firmware analysis status: $status"
+ break
+ }
+
+ Start-Sleep -Seconds 10
+}
+```
+
+### SBOM
+
+The following command retrieves the SBOM in your firmware image. Replace each argument with the appropriate value for your resource group, subscription, workspace name, and firmware ID.
+
+```powershell
+Get-AzFirmwareAnalysisSbomComponent -FirmwareId sampleFirmwareID -ResourceGroupName myResourceGroup -WorkspaceName default
+```
+
+### Weaknesses
+
+The following command retrieves CVEs found in your firmware image. Replace each argument with the appropriate value for your resource group, subscription, workspace name, and firmware ID.
+
+```powershell
+Get-AzFirmwareAnalysisCve -FirmwareId sampleFirmwareID -ResourceGroupName myResourceGroup -WorkspaceName default
+```
+
+### Binary hardening
+
+The following command retrieves analysis results on binary hardening in your firmware image. Replace each argument with the appropriate value for your resource group, subscription, workspace name, and firmware ID.
+
+```powershell
+Get-AzFirmwareAnalysisBinaryHardening -FirmwareId sampleFirmwareID -ResourceGroupName myResourceGroup -WorkspaceName default
+```
+
+### Password hashes
+
+The following command retrieves password hashes in your firmware image. Replace each argument with the appropriate value for your resource group, subscription, workspace name, and firmware ID.
+
+```powershell
+Get-AzFirmwareAnalysisPasswordHash -FirmwareId sampleFirmwareID -ResourceGroupName myResourceGroup -WorkspaceName default
+```
+
+### Certificates
+
+The following command retrieves vulnerable crypto certificates that were found in your firmware image. Replace each argument with the appropriate value for your resource group, subscription, workspace name, and firmware ID.
+
+```powershell
+Get-AzFirmwareAnalysisCryptoCertificate -FirmwareId sampleFirmwareID -ResourceGroupName myResourceGroup -WorkspaceName default
+```
+
+### Keys
+
+The following command retrieves vulnerable crypto keys that were found in your firmware image. Replace each argument with the appropriate value for your resource group, subscription, workspace name, and firmware ID.
+
+```powershell
+Get-AzFirmwareAnalysisCryptoKey -FirmwareId sampleFirmwareID -ResourceGroupName myResourceGroup -WorkspaceName default
+```
firmware-analysis Quickstart Upload Firmware Using Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firmware-analysis/quickstart-upload-firmware-using-python.md
+
+ Title: "Quickstart: Upload firmware images to Firmware analysis using Python"
+description: "Learn how to upload firmware images for analysis using a Python script."
++++ Last updated : 04/10/2024++
+# Quickstart: Upload firmware images to Firmware analysis using Python
+
+This article explains how to use a Python script to upload firmware images to Firmware analysis.
+
+[Firmware analysis](./overview-firmware-analysis.md) is a tool that analyzes firmware images and provides an understanding of security vulnerabilities in the firmware images.
+
+## Prerequisites
+
+This quickstart assumes a basic understanding of Firmware Analysis. For more information, see [Firmware analysis for device builders](./overview-firmware-analysis.md). For a list of the file systems that are supported, see [Frequently asked Questions about Firmware Analysis](./firmware-analysis-faq.md#what-types-of-firmware-images-does-firmware-analysis-support).
+
+### Prepare your environment
+
+1. Python version 3.8+ is required to use this package. Run the command `python --version` to check your Python version.
+2. Make note of your Azure subscription ID, the name of your Resource Group where you'd like to upload your images, your workspace name, and the name of the firmware image that you'd like to upload.
+3. Ensure that your Azure account has the necessary permissions to upload firmware images to Firmware analysis for your Azure subscription. You must be an Owner, Contributor, Security Admin, or Firmware Analysis Admin at the Subscription or Resource Group level to upload firmware images. For more information, visit [Firmware Analysis Roles, Scopes, and Capabilities](./firmware-analysis-rbac.md#firmware-analysis-roles-scopes-and-capabilities).
+4. Ensure that your firmware image is stored in the same directory as the Python script.
+5. Install the packages needed to run this script. In addition to the management and identity packages, the script also imports `azure-storage-blob`, `halo`, and `tabulate`:
+    ```bash
+    pip install azure-mgmt-iotfirmwaredefense
+    pip install azure-identity
+    pip install azure-storage-blob
+    pip install halo
+    pip install tabulate
+    ```
+6. Log in to your Azure account by running the command [`az login`](/cli/azure/reference-index?#az-login).
+
+## Run the following Python script
+
+Copy the following Python script into a `.py` file and save it to the same directory as your firmware image. Replace the `subscription_id` variable with your Azure subscription ID, `resource_group_name` with the name of your Resource Group where you'd like to upload your firmware image, and `firmware_file` with the name of your firmware image, which is saved in the same directory as the Python script.
+
+```python
+from azure.identity import AzureCliCredential
+from azure.mgmt.iotfirmwaredefense import *
+from azure.mgmt.iotfirmwaredefense.models import *
+from azure.core.exceptions import *
+from azure.storage.blob import BlobClient
+import uuid
+from time import sleep
+from halo import Halo
+from tabulate import tabulate
+
+subscription_id = "subscription-id"
+resource_group_name = "resource-group-name"
+workspace_name = "default"
+firmware_file = "firmware-image-name"
+
+def main():
+ firmware_id = str(uuid.uuid4())
+ fw_client = init_connections(firmware_id)
+ upload_firmware(fw_client, firmware_id)
+ get_results(fw_client, firmware_id)
+
+def init_connections(firmware_id):
+ spinner = Halo(text=f"Creating client for firmware {firmware_id}")
+ cli_credential = AzureCliCredential()
+ client = IoTFirmwareDefenseMgmtClient(cli_credential, subscription_id, 'https://management.azure.com')
+ spinner.succeed()
+ return client
+
+def upload_firmware(fw_client, firmware_id):
+ spinner = Halo(text="Uploading firmware to Azure...", spinner="dots")
+ spinner.start()
+ token = fw_client.workspaces.generate_upload_url(resource_group_name, workspace_name, {"firmware_id": firmware_id})
+ fw_client.firmwares.create(resource_group_name, workspace_name, firmware_id, {"properties": {"file_name": firmware_file, "vendor": "Contoso Ltd.", "model": "Wifi Router", "version": "1.0.1", "status": "Pending"}})
+ bl_client = BlobClient.from_blob_url(token.url)
+ with open(file=firmware_file, mode="rb") as data:
+ bl_client.upload_blob(data=data)
+ spinner.succeed()
+
+def get_results(fw_client, firmware_id):
+ fw = fw_client.firmwares.get(resource_group_name, workspace_name, firmware_id)
+
+ spinner = Halo("Waiting for analysis to finish...", spinner="dots")
+ spinner.start()
+ while fw.properties.status != "Ready":
+ sleep(5)
+ fw = fw_client.firmwares.get(resource_group_name, workspace_name, firmware_id)
+ spinner.succeed()
+
+ print("-"*107)
+
+ summary = fw_client.summaries.get(resource_group_name, workspace_name, firmware_id, summary_name=SummaryName.FIRMWARE)
+ print_summary(summary.properties)
+ print()
+
+ components = fw_client.sbom_components.list_by_firmware(resource_group_name, workspace_name, firmware_id)
+ if components is not None:
+ print_components(components)
+ else:
+ print("No components found")
+
+def print_summary(summary):
+ table = [[summary.extracted_size, summary.file_size, summary.extracted_file_count, summary.component_count, summary.binary_count, summary.analysis_time_seconds, summary.root_file_systems]]
+ header = ["Extracted Size", "File Size", "Extracted Files", "Components", "Binaries", "Analysis Time", "File Systems"]
+ print(tabulate(table, header))
+
+def print_components(components):
+ table = []
+ header = ["Component", "Version", "License", "Paths"]
+ for com in components:
+ table.append([com.properties.component_name, com.properties.version, com.properties.license, com.properties.file_paths])
+ print(tabulate(table, header, maxcolwidths=[None, None, None, 57]))
+
+if __name__ == "__main__":
+ exit(main())
+```
firmware-analysis Tutorial Analyze Firmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firmware-analysis/tutorial-analyze-firmware.md
+
+ Title: Analyze a firmware image with the Firmware analysis service.
+description: Learn to analyze a compiled firmware image using Firmware analysis.
+ Last updated : 06/15/2023++
+#Customer intent: As a device builder, I want to see what vulnerabilities or weaknesses might exist in my firmware image.
++
+# Tutorial: Analyze an IoT/OT firmware image with Firmware analysis
+
+This tutorial describes how to use the **Firmware analysis** page to upload a firmware image for security analysis and view analysis results.
+
+> [!NOTE]
+> The **Firmware analysis** page is in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include other legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+>
+
+## Prerequisites
+
+> [!NOTE]
+> The **Firmware analysis** feature is automatically available if you currently access Defender for IoT using the Security Admin, Contributor, or Owner role. If you only have the Security Reader role or want to use **Firmware analysis** as a standalone feature, your Admin must give you the Firmware Analysis Admin role. For more information, see [Firmware analysis Azure RBAC](./firmware-analysis-rbac.md).
+>
+
+* If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* If you have a subscription but don't have a resource group where you could upload your firmware images, [create a resource group](/azure/azure-resource-manager/management/manage-resource-groups-portal#create-resource-groups).
+* If you already have a subscription and resource group, move on to the next section.
+
+To use the **Firmware analysis** page to analyze your firmware security, your firmware image must have the following prerequisites:
+
+- You must have access to the compiled firmware image.
+
+- Your image must be an unencrypted, Linux-based firmware image.
+
+- Your image must be less than 5 GB in size.
+
+## Onboard your subscription to use Firmware analysis
+> [!NOTE]
+> To onboard a subscription to use Firmware analysis, you must be an Owner, Contributor, Firmware Analysis Admin, or Security Admin at the subscription level. To learn more about roles and their capabilities in Firmware Analysis, visit [Firmware Analysis Roles, Scopes, and Capabilities](./firmware-analysis-rbac.md#firmware-analysis-roles-scopes-and-capabilities).
+>
+
+If this is your first interaction with **Firmware analysis**, then you'll need to onboard your subscription to the service and select a region in which to upload and store your firmware images.
+
+1. Sign in to the Azure portal and go to Defender for IoT.
+
+ :::image type="content" source="media/tutorial-firmware-analysis/defender-portal.png" alt-text="Screenshot of the 'Getting started' page." lightbox="media/tutorial-firmware-analysis/defender-portal.png":::
+
+2. Select **Set up a subscription** in the **Get Started** card, or select the **Subscription management** subtab.
+
+ :::image type="content" source="media/tutorial-firmware-analysis/subscription-management.png" alt-text="Screenshot of the 'Subscription management' page." lightbox="media/tutorial-firmware-analysis/subscription-management.png":::
+
+3. Select **Onboard a new subscription**.
+
+ :::image type="content" source="media/tutorial-firmware-analysis/onboard-subscription.png" alt-text="Screenshot of the 'Onboard subscription' pane appearing on the right side of the screen." lightbox="media/tutorial-firmware-analysis/onboard-subscription.png":::
+
+4. In the **Onboard subscription** pane, select a subscription from the drop-down list.
+5. Select a resource group from the **Resource group** drop-down or create a new resource group.
+6. Select a region to use for storage in the **Location** drop-down.
+7. Select **Onboard** to onboard your subscription to Firmware analysis.
+
+ :::image type="content" source="media/tutorial-firmware-analysis/completed-onboarding.png" alt-text="Screenshot of the 'Onboard subscription' pane when it's completed." lightbox="media/tutorial-firmware-analysis/completed-onboarding.png":::
+
+## Upload a firmware image for analysis
+
+If you've just onboarded your subscription and you're already signed in to the Azure portal in Defender for IoT, skip to step 2.
+
+1. Sign in to the Azure portal and go to Defender for IoT.
+
+1. Select **Firmware analysis** > **Firmware inventory** > **Upload**.
+
+1. In the **Upload a firmware image** pane, select **Choose file**. Browse to and select the firmware image file you want to upload.
+
+ :::image type="content" source="media/tutorial-firmware-analysis/upload.png" alt-text="Screenshot that shows clicking the Upload option within Firmware Analysis." lightbox="media/tutorial-firmware-analysis/upload.png":::
+
+1. Select a **Subscription** that you've onboarded to Firmware analysis. Then select the **Resource group** that you'd like to upload your firmware image to.
+
+1. Enter the following details:
+
+ - The firmware's vendor
+ - The firmware's model
+ - The firmware's version
+ - An optional description of your firmware
+
+1. Select **Upload** to upload your firmware for analysis.
+
+ Your firmware appears in the grid on the **Firmware inventory** page.
+
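+You can also upload a firmware image from the command line. Here's a minimal bash-flavored sketch based on the [Azure CLI quickstart](./quickstart-upload-firmware-using-azure-command-line-interface.md); the resource group, workspace, and file names are placeholders:
+
+```azurecli
+# Register the firmware image and capture its ID (placeholder names).
+firmwareId=$(az firmwareanalysis firmware create --resource-group myResourceGroup --workspace-name default --file-name firmware.bin --query "name" --output tsv)
+
+# Generate a SAS upload URL for that firmware ID.
+sasUrl=$(az firmwareanalysis workspace generate-upload-url --resource-group myResourceGroup --workspace-name default --firmware-id "$firmwareId" --query "url" --output tsv)
+
+# Upload the local image to the SAS URL.
+az storage blob upload -f ./firmware.bin --blob-url "$sasUrl"
+```
+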
+## View firmware analysis results
+
+The analysis time will vary based on the size of the firmware image and the number of files discovered in the image. While the analysis is taking place, the status will say *Extracting* and then *Analyzing*. When the status is *Ready*, you can see the firmware analysis results.
+
+1. Sign in to the Azure portal and go to Microsoft Defender for IoT > **Firmware analysis** > **Firmware inventory**.
+
+1. Select the row of the firmware you want to view. The **Firmware overview** pane shows basic data about the firmware on the right.
+
+ :::image type="content" source="media/tutorial-firmware-analysis/firmware-details.png" alt-text="Screenshot that shows clicking the row with the firmware image to see the side panel details." lightbox="media/tutorial-firmware-analysis/firmware-details.png":::
+
+1. Select **View results** to drill down for more details.
+
+ :::image type="content" source="media/tutorial-firmware-analysis/overview.png" alt-text="Screenshot that shows clicking view results button for a detailed analysis of the firmware image." lightbox="media/tutorial-firmware-analysis/overview.png":::
+
+1. The firmware details page shows security analysis results on the following tabs:
+
+ |Name |Description |
+ |||
+ |**Overview** | View an overview of all of the analysis results.|
+    |**Software Components** | View a software bill of materials with the following details: <br><br> - A list of open source components used to create the firmware image <br>- Component version information <br>- Component license <br>- Executable path of the binary |
+ |**Weaknesses** | View a listing of common vulnerabilities and exposures (CVEs). <br><br>Select a specific CVE to view more details. |
+    |**Binary Hardening** | View whether executables were compiled using recommended security settings: <br><br>- NX <br>- PIE<br>- RELRO<br>- CANARY<br>- STRIPPED<br><br> Select a specific binary to view more details.|
+ |**Password Hashes** | View embedded accounts and their associated password hashes.<br><br>Select a specific user account to view more details.|
+ |**Certificates** | View a list of TLS/SSL certificates found in the firmware.<br><br>Select a specific certificate to view more details.|
+ |**Keys** | View a list of public and private crypto keys in the firmware.<br><br>Select a specific key to view more details.|
+
+ :::image type="content" source="media/tutorial-firmware-analysis/weaknesses.png" alt-text="Screenshot that shows the weaknesses (CVE) analysis of the firmware image." lightbox="media/tutorial-firmware-analysis/weaknesses.png":::
+
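+You can also retrieve the same analysis results from the command line once the status is *Ready*. Here's a minimal sketch based on the [Azure CLI quickstart](./quickstart-upload-firmware-using-azure-command-line-interface.md); the names are placeholders:
+
+```azurecli
+# List the CVEs found in the analyzed firmware image (placeholder names).
+az firmwareanalysis firmware cve --resource-group myResourceGroup --workspace-name default --firmware-id sampleFirmwareID
+```
+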
+## Delete a firmware image
+
+Delete a firmware image from Firmware analysis when you no longer need its analysis results.
+
+After you delete an image, there's no way to retrieve the image or the associated analysis results. If you need the results, you'll need to upload the firmware image again for analysis.
+
+**To delete a firmware image**:
+
+1. Select the checkbox for the firmware image you want to delete and then select **Delete**.
+
+## Next steps
+
+For more information, see [Firmware analysis for device builders](./overview-firmware-analysis.md).
+
+To use the Azure CLI commands for Firmware analysis, refer to the [Azure CLI Quickstart](./quickstart-upload-firmware-using-azure-command-line-interface.md), and see [Azure PowerShell Quickstart](./quickstart-upload-firmware-using-powershell.md) to use the Azure PowerShell commands. See [Quickstart: Upload firmware using Python](./quickstart-upload-firmware-using-python.md) to run a Python script using the SDK to upload and analyze firmware images.
+
+Visit [FAQs about Firmware analysis](./firmware-analysis-faq.md) for answers to frequent questions.
hdinsight Azure Cli Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/azure-cli-samples.md
export resourceGroupName=RESOURCEGROUPNAME
export clusterType=spark
export httpCredential='PASSWORD'
export AZURE_STORAGE_ACCOUNT=STORAGEACCOUNTNAME
-export subnet="/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/MyRG/providers/Microsoft.Network/virtualNetworks/MyVnet/subnets/subnet1"
-export domain="/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/MyRG/providers/Microsoft.AAD/domainServices/MyDomain.onmicrosoft.com"
-export userAssignedIdentity="/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/MyMsiRG/providers/Microsoft.ManagedIdentity/userAssignedIdentities/MyMSI"
+export subnet="/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/MyRG/providers/Microsoft.Network/virtualNetworks/MyVnet/subnets/subnet1"
+export domain="/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/MyRG/providers/Microsoft.AAD/domainServices/MyDomain.onmicrosoft.com"
+export userAssignedIdentity="/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourcegroups/MyMsiRG/providers/Microsoft.ManagedIdentity/userAssignedIdentities/MyMSI"
export domainAccount=MyAdminAccount@MyDomain.onmicrosoft.com
export groupDNS=MyGroup
export AZURE_STORAGE_ACCOUNT=STORAGEACCOUNTNAME
export encryptionKeyName=kafkaClusterKey
export encryptionKeyVersion=00000000000000000000000000000000
export encryptionVaultUri=https://MyKeyVault.vault.azure.net
-export userAssignedIdentity="/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/MyMsiRG/providers/Microsoft.ManagedIdentity/userAssignedIdentities/MyMSI"
+export userAssignedIdentity="/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourcegroups/MyMsiRG/providers/Microsoft.ManagedIdentity/userAssignedIdentities/MyMSI"
az hdinsight create \
    --name $clusterName \
hdinsight Create Cluster Error Dictionary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/create-cluster-error-dictionary.md
Make sure that the **VirtualNetworkId** and subnet values are in the correct format.
Here's an example of a virtual network ID:
-"/subscriptions/c15fd9b8-e2b8-1d4e-aa85-2e668040233b/resourceGroups/myresourcegroup/providers/Microsoft.Network/virtualNetworks/myvnet"
+"/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/myresourcegroup/providers/Microsoft.Network/virtualNetworks/myvnet"
hdinsight Domain Joined Authentication Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/domain-joined/domain-joined-authentication-issues.md
When the authentication fails, you get prompted for credentials. If you cancel t
Sign in fails for federated users with error code 50126 (sign in succeeds for cloud users). Error message is similar to: ```
-Reason: Bad Request, Detailed Response: {"error":"invalid_grant","error_description":"AADSTS70002: Error validating credentials. AADSTS50126: Invalid username or password\r\nTrace ID: 09cc9b95-4354-46b7-91f1-efd92665ae00\r\n Correlation ID: 4209bedf-f195-4486-b486-95a15b70fbe4\r\nTimestamp: 2019-01-28 17:49:58Z","error_codes":[70002,50126], "timestamp":"2019-01-28 17:49:58Z","trace_id":"09cc9b95-4354-46b7-91f1-efd92665ae00","correlation_id":"4209bedf-f195-4486-b486-95a15b70fbe4"}
+Reason: Bad Request, Detailed Response: {"error":"invalid_grant","error_description":"AADSTS70002: Error validating credentials. AADSTS50126: Invalid username or password\r\nTrace ID: 0000aaaa-11bb-cccc-dd22-eeeeee333333\r\n Correlation ID: aaaa0000-bb11-2222-33cc-444444dddddd\r\nTimestamp: 2019-01-28 17:49:58Z","error_codes":[70002,50126], "timestamp":"2019-01-28 17:49:58Z","trace_id":"0000aaaa-11bb-cccc-dd22-eeeeee333333","correlation_id":"aaaa0000-bb11-2222-33cc-444444dddddd"}
``` ### Cause
The Administrator of the Microsoft Entra tenant should enable Microsoft Entra ID
Sign in fails with error code 50034. Error message is similar to: ```
-{"error":"invalid_grant","error_description":"AADSTS50034: The user account Microsoft.AzureAD.Telemetry.Diagnostics.PII doesn't exist in the 0c349e3f-1ac3-4610-8599-9db831cbaf62 directory. To sign into this application, the account must be added to the directory.\r\nTrace ID: bbb819b2-4c6f-4745-854d-0b72006d6800\r\nCorrelation ID: b009c737-ee52-43b2-83fd-706061a72b41\r\nTimestamp: 2019-04-29 15:52:16Z", "error_codes":[50034],"timestamp":"2019-04-29 15:52:16Z","trace_id":"bbb819b2-4c6f-4745-854d-0b72006d6800", "correlation_id":"b009c737-ee52-43b2-83fd-706061a72b41"}
+{"error":"invalid_grant","error_description":"AADSTS50034: The user account Microsoft.AzureAD.Telemetry.Diagnostics.PII doesn't exist in the 0c349e3f-1ac3-4610-8599-9db831cbaf62 directory. To sign into this application, the account must be added to the directory.\r\nTrace ID: 2222cccc-33dd-eeee-ff44-aaaaaa555555\r\nCorrelation ID: cccc2222-dd33-4444-55ee-666666ffffff\r\nTimestamp: 2019-04-29 15:52:16Z", "error_codes":[50034],"timestamp":"2019-04-29 15:52:16Z","trace_id":"2222cccc-33dd-eeee-ff44-aaaaaa555555", "correlation_id":"cccc2222-dd33-4444-55ee-666666ffffff"}
``` ### Cause
Use the same user name that works in that portal.
User account is locked out, error code 50053. Error message is similar to: ```
-{"error":"unauthorized_client","error_description":"AADSTS50053: You've tried to sign in too many times with an incorrect user ID or password.\r\nTrace ID: 844ac5d8-8160-4dee-90ce-6d8c9443d400\r\nCorrelation ID: 23fe8867-0e8f-4e56-8764-0cdc7c61c325\r\nTimestamp: 2019-06-06 09:47:23Z","error_codes":[50053],"timestamp":"2019-06-06 09:47:23Z","trace_id":"844ac5d8-8160-4dee-90ce-6d8c9443d400","correlation_id":"23fe8867-0e8f-4e56-8764-0cdc7c61c325"}
+{"error":"unauthorized_client","error_description":"AADSTS50053: You've tried to sign in too many times with an incorrect user ID or password.\r\nTrace ID: 00aa00aa-bb11-cc22-dd33-44ee44ee44ee\r\nCorrelation ID: 11bb11bb-cc22-dd33-ee44-55ff55ff55ff\r\nTimestamp: 2019-06-06 09:47:23Z","error_codes":[50053],"timestamp":"2019-06-06 09:47:23Z","trace_id":"aaaa0000-bb11-2222-33cc-444444dddddd","correlation_id":"aaaa0000-bb11-2222-33cc-444444dddddd"}
``` ### Cause
Wait for 30 minutes or so, stop any applications that might be trying to authent
Password expired, error code 50053. Error message is similar to: ```
-{"error":"user_password_expired","error_description":"AADSTS50055: Password is expired.\r\nTrace ID: 241a7a47-e59f-42d8-9263-fbb7c1d51e00\r\nCorrelation ID: c7fe4a42-67e4-4acd-9fb6-f4fb6db76d6a\r\nTimestamp: 2019-06-06 17:29:37Z","error_codes":[50055],"timestamp":"2019-06-06 17:29:37Z","trace_id":"241a7a47-e59f-42d8-9263-fbb7c1d51e00","correlation_id":"c7fe4a42-67e4-4acd-9fb6-f4fb6db76d6a","suberror":"user_password_expired","password_change_url":"https://portal.microsoftonline.com/ChangePassword.aspx"}
+{"error":"user_password_expired","error_description":"AADSTS50055: Password is expired.\r\nTrace ID: 6666aaaa-77bb-cccc-dd88-eeeeee999999\r\nCorrelation ID: eeee4444-ff55-6666-77aa-888888bbbbbb\r\nTimestamp: 2019-06-06 17:29:37Z","error_codes":[50055],"timestamp":"2019-06-06 17:29:37Z","trace_id":"6666aaaa-77bb-cccc-dd88-eeeeee999999","correlation_id":"eeee4444-ff55-6666-77aa-888888bbbbbb","suberror":"user_password_expired","password_change_url":"https://portal.microsoftonline.com/ChangePassword.aspx"}
``` ### Cause
hdinsight Identity Broker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/domain-joined/identity-broker.md
To troubleshoot authentication issues, see [this guide](./domain-joined-authenti
In the HDInsight ID Broker set up, custom apps and clients that connect to the gateway can be updated to acquire the required OAuth token first. For more information, see [How to authenticate .NET applications with Azure services](/dotnet/azure/sdk/authentication). The key values required for authorizing access to an HDInsight gateway are: * OAuth resource uri: `https://hib.azurehdinsight.net`
-* AppId: 7865c1d2-f040-46cc-875f-831a1ef6a28a
+* AppId: 00001111-aaaa-2222-bbbb-3333cccc4444
* Permission: (name: Cluster.ReadWrite, id: 8f89faa0-ffef-4007-974d-4989b39ad77d) After you acquire the OAuth token, use it in the authorization header of the HTTP request to the cluster gateway (for example, https://\<clustername\>-int.azurehdinsight.net). A sample curl command to Apache Livy API might look like this example:
hdinsight Hdinsight Troubleshoot Data Lake Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/hdinsight-troubleshoot-data-lake-files.md
The certificate provided for Service principal access might have expired.
``` Exception in OAuthTokenController.GetOAuthToken: 'System.InvalidOperationException: Error while getting the OAuth token from AAD for AppPrincipalId 23abe517-2ffd-4124-aa2d-7c224672cae2, ResourceUri https://management.core.windows.net/, AADTenantId https://login.windows.net/80abc8bf-86f1-41af-91ab-2d7cd011db47, ClientCertificateThumbprint C49C25705D60569884EDC91986CEF8A01A495783 > Microsoft.IdentityModel.Clients.ActiveDirectory.AdalServiceException: AADSTS70002: Error validating credentials. AADSTS50012: Client assertion contains an invalid signature. **[Reason - The key used is expired.**, Thumbprint of key used by client: 'C49C25705D60569884EDC91986CEF8A01A495783', Found key 'Start=08/03/2016, End=08/03/2017, Thumbprint=C39C25705D60569884EDC91986CEF8A01A4956D1', Configured keys: [Key0:Start=08/03/2016, End=08/03/2017, Thumbprint=C39C25705D60569884EDC91986CEF8A01A4956D1;]]
- Trace ID: e4d34f1c-a584-47f5-884e-1235026d5000
- Correlation ID: a44d870e-6f23-405a-8b23-9b44aebfa4bb
+ Trace ID: 0000aaaa-11bb-cccc-dd22-eeeeee333333
+ Correlation ID: aaaa0000-bb11-2222-33cc-444444dddddd
Timestamp: 2017-10-06 20:44:56Z > System.Net.WebException: The remote server returned an error: (401) Unauthorized. at System.Net.HttpWebRequest.GetResponse() at Microsoft.IdentityModel.Clients.ActiveDirectory.HttpWebRequestWrapper.<GetResponseSyncOrAsync>d__2.MoveNext()
hdinsight Rest Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/rest-proxy.md
def get_random_string():
#--Configure these properties-#
# Tenant ID for your Azure Subscription
-tenant_id = 'ABCDEFGH-1234-1234-1234-ABCDEFGHIJKL'
+tenant_id = 'aaaabbbb-0000-cccc-1111-dddd2222eeee'
# Your Client Application Id
-client_id = 'XYZABCDE-1234-1234-1234-ABCDEFGHIJKL'
+client_id = '00001111-aaaa-2222-bbbb-3333cccc4444'
# Your Client Credentials
client_secret = 'password'
# kafka rest proxy -endpoint
hdinsight Service Endpoint Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/service-endpoint-policies.md
Use the following process to create the necessary service endpoint policies:
```json "Canada Central":[
- "/subscriptions/235d341f-7fb9-435c-9bdc-034b7306c9b4/resourceGroups/Default-Storage-WestUS",
- "/subscriptions/da0c4c68-9283-4f88-9c35-18f7bd72fbdd/resourceGroups/GenevaWarmPathManageRG",
- "/subscriptions/6a853a41-3423-4167-8d9c-bcf37dc72818/resourceGroups/GenevaWarmPathManageRG",
- "/subscriptions/c8845df8-14d1-4a46-b6dd-e0c44ae400b0/resourceGroups/Default-Storage-CanadaCentral",
- "/subscriptions/c8845df8-14d1-4a46-b6dd-e0c44ae400b0/resourceGroups/cancstorage",
- "/subscriptions/c8845df8-14d1-4a46-b6dd-e0c44ae400b0/resourceGroups/GenevaWarmPathManageRG",
- "/subscriptions/fb3429ab-83d0-4bed-95e9-1a8e9455252c/resourceGroups/DistroStorageRG/providers/Microsoft.Storage/storageAccounts/hdi31distrorelease",
- "/subscriptions/fb3429ab-83d0-4bed-95e9-1a8e9455252c/resourceGroups/DistroStorageRG/providers/Microsoft.Storage/storageAccounts/bigdatadistro"
+ "/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/Default-Storage-WestUS",
+ "/subscriptions/bbbb1b1b-cc2c-dd3d-ee4e-ffffff5f5f5f/resourceGroups/GenevaWarmPathManageRG",
+ "/subscriptions/cccc2c2c-dd3d-ee4e-ff5f-aaaaaa6a6a6a/resourceGroups/GenevaWarmPathManageRG",
+ "/subscriptions/dddd3d3d-ee4e-ff5f-aa6a-bbbbbb7b7b7b/resourceGroups/Default-Storage-CanadaCentral",
+ "/subscriptions/dddd3d3d-ee4e-ff5f-aa6a-bbbbbb7b7b7b/resourceGroups/cancstorage",
+ "/subscriptions/dddd3d3d-ee4e-ff5f-aa6a-bbbbbb7b7b7b/resourceGroups/GenevaWarmPathManageRG",
+ "/subscriptions/eeee4efe-ff5f-aa6a-bb7b-cccccc8c8c8c/resourceGroups/DistroStorageRG/providers/Microsoft.Storage/storageAccounts/hdi31distrorelease",
+ "/subscriptions/eeee4efe-ff5f-aa6a-bb7b-cccccc8c8c8c/resourceGroups/DistroStorageRG/providers/Microsoft.Storage/storageAccounts/bigdatadistro"
], ```
Use the following process to create the necessary service endpoint policies:
# Insert the list of HDInsight owned resources for the region your clusters will be created in. # Be sure to get the most recent list of resource groups from the [list of service endpoint policy resources](https://github.com/Azure-Samples/hdinsight-enterprise-security/blob/main/hdinsight-service-endpoint-policy-resources.json)
- [String[]]$resources = @("/subscriptions/235d341f-7fb9-435c-9bdc-034b7306c9b4/resourceGroups/Default-Storage-WestUS",`
- "/subscriptions/da0c4c68-9283-4f88-9c35-18f7bd72fbdd/resourceGroups/GenevaWarmPathManageRG",`
- "/subscriptions/6a853a41-3423-4167-8d9c-bcf37dc72818/resourceGroups/GenevaWarmPathManageRG",`
- "/subscriptions/c8845df8-14d1-4a46-b6dd-e0c44ae400b0/resourceGroups/Default-Storage-CanadaCentral",`
- "/subscriptions/c8845df8-14d1-4a46-b6dd-e0c44ae400b0/resourceGroups/cancstorage",`
- "/subscriptions/c8845df8-14d1-4a46-b6dd-e0c44ae400b0/resourceGroups/GenevaWarmPathManageRG",
- "/subscriptions/fb3429ab-83d0-4bed-95e9-1a8e9455252c/resourceGroups/DistroStorageRG/providers/Microsoft.Storage/storageAccounts/hdi31distrorelease",
- "/subscriptions/fb3429ab-83d0-4bed-95e9-1a8e9455252c/resourceGroups/DistroStorageRG/providers/Microsoft.Storage/storageAccounts/bigdatadistro")
+ [String[]]$resources = @("/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/Default-Storage-WestUS",`
+ "/subscriptions/bbbb1b1b-cc2c-dd3d-ee4e-ffffff5f5f5f/resourceGroups/GenevaWarmPathManageRG",`
+ "/subscriptions/cccc2c2c-dd3d-ee4e-ff5f-aaaaaa6a6a6a/resourceGroups/GenevaWarmPathManageRG",`
+ "/subscriptions/dddd3d3d-ee4e-ff5f-aa6a-bbbbbb7b7b7b/resourceGroups/Default-Storage-CanadaCentral",`
+ "/subscriptions/dddd3d3d-ee4e-ff5f-aa6a-bbbbbb7b7b7b/resourceGroups/cancstorage",`
+ "/subscriptions/dddd3d3d-ee4e-ff5f-aa6a-bbbbbb7b7b7b/resourceGroups/GenevaWarmPathManageRG",
+ "/subscriptions/eeee4efe-ff5f-aa6a-bb7b-cccccc8c8c8c/resourceGroups/DistroStorageRG/providers/Microsoft.Storage/storageAccounts/hdi31distrorelease",
+ "/subscriptions/eeee4efe-ff5f-aa6a-bb7b-cccccc8c8c8c/resourceGroups/DistroStorageRG/providers/Microsoft.Storage/storageAccounts/bigdatadistro")
#Assign service resources to the SEP policy. az network service-endpoint policy-definition create -g $rgName --policy-name $sepName -n $sepDefName --service "Microsoft.Storage" --service-resources $resources
Use the following process to create the necessary service endpoint policies:
# Insert the list of HDInsight owned resources for the region your clusters will be created in. # Be sure to get the most recent list of resource groups from the [list of service endpoint policy resources](https://github.com/Azure-Samples/hdinsight-enterprise-security/blob/main/hdinsight-service-endpoint-policy-resources.json)
- [String[]]$resources = @("/subscriptions/235d341f-7fb9-435c-9bdc-034b7306c9b4/resourceGroups/Default-Storage-WestUS",
- "/subscriptions/da0c4c68-9283-4f88-9c35-18f7bd72fbdd/resourceGroups/GenevaWarmPathManageRG",
- "/subscriptions/6a853a41-3423-4167-8d9c-bcf37dc72818/resourceGroups/GenevaWarmPathManageRG",
- "/subscriptions/c8845df8-14d1-4a46-b6dd-e0c44ae400b0/resourceGroups/Default-Storage-CanadaCentral",
- "/subscriptions/c8845df8-14d1-4a46-b6dd-e0c44ae400b0/resourceGroups/cancstorage",
- "/subscriptions/c8845df8-14d1-4a46-b6dd-e0c44ae400b0/resourceGroups/GenevaWarmPathManageRG",
- "/subscriptions/fb3429ab-83d0-4bed-95e9-1a8e9455252c/resourceGroups/DistroStorageRG/providers/Microsoft.Storage/storageAccounts/hdi31distrorelease",
- "/subscriptions/fb3429ab-83d0-4bed-95e9-1a8e9455252c/resourceGroups/DistroStorageRG/providers/Microsoft.Storage/storageAccounts/bigdatadistro")
+ [String[]]$resources = @("/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/Default-Storage-WestUS",
+ "/subscriptions/bbbb1b1b-cc2c-dd3d-ee4e-ffffff5f5f5f/resourceGroups/GenevaWarmPathManageRG",
+ "/subscriptions/cccc2c2c-dd3d-ee4e-ff5f-aaaaaa6a6a6a/resourceGroups/GenevaWarmPathManageRG",
+ "/subscriptions/dddd3d3d-ee4e-ff5f-aa6a-bbbbbb7b7b7b/resourceGroups/Default-Storage-CanadaCentral",
+ "/subscriptions/dddd3d3d-ee4e-ff5f-aa6a-bbbbbb7b7b7b/resourceGroups/cancstorage",
+ "/subscriptions/dddd3d3d-ee4e-ff5f-aa6a-bbbbbb7b7b7b/resourceGroups/GenevaWarmPathManageRG",
+ "/subscriptions/eeee4efe-ff5f-aa6a-bb7b-cccccc8c8c8c/resourceGroups/DistroStorageRG/providers/Microsoft.Storage/storageAccounts/hdi31distrorelease",
+ "/subscriptions/eeee4efe-ff5f-aa6a-bb7b-cccccc8c8c8c/resourceGroups/DistroStorageRG/providers/Microsoft.Storage/storageAccounts/bigdatadistro")
#Declare service endpoint policy definition $sepDef = New-AzServiceEndpointPolicyDefinition -Name "SEPHDICanadaCentral" -Description "Service Endpoint Policy Definition" -Service "Microsoft.Storage" -ServiceResource $resources
internet-peering Walkthrough Device Maintenance Notification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-peering/walkthrough-device-maintenance-notification.md
Title: Device maintenance notification walkthrough description: Learn how to view current and past peering device maintenance events, and how to create alerts to receive notifications for future events.- + Previously updated : 06/15/2023-- Last updated : 10/25/2024 # Azure Peering maintenance notification walkthrough
If you're a partner who has Internet Peering or Peering Service resources in Azu
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. In the search box at the top of the portal, enter *service health*. Select **Service Health** in the search results.
+1. In the search box at the top of the portal, enter ***service health***. Select **Service Health** in the search results.
:::image type="content" source="./media/walkthrough-device-maintenance-notification/service-health-portal-search.png" alt-text="Screenshot shows how to search for Service Health in the Azure portal." lightbox="./media/walkthrough-device-maintenance-notification/service-health-portal-search.png":::
If you're a partner who has Internet Peering or Peering Service resources in Azu
 The summary tab gives you information about the resource affected by a maintenance event, such as the Azure subscription, region, and peering location.
- Once maintenance is completed, a status update is sent. You'll be able to view and review the maintenance event in the **Health history** page after it's completed.
+ Once maintenance is completed, a status update is sent. You'll be able to view and review the maintenance event in the **Health history** page after it's complete.
1. Select **Health history** to see past maintenance events.
Service Health supports forwarding rules, so you can set up your own alerts when
1. To set up a forwarding rule, go to the **Planned maintenance** page, and then select **+ Add service health alert**.
- :::image type="content" source="./media/walkthrough-device-maintenance-notification/add-service-health-alert.png" alt-text="Screenshot shows how to add an alert.":::
+ :::image type="content" source="./media/walkthrough-device-maintenance-notification/add-service-health-alert.png" alt-text="Screenshot shows how to add an alert." lightbox="./media/walkthrough-device-maintenance-notification/add-service-health-alert.png":::
1. In the **Scope** tab, select the Azure subscription your Internet Peering or Peering Service is associated with. When a maintenance event affects a resource, the alert in Service Health is associated with the Azure subscription ID of the resource.
Service Health supports forwarding rules, so you can set up your own alerts when
| Setting | Value | | | | | Services | Select **Azure Peering Service**. |
- | Regions | Select the Azure region(s) of the resources that you want to get notified whenever they have planned maintenance events. |
+ | Regions | Select the Azure regions of the resources for which you want to be notified about planned maintenance events. |
| Event types | Select **Planned maintenance**. | :::image type="content" source="./media/walkthrough-device-maintenance-notification/create-alert-rule-condition.png" alt-text="Screenshot shows the Condition tab of creating an alert rule in the Azure portal.":::
Service Health supports forwarding rules, so you can set up your own alerts when
:::image type="content" source="./media/walkthrough-device-maintenance-notification/create-alert-rule-actions.png" alt-text="Screenshot shows the Actions tab before creating a new action group.":::
-1. In the **Basics** tab of **Create action group**, enter or select the following information:
+1. In the **Basics** tab of **Create action group**, enter, or select the following information:
| Setting | Value | | | |
Service Health supports forwarding rules, so you can set up your own alerts when
:::image type="content" source="./media/walkthrough-device-maintenance-notification/create-alert-rule-actions-group.png" alt-text="Screenshot shows the Actions tab after creating a new action group.":::
-1. Select **Test action group** to send test notification(s) to the contact information you previously entered in the action group (to change the contact information, select the pencil icon next to the notification).
+1. Select **Test action group** to send test notifications to the contact information you previously entered in the action group (to change the contact information, select the pencil icon next to the notification).
:::image type="content" source="./media/walkthrough-device-maintenance-notification/edit-action-group.png" alt-text="Screenshot shows how to edit an action group in the Azure portal.":::
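The same Service Health alert can also be scripted. The following Azure CLI sketch creates an activity log alert for Service Health events and attaches an existing action group; the name, scope, and condition values are placeholders, and narrowing the alert to planned maintenance for Azure Peering Service can be done in the portal as described above.

```azurecli
az monitor activity-log alert create \
  --name peering-maintenance-alert \
  --resource-group <RESOURCE_GROUP> \
  --scope /subscriptions/<SUBSCRIPTION_ID> \
  --condition category=ServiceHealth \
  --action-group <ACTION_GROUP_RESOURCE_ID>
```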
Peering partners who haven't onboarded their peerings as Azure resources can't r
:::image type="content" source="./media/walkthrough-device-maintenance-notification/legacy-peering-maintenance-email.png" alt-text="Screenshot shows an example of a legacy peering maintenance email.":::
-## Next steps
+## Next step
-- Learn about the [Prerequisites to set up peering with Microsoft](prerequisites.md).
+> [!div class="nextstepaction"]
+> [Prerequisites to set up peering with Microsoft](prerequisites.md)
iot-edge Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/support.md
Modules built as Linux containers can be deployed to either Linux or Windows dev
| Operating System | AMD64 | ARM32v7 | ARM64 | End of OS provider standard support | | - | -- | - | -- | -- |
-| [Debian 12](https://www.debian.org/releases/bookworm/) | | ![Debian + ARM32v7](./media/support/green-check.png) | | [June 2028](https://wiki.debian.org/LTS) |
+| [Debian 12](https://www.debian.org/releases/bookworm/) | ![Debian + AMD64](./media/support/green-check.png) | ![Debian + ARM32v7](./media/support/green-check.png) | ![Debian + ARM64](./media/support/green-check.png) | [June 2028](https://wiki.debian.org/LTS) |
| [Debian 11](https://www.debian.org/releases/bullseye/) | | ![Debian + ARM32v7](./media/support/green-check.png) | | [June 2026](https://wiki.debian.org/LTS) | | [Red Hat Enterprise Linux 9](https://access.redhat.com/articles/3078) | ![Red Hat Enterprise Linux 9 + AMD64](./media/support/green-check.png) | | | [May 2032](https://access.redhat.com/product-life-cycles?product=Red%20Hat%20Enterprise%20Linux,OpenShift%20Container%20Platform%204) | | [Red Hat Enterprise Linux 8](https://access.redhat.com/articles/3078) | ![Red Hat Enterprise Linux 8 + AMD64](./media/support/green-check.png) | | | [May 2029](https://access.redhat.com/product-life-cycles?product=Red%20Hat%20Enterprise%20Linux,OpenShift%20Container%20Platform%204) |
The systems listed in the following table are considered compatible with Azure I
| Operating System | AMD64 | ARM32v7 | ARM64 | End of OS provider standard support | | - | -- | - | -- | -- |
-| [Debian 12](https://www.debian.org/releases/bookworm/) | ![Debian 12 + AMD64](./media/support/green-check.png) | | ![Debian 12 + ARM64](./media/support/green-check.png) | [June 2028](https://wiki.debian.org/LTS) |
| [Debian 11](https://www.debian.org/releases/bullseye/) | ![Debian 11 + AMD64](./media/support/green-check.png) | | ![Debian 11 + ARM64](./media/support/green-check.png) | [June 2026](https://wiki.debian.org/LTS) | | [Mentor Embedded Linux Flex OS](https://www.mentor.com/embedded-software/linux/mel-flex-os/) | ![Mentor Embedded Linux Flex OS + AMD64](./media/support/green-check.png) | ![Mentor Embedded Linux Flex OS + ARM32v7](./media/support/green-check.png) | ![Mentor Embedded Linux Flex OS + ARM64](./media/support/green-check.png) | | | [Mentor Embedded Linux Omni OS](https://www.mentor.com/embedded-software/linux/mel-omni-os/) | ![Mentor Embedded Linux Omni OS + AMD64](./media/support/green-check.png) | | ![Mentor Embedded Linux Omni OS + ARM64](./media/support/green-check.png) | |
iot-operations Concept Schema Registry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/connect-to-cloud/concept-schema-registry.md
resource opcuaSchemaVersion 'Microsoft.DeviceRegistry/schemaRegistries/schemas/s
After you've defined the schema content and resources, you can deploy the Bicep template to create the schema in the schema registry. ```azurecli
-az stack group create --name <DEPLOYMENT_NAME> --resource-group <RESOURCE_GROUP> --template-file <FILE>.bicep
+az stack group create --name <DEPLOYMENT_NAME> --resource-group <RESOURCE_GROUP> --template-file <FILE>.bicep --dm None --aou deleteResources --yes
``` ## Next steps
iot-operations Howto Configure Adlsv2 Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/connect-to-cloud/howto-configure-adlsv2-endpoint.md
Then, assign a role to the managed identity that grants permission to write to t
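For example, a role that can write blobs (such as **Storage Blob Data Contributor**) can be assigned with the Azure CLI; the principal ID, role, and scope below are placeholders, and the role called out in the full article takes precedence.

```azurecli
az role assignment create \
  --assignee <MANAGED_IDENTITY_PRINCIPAL_ID> \
  --role "Storage Blob Data Contributor" \
  --scope /subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP>/providers/Microsoft.Storage/storageAccounts/<ACCOUNT>
```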
Finally, create the *DataflowEndpoint* resource and specify the managed identity authentication method. Replace the placeholder values like `<ENDPOINT_NAME>` with your own.
-# [Kubernetes](#tab/kubernetes)
-
-Create a Kubernetes manifest `.yaml` file with the following content.
-
-```yaml
-apiVersion: connectivity.iotoperations.azure.com/v1beta1
-kind: DataflowEndpoint
-metadata:
- name: <ENDPOINT_NAME>
- namespace: azure-iot-operations
-spec:
- endpointType: DataLakeStorage
- dataLakeStorageSettings:
- host: https://<ACCOUNT>.blob.core.windows.net
- authentication:
- method: SystemAssignedManagedIdentity
- systemAssignedManagedIdentitySettings: {}
-```
-
-Then apply the manifest file to the Kubernetes cluster.
-
-```bash
-kubectl apply -f <FILE>.yaml
-```
- # [Bicep](#tab/bicep) Create a Bicep `.bicep` file with the following content.
resource adlsGen2Endpoint 'Microsoft.IoTOperations/instances/dataflowEndpoints@2
Then, deploy via Azure CLI. ```azurecli
-az stack group create --name <DEPLOYMENT_NAME> --resource-group <RESOURCE_GROUP> --template-file <FILE>.bicep
+az stack group create --name <DEPLOYMENT_NAME> --resource-group <RESOURCE_GROUP> --template-file <FILE>.bicep --dm None --aou deleteResources --yes
``` --
-If you need to override the system-assigned managed identity audience, see the [System-assigned managed identity](#system-assigned-managed-identity) section.
-
-### Use access token authentication
-
-Follow the steps in the [access token](#access-token) section to get a SAS token for the storage account and store it in a Kubernetes secret.
-
-Then, create the *DataflowEndpoint* resource and specify the access token authentication method. Here, replace `<SAS_SECRET_NAME>` with name of the secret containing the SAS token as well as other placeholder values.
- # [Kubernetes](#tab/kubernetes) Create a Kubernetes manifest `.yaml` file with the following content.
spec:
dataLakeStorageSettings: host: https://<ACCOUNT>.blob.core.windows.net authentication:
- method: AccessToken
- accessTokenSettings:
- secretRef: <SAS_SECRET_NAME>
+ method: SystemAssignedManagedIdentity
+ systemAssignedManagedIdentitySettings: {}
``` Then apply the manifest file to the Kubernetes cluster.
Then apply the manifest file to the Kubernetes cluster.
kubectl apply -f <FILE>.yaml ``` ++
+If you need to override the system-assigned managed identity audience, see the [System-assigned managed identity](#system-assigned-managed-identity) section.
+
+### Use access token authentication
+
+Follow the steps in the [access token](#access-token) section to get a SAS token for the storage account and store it in a Kubernetes secret.
+
+Then, create the *DataflowEndpoint* resource and specify the access token authentication method. Here, replace `<SAS_SECRET_NAME>` with the name of the secret containing the SAS token, as well as other placeholder values.
+ # [Bicep](#tab/bicep) Create a Bicep `.bicep` file with the following content.
resource adlsGen2Endpoint 'Microsoft.IoTOperations/instances/dataflowEndpoints@2
Then, deploy via Azure CLI. ```azurecli
-az stack group create --name <DEPLOYMENT_NAME> --resource-group <RESOURCE_GROUP> --template-file <FILE>.bicep
+az stack group create --name <DEPLOYMENT_NAME> --resource-group <RESOURCE_GROUP> --template-file <FILE>.bicep --dm None --aou deleteResources --yes
+```
+
+# [Kubernetes](#tab/kubernetes)
+
+Create a Kubernetes manifest `.yaml` file with the following content.
+
+```yaml
+apiVersion: connectivity.iotoperations.azure.com/v1beta1
+kind: DataflowEndpoint
+metadata:
+ name: <ENDPOINT_NAME>
+ namespace: azure-iot-operations
+spec:
+ endpointType: DataLakeStorage
+ dataLakeStorageSettings:
+ host: https://<ACCOUNT>.blob.core.windows.net
+ authentication:
+ method: AccessToken
+ accessTokenSettings:
+ secretRef: <SAS_SECRET_NAME>
+```
+
+Then apply the manifest file to the Kubernetes cluster.
+
+```bash
+kubectl apply -f <FILE>.yaml
```
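For reference, the SAS token referenced by `<SAS_SECRET_NAME>` can be stored with `kubectl`. This is only a sketch: the secret key name (`accessToken` here) is an assumption, so follow the access token section of the full article for the exact key and permissions the endpoint expects.

```bash
# Store the SAS token (obtained in the access token section) in a Kubernetes secret.
# The key name "accessToken" is an assumption; verify it against the article.
kubectl create secret generic <SAS_SECRET_NAME> \
  -n azure-iot-operations \
  --from-literal=accessToken='<SAS_TOKEN>'
```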
Before creating the dataflow endpoint, assign a role to the managed identity tha
To use system-assigned managed identity, specify the managed identity authentication method in the *DataflowEndpoint* resource. In most cases, you don't need to specify other settings. Not specifying an audience creates a managed identity with the default audience scoped to your storage account.
-# [Kubernetes](#tab/kubernetes)
-
-```yaml
-dataLakeStorageSettings:
- authentication:
- method: SystemAssignedManagedIdentity
- systemAssignedManagedIdentitySettings: {}
-```
- # [Bicep](#tab/bicep) ```bicep
dataLakeStorageSettings: {
} ``` --
-If you need to override the system-assigned managed identity audience, you can specify the `audience` setting.
- # [Kubernetes](#tab/kubernetes) ```yaml dataLakeStorageSettings: authentication: method: SystemAssignedManagedIdentity
- systemAssignedManagedIdentitySettings:
- audience: https://<ACCOUNT>.blob.core.windows.net
+ systemAssignedManagedIdentitySettings: {}
``` ++
+If you need to override the system-assigned managed identity audience, you can specify the `audience` setting.
+ # [Bicep](#tab/bicep) ```bicep
dataLakeStorageSettings: {
} ```
+# [Kubernetes](#tab/kubernetes)
+
+```yaml
+dataLakeStorageSettings:
+ authentication:
+ method: SystemAssignedManagedIdentity
+ systemAssignedManagedIdentitySettings:
+ audience: https://<ACCOUNT>.blob.core.windows.net
+```
+ #### Access token
You can also use the IoT Operations portal to create and manage the secret. To l
Finally, create the *DataflowEndpoint* resource with the secret reference.
-# [Kubernetes](#tab/kubernetes)
-
-```yaml
-dataLakeStorageSettings:
- authentication:
- method: AccessToken
- accessTokenSettings:
- secretRef: <SAS_SECRET_NAME>
-```
- # [Bicep](#tab/bicep) ```bicep
dataLakeStorageSettings: {
} ``` --
-#### User-assigned managed identity
-
-To use a user-assigned managed identity, specify the `UserAssignedManagedIdentity` authentication method and provide the `clientId` and `tenantId` of the managed identity.
- # [Kubernetes](#tab/kubernetes) ```yaml dataLakeStorageSettings: authentication:
- method: UserAssignedManagedIdentity
- userAssignedManagedIdentitySettings:
- clientId: <ID>
- tenantId: <ID>
+ method: AccessToken
+ accessTokenSettings:
+ secretRef: <SAS_SECRET_NAME>
``` ++
+#### User-assigned managed identity
+
+To use a user-assigned managed identity, specify the `UserAssignedManagedIdentity` authentication method and provide the `clientId` and `tenantId` of the managed identity.
+ # [Bicep](#tab/bicep) ```bicep
dataLakeStorageSettings: {
} ```
+# [Kubernetes](#tab/kubernetes)
+
+```yaml
+dataLakeStorageSettings:
+ authentication:
+ method: UserAssignedManagedIdentity
+ userAssignedManagedIdentitySettings:
+ clientId: <ID>
+ tenantId: <ID>
+```
+ ## Advanced settings
Use the `batching` settings to configure the maximum number of messages and the
For example, to configure the maximum number of messages to 1000 and the maximum latency to 100 seconds, use the following settings:
-# [Kubernetes](#tab/kubernetes)
-
-```yaml
-dataLakeStorageSettings:
- batching:
- latencySeconds: 100
- maxMessages: 1000
-```
- # [Bicep](#tab/bicep) ```bicep
dataLakeStorageSettings: {
} ``` -
+# [Kubernetes](#tab/kubernetes)
+
+```yaml
+dataLakeStorageSettings:
+ batching:
+ latencySeconds: 100
+ maxMessages: 1000
+```
+++
+## Next steps
+
+- [Create a dataflow](howto-create-dataflow.md)
iot-operations Howto Configure Adx Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/connect-to-cloud/howto-configure-adx-endpoint.md
To send data to Azure Data Explorer in Azure IoT Operations Preview, you can con
Create the dataflow endpoint resource with your cluster and database information. We suggest using the managed identity of the Azure Arc-enabled Kubernetes cluster. This approach is secure and eliminates the need for secret management. Replace the placeholder values like `<ENDPOINT_NAME>` with your own.
-# [Kubernetes](#tab/kubernetes)
-
-Create a Kubernetes manifest `.yaml` file with the following content.
-
-```yaml
-apiVersion: connectivity.iotoperations.azure.com/v1beta1
-kind: DataflowEndpoint
-metadata:
- name: <ENDPOINT_NAME>
- namespace: azure-iot-operations
-spec:
- endpointType: DataExplorer
- dataExplorerSettings:
- host: 'https://<CLUSTER>.<region>.kusto.windows.net'
- database: <DATABASE_NAME>
- authentication:
- method: SystemAssignedManagedIdentity
- systemAssignedManagedIdentitySettings: {}
-```
-
-Then apply the manifest file to the Kubernetes cluster.
-
-```bash
-kubectl apply -f <FILE>.yaml
-```
- # [Bicep](#tab/bicep) Create a Bicep `.bicep` file with the following content.
resource adxEndpoint 'Microsoft.IoTOperations/instances/dataflowEndpoints@2024-0
Then, deploy via Azure CLI. ```azurecli
-az stack group create --name <DEPLOYMENT_NAME> --resource-group <RESOURCE_GROUP> --template-file <FILE>.bicep
+az stack group create --name <DEPLOYMENT_NAME> --resource-group <RESOURCE_GROUP> --template-file <FILE>.bicep --dm None --aou deleteResources --yes
+```
+
+# [Kubernetes](#tab/kubernetes)
+
+Create a Kubernetes manifest `.yaml` file with the following content.
+
+```yaml
+apiVersion: connectivity.iotoperations.azure.com/v1beta1
+kind: DataflowEndpoint
+metadata:
+ name: <ENDPOINT_NAME>
+ namespace: azure-iot-operations
+spec:
+ endpointType: DataExplorer
+ dataExplorerSettings:
+ host: 'https://<CLUSTER>.<region>.kusto.windows.net'
+ database: <DATABASE_NAME>
+ authentication:
+ method: SystemAssignedManagedIdentity
+ systemAssignedManagedIdentitySettings: {}
+```
+
+Then apply the manifest file to the Kubernetes cluster.
+
+```bash
+kubectl apply -f <FILE>.yaml
```
Before you create the dataflow endpoint, assign a role to the managed identity t
In the *DataflowEndpoint* resource, specify the managed identity authentication method. In most cases, you don't need to specify other settings. This configuration creates a managed identity with the default audience `https://api.kusto.windows.net`.
-# [Kubernetes](#tab/kubernetes)
-
-```yaml
-dataExplorerSettings:
- authentication:
- method: SystemAssignedManagedIdentity
- systemAssignedManagedIdentitySettings: {}
-```
- # [Bicep](#tab/bicep) ```bicep
dataExplorerSettings: {
} ``` --
-If you need to override the system-assigned managed identity audience, you can specify the `audience` setting.
- # [Kubernetes](#tab/kubernetes) ```yaml dataExplorerSettings: authentication: method: SystemAssignedManagedIdentity
- systemAssignedManagedIdentitySettings:
- audience: https://<AUDIENCE_URL>
+ systemAssignedManagedIdentitySettings: {}
``` ++
+If you need to override the system-assigned managed identity audience, you can specify the `audience` setting.
+ # [Bicep](#tab/bicep) ```bicep
dataExplorerSettings: {
} ``` --
-#### User-assigned managed identity
-
-To use a user-assigned managed identity, specify the `UserAssignedManagedIdentity` authentication method and provide the `clientId` and `tenantId` of the managed identity.
- # [Kubernetes](#tab/kubernetes) ```yaml dataExplorerSettings: authentication:
- method: UserAssignedManagedIdentity
- userAssignedManagedIdentitySettings:
- clientId: <ID>
- tenantId: <ID>
+ method: SystemAssignedManagedIdentity
+ systemAssignedManagedIdentitySettings:
+ audience: https://<AUDIENCE_URL>
``` ++
+#### User-assigned managed identity
+
+To use a user-assigned managed identity, specify the `UserAssignedManagedIdentity` authentication method and provide the `clientId` and `tenantId` of the managed identity.
+ # [Bicep](#tab/bicep) ```bicep
dataExplorerSettings: {
} ```
+# [Kubernetes](#tab/kubernetes)
+
+```yaml
+dataExplorerSettings:
+ authentication:
+ method: UserAssignedManagedIdentity
+ userAssignedManagedIdentitySettings:
+ clientId: <ID>
+ tenantId: <ID>
+```
+ ## Advanced settings
Use the `batching` settings to configure the maximum number of messages and the
For example, to configure the maximum number of messages to 1000 and the maximum latency to 100 seconds, use the following settings:
-# [Kubernetes](#tab/kubernetes)
-
-```yaml
-dataExplorerSettings:
- batching:
- latencySeconds: 100
- maxMessages: 1000
-```
- # [Bicep](#tab/bicep) ```bicep
dataExplorerSettings: {
} ``` -
+# [Kubernetes](#tab/kubernetes)
+
+```yaml
+dataExplorerSettings:
+ batching:
+ latencySeconds: 100
+ maxMessages: 1000
+```
+++
+## Next steps
+
+- [Create a dataflow](howto-create-dataflow.md)
iot-operations Howto Configure Dataflow Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/connect-to-cloud/howto-configure-dataflow-endpoint.md
Last updated 09/17/2024
To get started with dataflows, first create dataflow endpoints. A dataflow endpoint is the connection point for the dataflow. You can use an endpoint as a source or destination for the dataflow. Some endpoint types can be used as both sources and destinations, while others are for destinations only. A dataflow needs at least one source endpoint and one destination endpoint.
-## Get started
- To get started, use the following table to choose the endpoint type to configure: | Endpoint type | Description | Can be used as a source | Can be used as a destination |
spec:
Similar to the MQTT example, you can create multiple dataflows that use the same Kafka endpoint for different topics, or the same Data Lake endpoint for different tables.+
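For instance, two dataflows can reference the same Kafka dataflow endpoint and differ only in the topic they write to. The fragment below sketches the destination operation of each dataflow; the endpoint and topic names are illustrative, and the full resource definition follows the dataflow how-to.

```yaml
# Dataflow 1: writes to one topic on the shared Kafka endpoint.
- operationType: Destination
  destinationSettings:
    endpointRef: kafka                 # shared Kafka dataflow endpoint
    dataDestination: factory-telemetry

# Dataflow 2: reuses the same endpoint but targets a different topic.
- operationType: Destination
  destinationSettings:
    endpointRef: kafka
    dataDestination: factory-alerts
```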
+## Next steps
+
+- Create a dataflow endpoint:
+ - [MQTT or Event Grid](howto-configure-mqtt-endpoint.md)
+ - [Kafka or Event Hubs](howto-configure-kafka-endpoint.md)
+ - [Data Lake](howto-configure-adlsv2-endpoint.md)
+ - [Microsoft Fabric OneLake](howto-configure-fabric-endpoint.md)
+ - [Local storage](howto-configure-local-storage-endpoint.md)
iot-operations Howto Configure Dataflow Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/connect-to-cloud/howto-configure-dataflow-profile.md
spec:
diagnostics: # ... ```+
+## Next steps
+
+- [Create a dataflow](howto-create-dataflow.md)
iot-operations Howto Configure Fabric Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/connect-to-cloud/howto-configure-fabric-endpoint.md
Then, in the Microsoft Fabric workspace you created, select **Manage access** >
Finally, create the *DataflowEndpoint* resource and specify the managed identity authentication method. Replace the placeholder values like `<ENDPOINT_NAME>` with your own.
-# [Kubernetes](#tab/kubernetes)
-
-Create a Kubernetes manifest `.yaml` file with the following content.
-
-```yaml
-apiVersion: connectivity.iotoperations.azure.com/v1beta1
-kind: DataflowEndpoint
-metadata:
- name: <ENDPOINT_NAME>
- namespace: azure-iotoperations
-spec:
- endpointType: FabricOneLake
- fabricOneLakeSettings:
- # The default Fabric OneLake host URL in most cases
- host: https://onelake.dfs.fabric.microsoft.com
- oneLakePathType: Tables
- authentication:
- method: SystemAssignedManagedIdentity
- systemAssignedManagedIdentitySettings: {}
- names:
- workspaceName: <WORKSPACE_NAME>
- lakehouseName: <LAKEHOUSE_NAME>
-```
-
-Then apply the manifest file to the Kubernetes cluster.
-
-```bash
-kubectl apply -f <FILE>.yaml
-```
- # [Bicep](#tab/bicep) Create a Bicep `.bicep` file with the following content.
resource oneLakeEndpoint 'Microsoft.IoTOperations/instances/dataflowEndpoints@20
Then, deploy via Azure CLI. ```azurecli
-az stack group create --name <DEPLOYMENT_NAME> --resource-group <RESOURCE_GROUP> --template-file <FILE>.bicep
+az stack group create --name <DEPLOYMENT_NAME> --resource-group <RESOURCE_GROUP> --template-file <FILE>.bicep --dm None --aou deleteResources --yes
+```
+
+# [Kubernetes](#tab/kubernetes)
+
+Create a Kubernetes manifest `.yaml` file with the following content.
+
+```yaml
+apiVersion: connectivity.iotoperations.azure.com/v1beta1
+kind: DataflowEndpoint
+metadata:
+ name: <ENDPOINT_NAME>
+ namespace: azure-iotoperations
+spec:
+ endpointType: FabricOneLake
+ fabricOneLakeSettings:
+ # The default Fabric OneLake host URL in most cases
+ host: https://onelake.dfs.fabric.microsoft.com
+ oneLakePathType: Tables
+ authentication:
+ method: SystemAssignedManagedIdentity
+ systemAssignedManagedIdentitySettings: {}
+ names:
+ workspaceName: <WORKSPACE_NAME>
+ lakehouseName: <LAKEHOUSE_NAME>
+```
+
+Then apply the manifest file to the Kubernetes cluster.
+
+```bash
+kubectl apply -f <FILE>.yaml
```
Using the system-assigned managed identity is the recommended authentication met
In the *DataflowEndpoint* resource, specify the managed identity authentication method. In most cases, you don't need to specify other settings. This configuration creates a managed identity with the default audience.
-# [Kubernetes](#tab/kubernetes)
-
-```yaml
-fabricOneLakeSettings:
- authentication:
- method: SystemAssignedManagedIdentity
- systemAssignedManagedIdentitySettings:
- {}
-```
- # [Bicep](#tab/bicep) ```bicep
fabricOneLakeSettings: {
} ``` --
-If you need to override the system-assigned managed identity audience, you can specify the `audience` setting.
- # [Kubernetes](#tab/kubernetes) ```yaml
fabricOneLakeSettings:
authentication: method: SystemAssignedManagedIdentity systemAssignedManagedIdentitySettings:
- audience: https://<ACCOUNT>.onelake.dfs.fabric.microsoft.com
+ {}
``` ++
+If you need to override the system-assigned managed identity audience, you can specify the `audience` setting.
+ # [Bicep](#tab/bicep) ```bicep
fabricOneLakeSettings: {
} ``` --
-#### User-assigned managed identity
- # [Kubernetes](#tab/kubernetes)
-To use a user-assigned managed identity, specify the `UserAssignedManagedIdentity` authentication method and provide the `clientId` and `tenantId` of the managed identity.
- ```yaml fabricOneLakeSettings: authentication:
- method: UserAssignedManagedIdentity
- userAssignedManagedIdentitySettings:
- clientId: <ID>
- tenantId: <ID>
+ method: SystemAssignedManagedIdentity
+ systemAssignedManagedIdentitySettings:
+ audience: https://<ACCOUNT>.onelake.dfs.fabric.microsoft.com
``` ++
+#### User-assigned managed identity
+ # [Bicep](#tab/bicep) ```bicep
fabricOneLakeSettings: {
} ```
+# [Kubernetes](#tab/kubernetes)
+
+To use a user-assigned managed identity, specify the `UserAssignedManagedIdentity` authentication method and provide the `clientId` and `tenantId` of the managed identity.
+
+```yaml
+fabricOneLakeSettings:
+ authentication:
+ method: UserAssignedManagedIdentity
+ userAssignedManagedIdentitySettings:
+ clientId: <ID>
+ tenantId: <ID>
+```
+ ## Advanced settings
You can set advanced settings for the Fabric OneLake endpoint, such as the batch
The `oneLakePathType` setting determines the type of path to use in the OneLake path. The default value is `Tables`, which is the recommended path type for the most common use cases. The `Tables` path type is a table in the OneLake lakehouse that is used to store the data. It can also be set as `Files`, which is a file in the OneLake lakehouse that is used to store the data. The `Files` path type is useful when you want to store the data in a file format that is not supported by the `Tables` path type.
-# [Kubernetes](#tab/kubernetes)
-
-```yaml
-fabricOneLakeSettings:
- oneLakePathType: Tables # Or Files
-```
- # [Bicep](#tab/bicep) ```bicep
fabricOneLakeSettings: {
} ```
+# [Kubernetes](#tab/kubernetes)
+
+```yaml
+fabricOneLakeSettings:
+ oneLakePathType: Tables # Or Files
+```
+ ### Batching
Use the `batching` settings to configure the maximum number of messages and the
For example, to configure the maximum number of messages to 1000 and the maximum latency to 100 seconds, use the following settings:
-# [Kubernetes](#tab/kubernetes)
-
-```yaml
-fabricOneLakeSettings:
- batching:
- latencySeconds: 100
- maxMessages: 1000
-```
- # [Bicep](#tab/bicep) ```bicep
fabricOneLakeSettings: {
} ``` -
+# [Kubernetes](#tab/kubernetes)
+
+```yaml
+fabricOneLakeSettings:
+ batching:
+ latencySeconds: 100
+ maxMessages: 1000
+```
+++
+## Next steps
+
+- [Create a dataflow](howto-create-dataflow.md)
iot-operations Howto Configure Kafka Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/connect-to-cloud/howto-configure-kafka-endpoint.md
Finally, create the *DataflowEndpoint* resource. Use your own values to replace
1. Select **Apply** to provision the endpoint.
-# [Kubernetes](#tab/kubernetes)
-
-Create a Kubernetes manifest `.yaml` file with the following content.
-
-```yaml
-apiVersion: connectivity.iotoperations.azure.com/v1beta1
-kind: DataflowEndpoint
-metadata:
- name: <ENDPOINT_NAME>
- namespace: azure-iot-operations
-spec:
- endpointType: Kafka
- kafkaSettings:
- host: <NAMESPACE>.servicebus.windows.net:9093
- authentication:
- method: SystemAssignedManagedIdentity
- systemAssignedManagedIdentitySettings: {}
- tls:
- mode: Enabled
-```
-
-Then apply the manifest file to the Kubernetes cluster.
-
-```bash
-kubectl apply -f <FILE>.yaml
-```
- # [Bicep](#tab/bicep) Create a Bicep `.bicep` file with the following content.
resource kafkaEndpoint 'Microsoft.IoTOperations/instances/dataflowEndpoints@2024
Then, deploy via Azure CLI. ```azurecli
-az stack group create --name <DEPLOYMENT_NAME> --resource-group <RESOURCE_GROUP> --template-file <FILE>.bicep
+az stack group create --name <DEPLOYMENT_NAME> --resource-group <RESOURCE_GROUP> --template-file <FILE>.bicep --dm None --aou deleteResources --yes
+```
+
+# [Kubernetes](#tab/kubernetes)
+
+Create a Kubernetes manifest `.yaml` file with the following content.
+
+```yaml
+apiVersion: connectivity.iotoperations.azure.com/v1beta1
+kind: DataflowEndpoint
+metadata:
+ name: <ENDPOINT_NAME>
+ namespace: azure-iot-operations
+spec:
+ endpointType: Kafka
+ kafkaSettings:
+ host: <NAMESPACE>.servicebus.windows.net:9093
+ authentication:
+ method: SystemAssignedManagedIdentity
+ systemAssignedManagedIdentitySettings: {}
+ tls:
+ mode: Enabled
+```
+
+Then apply the manifest file to the Kubernetes cluster.
+
+```bash
+kubectl apply -f <FILE>.yaml
```
Enter the following settings for the endpoint:
| Username reference or token secret | The reference to the username or token secret used for SASL authentication. | | Password reference or token secret | The reference to the password or token secret used for SASL authentication. |
-# [Kubernetes](#tab/kubernetes)
-
-```yaml
-kafkaSettings:
- authentication:
- method: Sasl
- saslSettings:
- saslType: Plain
- secretRef: <SECRET_NAME>
- tls:
- mode: Enabled
-```
- # [Bicep](#tab/bicep) ```bicep
kafkaSettings: {
} ```
+# [Kubernetes](#tab/kubernetes)
+
+```yaml
+kafkaSettings:
+ authentication:
+ method: Sasl
+ saslSettings:
+ saslType: Plain
+ secretRef: <SECRET_NAME>
+ tls:
+ mode: Enabled
+```
Here, the secret reference points to a secret that contains the connection string. The secret must be in the same namespace as the Kafka dataflow resource. The secret must have both the username and password as key-value pairs. For example:
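For Azure Event Hubs, the SASL username is typically the literal `$ConnectionString` and the password is the namespace connection string; the secret name and values below are placeholders.

```bash
kubectl create secret generic <SECRET_NAME> \
  -n azure-iot-operations \
  --from-literal=username='$ConnectionString' \
  --from-literal=password='Endpoint=sb://<NAMESPACE>.servicebus.windows.net/;SharedAccessKeyName=<KEY_NAME>;SharedAccessKey=<KEY>'
```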
To configure a dataflow endpoint for non-Event-Hub Kafka brokers, set the host,
> [!NOTE] > Currently, the operations experience doesn't support using a Kafka dataflow endpoint as a source. You can create a dataflow with a source Kafka dataflow endpoint using Kubernetes or Bicep.
-# [Kubernetes](#tab/kubernetes)
-
-```yaml
-apiVersion: connectivity.iotoperations.azure.com/v1beta1
-kind: DataflowEndpoint
-metadata:
- name: kafka
- namespace: azure-iot-operations
-spec:
- endpointType: Kafka
- kafkaSettings:
- host: <KAFKA-HOST>:<PORT>
- authentication:
- method: Sasl
- saslSettings:
- saslType: ScramSha256
- secretRef: <SECRET_NAME>
- tls:
- mode: Enabled
- consumerGroupId: mqConnector
-```
- # [Bicep](#tab/bicep) ```bicep
resource kafkaEndpoint 'Microsoft.IoTOperations/instances/dataflowEndpoints@2024
} ```
+# [Kubernetes](#tab/kubernetes)
+
+```yaml
+apiVersion: connectivity.iotoperations.azure.com/v1beta1
+kind: DataflowEndpoint
+metadata:
+ name: kafka
+ namespace: azure-iot-operations
+spec:
+ endpointType: Kafka
+ kafkaSettings:
+ host: <KAFKA-HOST>:<PORT>
+ authentication:
+ method: Sasl
+ saslSettings:
+ saslType: ScramSha256
+ secretRef: <SECRET_NAME>
+ tls:
+ mode: Enabled
+ consumerGroupId: mqConnector
+```
+ To customize the endpoint settings, see the following sections for more information.
Enter the following settings for the endpoint:
| Username reference or token secret | The reference to the username or token secret used for SASL authentication. | | Password reference or token secret | The reference to the password or token secret used for SASL authentication. |
-# [Kubernetes](#tab/kubernetes)
-
-```yaml
-kafkaSettings:
- authentication:
- method: Sasl
- saslSettings:
- saslType: Plain # Or ScramSha256, ScramSha512
- secretRef: <SECRET_NAME>
-```
- # [Bicep](#tab/bicep) ```bicep
kafkaSettings: {
} ```
+# [Kubernetes](#tab/kubernetes)
+
+```yaml
+kafkaSettings:
+ authentication:
+ method: Sasl
+ saslSettings:
+ saslType: Plain # Or ScramSha256, ScramSha512
+ secretRef: <SECRET_NAME>
+```
+ The supported SASL types are:
Enter the following settings for the endpoint:
| X509 intermediate certificates | The intermediate certificates for the X.509 client certificate chain. | | X509 client key | The private key corresponding to the X.509 client certificate. |
-# [Kubernetes](#tab/kubernetes)
-
-```yaml
-kafkaSettings:
- authentication:
- method: X509Certificate
- x509CertificateSettings:
- secretRef: <SECRET_NAME>
-```
- # [Bicep](#tab/bicep)
kafkaSettings: {
} ```
+# [Kubernetes](#tab/kubernetes)
+
+```yaml
+kafkaSettings:
+ authentication:
+ method: X509Certificate
+ x509CertificateSettings:
+ secretRef: <SECRET_NAME>
+```
The secret must be in the same namespace as the Kafka dataflow resource. Use a Kubernetes TLS secret containing the public certificate and private key. For example:
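A TLS secret of this shape can be created with `kubectl`; the secret name and file paths are placeholders.

```bash
kubectl create secret tls <SECRET_NAME> \
  -n azure-iot-operations \
  --cert=client.pem \
  --key=client.key
```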
Then, specify the managed identity authentication method in the Kafka settings.
In the operations experience dataflow endpoint settings page, select the **Basic** tab then choose **Authentication method** > **System assigned managed identity**.
-# [Kubernetes](#tab/kubernetes)
-
-```yaml
-kafkaSettings:
- authentication:
- method: SystemAssignedManagedIdentity
- systemAssignedManagedIdentitySettings:
- {}
-```
- # [Bicep](#tab/bicep) ```bicep
resource kafkaEndpoint 'Microsoft.IoTOperations/instances/dataflowEndpoints@2024
} ``` --
-This configuration creates a managed identity with the default audience, which is the same as the Event Hubs namespace host value in the form of `https://<NAMESPACE>.servicebus.windows.net`. However, if you need to override the default audience, you can set the `audience` field to the desired value.
-
-# [Portal](#tab/portal)
-
-Not supported in the operations experience.
- # [Kubernetes](#tab/kubernetes) ```yaml
kafkaSettings:
authentication: method: SystemAssignedManagedIdentity systemAssignedManagedIdentitySettings:
- audience: <YOUR_AUDIENCE_OVERRIDE_VALUE>
+ {}
``` ++
+This configuration creates a managed identity with the default audience, which is the same as the Event Hubs namespace host value in the form of `https://<NAMESPACE>.servicebus.windows.net`. However, if you need to override the default audience, you can set the `audience` field to the desired value.
+
+# [Portal](#tab/portal)
+
+Not supported in the operations experience.
+ # [Bicep](#tab/bicep) ```bicep
kafkaSettings: {
} ```
+# [Kubernetes](#tab/kubernetes)
+
+```yaml
+kafkaSettings:
+ authentication:
+ method: SystemAssignedManagedIdentity
+ systemAssignedManagedIdentitySettings:
+ audience: <YOUR_AUDIENCE_OVERRIDE_VALUE>
+```
+ #### User-assigned managed identity
In the operations experience dataflow endpoint settings page, select the **Basic
Enter the user-assigned managed identity client ID, tenant ID, and scope in the appropriate fields.
-# [Kubernetes](#tab/kubernetes)
-
-```yaml
-kafkaSettings:
- authentication:
- method: UserAssignedManagedIdentity
- userAssignedManagedIdentitySettings:
- clientId: <CLIENT_ID>
- tenantId: <TENANT_ID>
- scope: <SCOPE>
-```
- # [Bicep](#tab/bicep) ```bicep
kafkaSettings: {
} ```
+# [Kubernetes](#tab/kubernetes)
+
+```yaml
+kafkaSettings:
+ authentication:
+ method: UserAssignedManagedIdentity
+ userAssignedManagedIdentitySettings:
+ clientId: <CLIENT_ID>
+ tenantId: <TENANT_ID>
+ scope: <SCOPE>
+```
+ #### Anonymous
To use anonymous authentication, update the authentication section of the Kafka
Not yet supported in the operations experience. See [known issues](../troubleshoot/known-issues.md).
+# [Bicep](#tab/bicep)
+
+Not yet supported with Bicep. See [known issues](../troubleshoot/known-issues.md).
+ # [Kubernetes](#tab/kubernetes) ```yaml
kafkaSettings:
{} ```
-# [Bicep](#tab/bicep)
-
-Not yet supported with Bicep. See [known issues](../troubleshoot/known-issues.md).
- ## Advanced settings
In the operations experience, select the **Advanced** tab for the dataflow endpo
:::image type="content" source="media/howto-configure-kafka-endpoint/kafka-advanced.png" alt-text="Screenshot using operations experience to set Kafka dataflow endpoint advanced settings.":::
-# [Kubernetes](#tab/kubernetes)
-
-Under `kafkaSettings`, you can configure additional settings for the Kafka endpoint.
-
-```yaml
-kafkaSettings:
- consumerGroupId: <ID>
- compression: Gzip
- copyMqttProperties: true
- kafkaAcknowledgement: All
- partitionHandlingStrategy: Default
- tls:
- mode: Enabled
- trustedCaCertificateConfigMapRef: <YOUR_CA_CERTIFICATE>
- batching:
- enabled: true
- latencyMs: 1000
- maxMessages: 100
- maxBytes: 1024
-```
- # [Bicep](#tab/bicep) Under `kafkaSettings`, you can configure additional settings for the Kafka endpoint.
kafkaSettings: {
} ```
+# [Kubernetes](#tab/kubernetes)
+
+Under `kafkaSettings`, you can configure additional settings for the Kafka endpoint.
+
+```yaml
+kafkaSettings:
+ consumerGroupId: <ID>
+ compression: Gzip
+ copyMqttProperties: true
+ kafkaAcknowledgement: All
+ partitionHandlingStrategy: Default
+ tls:
+ mode: Enabled
+ trustedCaCertificateConfigMapRef: <YOUR_CA_CERTIFICATE>
+ batching:
+ enabled: true
+ latencyMs: 1000
+ maxMessages: 100
+ maxBytes: 1024
+```
+ ### TLS settings
To enable or disable TLS for the Kafka endpoint, update the `mode` setting in th
In the operations experience dataflow endpoint settings page, select the **Advanced** tab then use the checkbox next to **TLS mode enabled**.
-# [Kubernetes](#tab/kubernetes)
-
-```yaml
-kafkaSettings:
- tls:
- mode: Enabled # Or Disabled
-```
- # [Bicep](#tab/bicep) ```bicep
kafkaSettings: {
} ```
+# [Kubernetes](#tab/kubernetes)
+
+```yaml
+kafkaSettings:
+ tls:
+ mode: Enabled # Or Disabled
+```
+ The TLS mode can be set to `Enabled` or `Disabled`. If the mode is set to `Enabled`, the dataflow uses a secure connection to the Kafka broker. If the mode is set to `Disabled`, the dataflow uses an insecure connection to the Kafka broker.
Configure the trusted CA certificate for the Kafka endpoint to establish a secur
In the operations experience dataflow endpoint settings page, select the **Advanced** tab then use the **Trusted CA certificate config map** field to specify the ConfigMap containing the trusted CA certificate.
-# [Kubernetes](#tab/kubernetes)
-
-```yaml
-kafkaSettings:
- tls:
- trustedCaCertificateConfigMapRef: <YOUR_CA_CERTIFICATE>
-```
- # [Bicep](#tab/bicep) ```bicep
kafkaSettings: {
} ```
+# [Kubernetes](#tab/kubernetes)
+
+```yaml
+kafkaSettings:
+ tls:
+ trustedCaCertificateConfigMapRef: <YOUR_CA_CERTIFICATE>
+```
+ This ConfigMap should contain the CA certificate in PEM format. The ConfigMap must be in the same namespace as the Kafka dataflow resource. For example:
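A ConfigMap of this shape can be created from a PEM file with `kubectl`; the ConfigMap name, file path, and key name are placeholders.

```bash
kubectl create configmap <YOUR_CA_CERTIFICATE> \
  -n azure-iot-operations \
  --from-file=ca.crt=ca.pem
```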
The consumer group ID is used to identify the consumer group that the dataflow u
In the operations experience dataflow endpoint settings page, select the **Advanced** tab then use the **Consumer group ID** field to specify the consumer group ID.
-# [Kubernetes](#tab/kubernetes)
-
-```yaml
-spec:
- kafkaSettings:
- consumerGroupId: <ID>
-```
- # [Bicep](#tab/bicep) ```bicep
kafkaSettings: {
} ```
+# [Kubernetes](#tab/kubernetes)
+
+```yaml
+spec:
+ kafkaSettings:
+ consumerGroupId: <ID>
+```
+ <!-- TODO: check for accuracy -->
To configure compression:
In the operations experience dataflow endpoint settings page, select the **Advanced** tab then use the **Compression** field to specify the compression type.
-# [Kubernetes](#tab/kubernetes)
-
-```yaml
-kafkaSettings:
- compression: Gzip # Or Snappy, Lz4
-```
- # [Bicep](#tab/bicep) ```bicep
kafkaSettings: {
} ```
+# [Kubernetes](#tab/kubernetes)
+
+```yaml
+kafkaSettings:
+ compression: Gzip # Or Snappy, Lz4
+```
+ This setting takes effect only if the endpoint is used as a destination where the dataflow is a producer.
To configure batching:
In the operations experience dataflow endpoint settings page, select the **Advanced** tab then use the **Batching enabled** field to enable batching. Use the **Batching latency**, **Maximum bytes**, and **Message count** fields to specify the batching settings.
-# [Kubernetes](#tab/kubernetes)
+# [Bicep](#tab/bicep)
-```yaml
-kafkaSettings:
- batching:
+```bicep
+kafkaSettings: {
+ batching: {
enabled: true latencyMs: 1000 maxMessages: 100 maxBytes: 1024
+ }
+}
```
-# [Bicep](#tab/bicep)
+# [Kubernetes](#tab/kubernetes)
-```bicep
-kafkaSettings: {
- batching: {
+```yaml
+kafkaSettings:
+ batching:
enabled: true latencyMs: 1000 maxMessages: 100 maxBytes: 1024
- }
-}
```
To configure the partition handling strategy:
In the operations experience dataflow endpoint settings page, select the **Advanced** tab then use the **Partition handling strategy** field to specify the partition handling strategy. Use the **Partition key property** field to specify the property used for partitioning if the strategy is set to `Property`.
-# [Kubernetes](#tab/kubernetes)
-
-```yaml
-kafkaSettings:
- partitionHandlingStrategy: Default # Or Static, Topic, Property
- partitionKeyProperty: <PROPERTY_NAME>
-```
- # [Bicep](#tab/bicep) ```bicep
kafkaSettings: {
} ```
+# [Kubernetes](#tab/kubernetes)
+
+```yaml
+kafkaSettings:
+ partitionHandlingStrategy: Default # Or Static, Topic, Property
+ partitionKeyProperty: <PROPERTY_NAME>
+```
+ ### Kafka acknowledgments
To configure the Kafka acknowledgments:
In the operations experience dataflow endpoint settings page, select the **Advanced** tab then use the **Kafka acknowledgement** field to specify the Kafka acknowledgement level.
-# [Kubernetes](#tab/kubernetes)
-
-```yaml
-kafkaSettings:
- kafkaAcknowledgement: All # Or None, One, Zero
-```
- # [Bicep](#tab/bicep) ```bicep
kafkaSettings: {
} ```
+# [Kubernetes](#tab/kubernetes)
+
+```yaml
+kafkaSettings:
+ kafkaAcknowledgement: All # Or None, One, Zero
+```
+ ### Copy MQTT properties
By default, the copy MQTT properties setting is enabled. These user properties i
In the operations experience dataflow endpoint settings page, select the **Advanced** tab then use the checkbox next to the **Copy MQTT properties** field to enable or disable copying MQTT properties.
-# [Kubernetes](#tab/kubernetes)
-
-```yaml
-kafkaSettings:
- copyMqttProperties: Enabled # Or Disabled
-```
- # [Bicep](#tab/bicep) ```bicep
kafkaSettings: {
} ```
+# [Kubernetes](#tab/kubernetes)
+
+```yaml
+kafkaSettings:
+ copyMqttProperties: Enabled # Or Disabled
+```
+ The following sections describe how MQTT properties are translated to Kafka user headers and vice versa when the setting is enabled.
The `CloudEventAttributes` options are `Propagate` or `CreateOrRemap`.
In the operations experience dataflow endpoint settings page, select the **Advanced** tab then use the **Cloud event attributes** field to specify the CloudEvents setting.
-# [Kubernetes](#tab/kubernetes)
-
-```yaml
-kafkaSettings:
- cloudEventAttributes: Propagate # Or CreateOrRemap
-```
- # [Bicep](#tab/bicep) ```bicep
kafkaSettings: {
} ```
+# [Kubernetes](#tab/kubernetes)
+
+```yaml
+kafkaSettings:
+ cloudEventAttributes: Propagate # Or CreateOrRemap
+```
+ The following sections describe how CloudEvent properties are propagated or created and remapped.
CloudEvent properties are passed through for messages that contain the required
| `time` | No | `ce-time` | Generated as RFC 3339 in the target client | | `datacontenttype` | No | `ce-datacontenttype` | Changed to the output data content type after the optional transform stage | | `dataschema` | No | `ce-dataschema` | Schema defined in the schema registry |+
+## Next steps
+
+- [Create a dataflow](howto-create-dataflow.md)
iot-operations Howto Configure Local Storage Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/connect-to-cloud/howto-configure-local-storage-endpoint.md
To send data to local storage in Azure IoT Operations Preview, you can configure
Use the local storage option to send data to a locally available persistent volume, through which you can upload data via Azure Container Storage enabled by Azure Arc edge volumes.
-# [Kubernetes](#tab/kubernetes)
-
-Create a Kubernetes manifest `.yaml` file with the following content.
-
-```yaml
-apiVersion: connectivity.iotoperations.azure.com/v1beta1
-kind: DataflowEndpoint
-metadata:
- name: <ENDPOINT_NAME>
- namespace: azure-iot-operations
-spec:
- endpointType: localStorage
- localStorageSettings:
- persistentVolumeClaimRef: <PVC_NAME>
-```
-
-Then apply the manifest file to the Kubernetes cluster.
-
-```bash
-kubectl apply -f <FILE>.yaml
-```
- # [Bicep](#tab/bicep) Create a Bicep `.bicep` file with the following content.
resource localStorageDataflowEndpoint 'Microsoft.IoTOperations/instances/dataflo
Then, deploy via Azure CLI. ```azurecli
-az stack group create --name <DEPLOYMENT_NAME> --resource-group <RESOURCE_GROUP> --template-file <FILE>.bicep
+az stack group create --name <DEPLOYMENT_NAME> --resource-group <RESOURCE_GROUP> --template-file <FILE>.bicep --dm None --aou deleteResources --yes
+```
+
+# [Kubernetes](#tab/kubernetes)
+
+Create a Kubernetes manifest `.yaml` file with the following content.
+
+```yaml
+apiVersion: connectivity.iotoperations.azure.com/v1beta1
+kind: DataflowEndpoint
+metadata:
+ name: <ENDPOINT_NAME>
+ namespace: azure-iot-operations
+spec:
+ endpointType: localStorage
+ localStorageSettings:
+ persistentVolumeClaimRef: <PVC_NAME>
+```
+
+Then apply the manifest file to the Kubernetes cluster.
+
+```bash
+kubectl apply -f <FILE>.yaml
```
The PersistentVolumeClaim (PVC) must be in the same namespace as the *DataflowEn
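A plain Kubernetes PVC in the `azure-iot-operations` namespace looks like the following sketch; the name, access mode, and size are placeholders, and the volume should be backed by Azure Container Storage edge volumes as described above.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: <PVC_NAME>
  namespace: azure-iot-operations   # must match the DataflowEndpoint namespace
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```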
## Supported serialization formats
-The only supported serialization format is Parquet.
+The only supported serialization format is Parquet.
+
+## Next steps
+
+- [Create a dataflow](howto-create-dataflow.md)
iot-operations Howto Configure Mqtt Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/connect-to-cloud/howto-configure-mqtt-endpoint.md
MQTT dataflow endpoints are used for MQTT sources and destinations. You can conf
- An instance of [Azure IoT Operations Preview](../deploy-iot-ops/howto-deploy-iot-operations.md) - A [configured dataflow profile](howto-configure-dataflow-profile.md)
-## Azure IoT Operations Local MQTT broker
+## Azure IoT Operations local MQTT broker
### Default endpoint
To view or edit the default MQTT broker endpoint settings:
:::image type="content" source="media/howto-configure-mqtt-endpoint/default-mqtt-endpoint.png" alt-text="Screenshot using operations experience to view the default MQTT dataflow endpoint.":::
-# [Kubernetes](#tab/kubernetes)
-
-You can view the default MQTT broker endpoint settings in the Kubernetes cluster. To view the settings, use the following command:
-
-```bash
-kubectl get dataflowendpoint default -n azure-iot-operations -o yaml
-```
- # [Bicep](#tab/bicep) To edit the default endpoint, create a Bicep `.bicep` file with the following content. Update the settings as needed, and replace the placeholder values like `<AIO_INSTANCE_NAME>` with your own.
resource defaultMqttBrokerDataflowEndpoint 'Microsoft.IoTOperations/instances/da
Then, deploy via Azure CLI. ```azurecli
-az stack group create --name MyDeploymentStack --resource-group <RESOURCE_GROUP> --template-file <FILE>.bicep
+az stack group create --name MyDeploymentStack --resource-group <RESOURCE_GROUP> --template-file <FILE>.bicep --dm None --aou deleteResources --yes
+```
+
+# [Kubernetes](#tab/kubernetes)
+
+You can view the default MQTT broker endpoint settings in the Kubernetes cluster. To view the settings, use the following command:
+
+```bash
+kubectl get dataflowendpoint default -n azure-iot-operations -o yaml
```
You can also create new local MQTT broker endpoints with custom settings. For ex
| X509 client key | The private key corresponding to the X.509 client certificate. Required if using *X509 certificate*. | | X509 intermediate certificates | The intermediate certificates for the X.509 client certificate chain. Required if using *X509 certificate*. |
-# [Kubernetes](#tab/kubernetes)
-
-```yaml
-apiVersion: connectivity.iotoperations.azure.com/v1beta1
-kind: DataflowEndpoint
-metadata:
- name: <ENDPOINT_NAME>
- namespace: azure-iot-operations
-spec:
- endpointType: Mqtt
- mqttSettings:
- host: "<HOSTNAME>:<PORT>"
- tls:
- mode: Enabled
- trustedCaCertificateConfigMapRef: <TRUST_BUNDLE>
- authentication:
- method: ServiceAccountToken
- serviceAccountTokenSettings:
- audience: <SA_AUDIENCE>
-```
- # [Bicep](#tab/bicep) ```bicep
resource MqttBrokerDataflowEndpoint 'Microsoft.IoTOperations/instances/dataflowE
} ```
+# [Kubernetes](#tab/kubernetes)
+
+```yaml
+apiVersion: connectivity.iotoperations.azure.com/v1beta1
+kind: DataflowEndpoint
+metadata:
+ name: <ENDPOINT_NAME>
+ namespace: azure-iot-operations
+spec:
+ endpointType: Mqtt
+ mqttSettings:
+ host: "<HOSTNAME>:<PORT>"
+ tls:
+ mode: Enabled
+ trustedCaCertificateConfigMapRef: <TRUST_BUNDLE>
+ authentication:
+ method: ServiceAccountToken
+ serviceAccountTokenSettings:
+ audience: <SA_AUDIENCE>
+```
+ ## Azure Event Grid
Once the Event Grid namespace is configured, you can create a dataflow endpoint
1. Select **Apply** to provision the endpoint.
-# [Kubernetes](#tab/kubernetes)
-
-Create a Kubernetes manifest `.yaml` file with the following content.
-
-```yaml
-apiVersion: connectivity.iotoperations.azure.com/v1beta1
-kind: DataflowEndpoint
-metadata:
- name: <ENDPOINT_NAME>
- namespace: azure-iot-operations
-spec:
- endpointType: Mqtt
- mqttSettings:
- host: <NAMESPACE>.<REGION>-1.ts.eventgrid.azure.net:8883
- authentication:
- method: SystemAssignedManagedIdentity
- systemAssignedManagedIdentitySettings:
- {}
- tls:
- mode: Enabled
-```
-
-Then apply the manifest file to the Kubernetes cluster.
-
-```bash
-kubectl apply -f <FILE>.yaml
-```
- # [Bicep](#tab/bicep) Create a Bicep `.bicep` file with the following content.
resource remoteMqttBrokerDataflowEndpoint 'Microsoft.IoTOperations/instances/dat
Then, deploy via Azure CLI. ```azurecli
-az stack group create --name <DEPLOYMENT_NAME> --resource-group <RESOURCE_GROUP> --template-file <FILE>.bicep
+az stack group create --name <DEPLOYMENT_NAME> --resource-group <RESOURCE_GROUP> --template-file <FILE>.bicep --dm None --aou deleteResources --yes
+```
+
+# [Kubernetes](#tab/kubernetes)
+
+Create a Kubernetes manifest `.yaml` file with the following content.
+
+```yaml
+apiVersion: connectivity.iotoperations.azure.com/v1beta1
+kind: DataflowEndpoint
+metadata:
+ name: <ENDPOINT_NAME>
+ namespace: azure-iot-operations
+spec:
+ endpointType: Mqtt
+ mqttSettings:
+ host: <NAMESPACE>.<REGION>-1.ts.eventgrid.azure.net:8883
+ authentication:
+ method: SystemAssignedManagedIdentity
+ systemAssignedManagedIdentitySettings:
+ {}
+ tls:
+ mode: Enabled
+```
+
+Then apply the manifest file to the Kubernetes cluster.
+
+```bash
+kubectl apply -f <FILE>.yaml
```
For other MQTT brokers, you can configure the endpoint, TLS, authentication, and
1. Select **Apply** to provision the endpoint.
-# [Kubernetes](#tab/kubernetes)
-
-```yaml
-spec:
- endpointType: Mqtt
- mqttSettings:
- host: <HOST>:<PORT>
- authentication:
- # See available authentication methods below
- tls:
- mode: Enabled # or Disabled
- trustedCaCertificateConfigMapRef: <YOUR-CA-CERTIFICATE-CONFIG-MAP>
-```
- # [Bicep](#tab/bicep) ```bicep
mqttSettings: {
} ```
+# [Kubernetes](#tab/kubernetes)
+
+```yaml
+spec:
+ endpointType: Mqtt
+ mqttSettings:
+ host: <HOST>:<PORT>
+ authentication:
+ # See available authentication methods below
+ tls:
+ mode: Enabled # or Disabled
+ trustedCaCertificateConfigMapRef: <YOUR-CA-CERTIFICATE-CONFIG-MAP>
+```
+ To customize the MQTT endpoint settings, see the following sections for more information.
Enter the following settings for the endpoint:
| X509 intermediate certificates | The intermediate certificates for the X.509 client certificate chain. |
| X509 client key | The private key corresponding to the X.509 client certificate. |
-# [Kubernetes](#tab/kubernetes)
-
-```yaml
-mqttSettings:
- authentication:
- method: X509Certificate
- x509CertificateSettings:
- secretRef: <YOUR-X509-SECRET-NAME>
-```
- # [Bicep](#tab/bicep) ```bicep
mqttSettings: {
} ```
+# [Kubernetes](#tab/kubernetes)
+
+```yaml
+mqttSettings:
+ authentication:
+ method: X509Certificate
+ x509CertificateSettings:
+ secretRef: <YOUR-X509-SECRET-NAME>
+```
+ #### System-assigned managed identity
Then, configure the endpoint with system-assigned managed identity settings. In
In the operations experience dataflow endpoint settings page, select the **Basic** tab then choose **Authentication method** > **System assigned managed identity**.
-# [Kubernetes](#tab/kubernetes)
-
-```yaml
-mqttSettings:
- authentication:
- method: SystemAssignedManagedIdentity
- systemAssignedManagedIdentitySettings:
- {}
-```
- # [Bicep](#tab/bicep)
mqttSettings: {
} ``` --
-If you need to set a different audience, you can specify it in the settings.
-
-# [Portal](#tab/portal)
-
-Not supported.
- # [Kubernetes](#tab/kubernetes) ```yaml
mqttSettings:
authentication: method: SystemAssignedManagedIdentity systemAssignedManagedIdentitySettings:
- audience: https://<AUDIENCE>
+ {}
``` ++
+If you need to set a different audience, you can specify it in the settings.
+
+# [Portal](#tab/portal)
+
+Not supported.
+ # [Bicep](#tab/bicep) ```bicep
mqttSettings: {
} ```
+# [Kubernetes](#tab/kubernetes)
+
+```yaml
+mqttSettings:
+ authentication:
+ method: SystemAssignedManagedIdentity
+ systemAssignedManagedIdentitySettings:
+ audience: https://<AUDIENCE>
+```
+ #### User-assigned managed identity
In the operations experience dataflow endpoint settings page, select the **Basic
Enter the user assigned managed identity client ID, tenant ID, and scope in the appropriate fields.
-# [Kubernetes](#tab/kubernetes)
-
-```yaml
-mqttSettings:
- authentication:
- method: UserAssignedManagedIdentity
- userAssignedManagedIdentitySettings:
- clientId: <ID>
- tenantId: <ID>
- scope: <SCOPE>
-```
- # [Bicep](#tab/bicep) ```bicep
mqttSettings: {
} ```
+# [Kubernetes](#tab/kubernetes)
+
+```yaml
+mqttSettings:
+ authentication:
+ method: UserAssignedManagedIdentity
+ userAssignedManagedIdentitySettings:
+ clientId: <ID>
+ tenantId: <ID>
+ scope: <SCOPE>
+```
+ #### Kubernetes service account token (SAT)
In the operations experience dataflow endpoint settings page, select the **Basic
Enter the service audience.
-# [Kubernetes](#tab/kubernetes)
-
-```yaml
-mqttSettings:
- authentication:
- method: ServiceAccountToken
- serviceAccountTokenSettings:
- audience: <YOUR_SERVICE_ACCOUNT_AUDIENCE>
-```
- # [Bicep](#tab/bicep) ```bicep
mqttSettings: {
} ```
+# [Kubernetes](#tab/kubernetes)
+
+```yaml
+mqttSettings:
+ authentication:
+ method: ServiceAccountToken
+ serviceAccountTokenSettings:
+ audience: <YOUR_SERVICE_ACCOUNT_AUDIENCE>
+```
+ If the audience isn't specified, the default audience for the Azure IoT Operations MQTT broker is used.
To use anonymous authentication, set the authentication method to `Anonymous`.
Not yet supported in the operations experience. See [known issues](../troubleshoot/known-issues.md).
+# [Bicep](#tab/bicep)
+
+Not yet supported with Bicep. See [known issues](../troubleshoot/known-issues.md).
+ # [Kubernetes](#tab/kubernetes) ```yaml
mqttSettings:
{} ```
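As a rough sketch, an anonymous configuration likely follows the same pattern as the other authentication methods. The `anonymousSettings` property name is an assumption inferred from that pattern (compare `serviceAccountTokenSettings` and `x509CertificateSettings`), so verify it against the DataflowEndpoint reference:

```yaml
mqttSettings:
  authentication:
    method: Anonymous
    # Property name assumed by analogy with the other *Settings blocks
    anonymousSettings:
      {}
```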
-# [Bicep](#tab/bicep)
-
-Not yet supported with Bicep. See [known issues](../troubleshoot/known-issues.md).
- ## Advanced settings
In the operations experience, select the **Advanced** tab for the dataflow endpo
| Client ID prefix | The client ID is generated by appending the dataflow instance name to the prefix. |
| Cloud event attributes | For *Propagate*, CloudEvent properties are passed through for messages that contain the required properties. If the message doesn't contain the required properties, the message is passed through as is. For *Create or re-map*, CloudEvent properties are passed through for messages that contain the required properties. If the message doesn't contain the required properties, the properties are generated. |
-# [Kubernetes](#tab/kubernetes)
-
-You can set these settings in the dataflow endpoint manifest file.
+# [Bicep](#tab/bicep)
-```yaml
-mqttSettings:
+```bicep
+mqttSettings: {
qos: 1 retain: Keep sessionExpirySeconds: 3600 keepAliveSeconds: 60 maxInflightMessages: 100 protocol: WebSockets
- clientIdPrefix: dataflow
- CloudEventAttributes: Propagate # or CreateOrRemap
+ clientIdPrefix: 'dataflow'
+ CloudEventAttributes: 'Propagate' // or 'CreateOrRemap'
+}
```
-# [Bicep](#tab/bicep)
+# [Kubernetes](#tab/kubernetes)
-```bicep
-mqttSettings: {
+You can set these settings in the dataflow endpoint manifest file.
+
+```yaml
+mqttSettings:
qos: 1 retain: Keep sessionExpirySeconds: 3600 keepAliveSeconds: 60 maxInflightMessages: 100 protocol: WebSockets
- clientIdPrefix: 'dataflow'
- CloudEventAttributes: 'Propagate' // or 'CreateOrRemap'
-}
+ clientIdPrefix: dataflow
+ CloudEventAttributes: Propagate # or CreateOrRemap
```
To enable or disable TLS for the MQTT endpoint, update the `mode` setting in th
In the operations experience dataflow endpoint settings page, select the **Advanced** tab then use the checkbox next to **TLS mode enabled**.
-# [Kubernetes](#tab/kubernetes)
-
-```yaml
-mqttSettings:
- tls:
- mode: Enabled # or Disabled
-```
- # [Bicep](#tab/bicep) ```bicep
mqttSettings: {
} ```
+# [Kubernetes](#tab/kubernetes)
+
+```yaml
+mqttSettings:
+ tls:
+ mode: Enabled # or Disabled
+```
The TLS mode can be set to `Enabled` or `Disabled`. If the mode is set to `Enabled`, the dataflow uses a secure connection to the MQTT broker. If the mode is set to `Disabled`, the dataflow uses an insecure connection to the MQTT broker.
Configure the trusted CA certificate for the MQTT endpoint to establish a secure
In the operations experience dataflow endpoint settings page, select the **Advanced** tab then use the **Trusted CA certificate config map** field to specify the ConfigMap containing the trusted CA certificate.
-# [Kubernetes](#tab/kubernetes)
-
-```yaml
-mqttSettings:
- tls:
- trustedCaCertificateConfigMapRef: <YOUR_CA_CERTIFICATE>
-```
- # [Bicep](#tab/bicep) ```bicep
mqttSettings: {
} ```
+# [Kubernetes](#tab/kubernetes)
+
+```yaml
+mqttSettings:
+ tls:
+ trustedCaCertificateConfigMapRef: <YOUR_CA_CERTIFICATE>
+```
+ This ConfigMap should contain the CA certificate in PEM format. The ConfigMap must be in the same namespace as the MQTT dataflow resource. For example:
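For instance, a minimal ConfigMap manifest might look like the following sketch. The ConfigMap name is a placeholder, and the `ca.crt` key mirrors the key used by the default trust bundle; reference the name from `trustedCaCertificateConfigMapRef`.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  # Placeholder name; use it in trustedCaCertificateConfigMapRef
  name: custom-ca-trust-bundle
  namespace: azure-iot-operations
data:
  ca.crt: |
    -----BEGIN CERTIFICATE-----
    <BASE64_ENCODED_CA_CERTIFICATE>
    -----END CERTIFICATE-----
```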
You can set a client ID prefix for the MQTT client. The client ID is generated b
In the operations experience dataflow endpoint settings page, select the **Advanced** tab then use the **Client ID prefix** field to specify the prefix.
-# [Kubernetes](#tab/kubernetes)
-
-```yaml
-mqttSettings:
- clientIdPrefix: <YOUR_PREFIX>
-```
- # [Bicep](#tab/bicep) ```bicep
mqttSettings: {
} ```
+# [Kubernetes](#tab/kubernetes)
+
+```yaml
+mqttSettings:
+ clientIdPrefix: <YOUR_PREFIX>
+```
+ ### QoS
You can set the Quality of Service (QoS) level for the MQTT messages to either 1
In the operations experience dataflow endpoint settings page, select the **Advanced** tab then use the **Quality of service (QoS)** field to specify the QoS level.
-# [Kubernetes](#tab/kubernetes)
-
-```yaml
-mqttSettings:
- qos: 1 # Or 0
-```
- # [Bicep](#tab/bicep) ```bicep
mqttSettings: {
} ```
+# [Kubernetes](#tab/kubernetes)
+
+```yaml
+mqttSettings:
+ qos: 1 # Or 0
+```
+ ### Retain
To configure retain settings:
In the operations experience dataflow endpoint settings page, select the **Advanced** tab then use the **Retain** field to specify the retain setting.
-# [Kubernetes](#tab/kubernetes)
-
-```yaml
-mqttSettings:
- retain: Keep # or Never
-```
- # [Bicep](#tab/bicep) ```bicep
mqttSettings: {
} ```
+# [Kubernetes](#tab/kubernetes)
+
+```yaml
+mqttSettings:
+ retain: Keep # or Never
+```
+ The *retain* setting only takes effect if the dataflow uses MQTT endpoint as both source and destination. For example, in an [MQTT bridge](tutorial-mqtt-bridge.md) scenario.
You can set the session expiry interval for the dataflow MQTT client. The sessio
In the operations experience dataflow endpoint settings page, select the **Advanced** tab then use the **Session expiry** field to specify the session expiry interval.
-# [Kubernetes](#tab/kubernetes)
-
-```yaml
-mqttSettings:
- sessionExpirySeconds: 3600
-```
- # [Bicep](#tab/bicep) ```bicep
mqttSettings: {
} ```
+# [Kubernetes](#tab/kubernetes)
+
+```yaml
+mqttSettings:
+ sessionExpirySeconds: 3600
+```
+ ### MQTT or WebSockets protocol
By default, WebSockets isn't enabled. To use MQTT over WebSockets, set the `prot
In the operations experience dataflow endpoint settings page, select the **Advanced** tab then use the **Protocol** field to specify the protocol.
-# [Kubernetes](#tab/kubernetes)
-
-```yaml
-mqttSettings:
- protocol: WebSockets
-```
- # [Bicep](#tab/bicep) ```bicep
mqttSettings: {
} ```
+# [Kubernetes](#tab/kubernetes)
+
+```yaml
+mqttSettings:
+ protocol: WebSockets
+```
+ ### Max inflight messages
You can set the maximum number of inflight messages that the dataflow MQTT clien
In the operations experience dataflow endpoint settings page, select the **Advanced** tab then use the **Maximum in-flight messages** field to specify the maximum number of inflight messages.
-# [Kubernetes](#tab/kubernetes)
-
-```yaml
-mqttSettings:
- maxInflightMessages: 100
-```
- # [Bicep](#tab/bicep) ```bicep
mqttSettings: {
} ```
+# [Kubernetes](#tab/kubernetes)
+
+```yaml
+mqttSettings:
+ maxInflightMessages: 100
+```
+ For subscribe when the MQTT endpoint is used as a source, this is the receive maximum. For publish when the MQTT endpoint is used as a destination, this is the maximum number of messages to send before waiting for an acknowledgment.
You can set the keep alive interval for the dataflow MQTT client. The keep alive
In the operations experience dataflow endpoint settings page, select the **Advanced** tab then use the **Keep alive** field to specify the keep alive interval.
-# [Kubernetes](#tab/kubernetes)
-
-```yaml
-mqttSettings:
- keepAliveSeconds: 60
-```
- # [Bicep](#tab/bicep) ```bicep
mqttSettings: {
} ```
+# [Kubernetes](#tab/kubernetes)
+
+```yaml
+mqttSettings:
+ keepAliveSeconds: 60
+```
+ ### CloudEvents
The `CloudEventAttributes` options are `Propagate` or `CreateOrRemap`. To configu
In the operations experience dataflow endpoint settings page, select the **Advanced** tab then use the **Cloud event attributes** field to specify the CloudEvents setting.
-# [Kubernetes](#tab/kubernetes)
-
-```yaml
-mqttSettings:
- CloudEventAttributes: Propagate # or CreateOrRemap
-```
- # [Bicep](#tab/bicep) ```bicep
mqttSettings: {
} ```
+# [Kubernetes](#tab/kubernetes)
+
+```yaml
+mqttSettings:
+ CloudEventAttributes: Propagate # or CreateOrRemap
+```
+ The following sections provide more information about the CloudEvents settings.
CloudEvent properties are passed through for messages that contain the required
| `time` | No | Generated as RFC 3339 in the target client |
| `datacontenttype` | No | Changed to the output data content type after the optional transform stage |
| `dataschema` | No | Schema defined in the schema registry |
+## Next steps
+
+- [Create a dataflow](howto-create-dataflow.md)
iot-operations Howto Create Dataflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/connect-to-cloud/howto-create-dataflow.md
To create a dataflow in [operations experience](https://iotoperations.azure.com/
:::image type="content" source="media/howto-create-dataflow/create-dataflow.png" alt-text="Screenshot using operations experience to create a dataflow.":::
-# [Kubernetes](#tab/kubernetes)
-
-Create a Kubernetes manifest `.yaml` file to start creating a dataflow. This example shows the structure of the dataflow containing the source, transformation, and destination configurations.
-
-```yaml
-apiVersion: connectivity.iotoperations.azure.com/v1beta1
-kind: Dataflow
-metadata:
- name: <DATAFLOW_NAME>
- namespace: azure-iot-operations
-spec:
- # Reference to the default dataflow profile
- # This field is required when configuring via Kubernetes YAML
- # The syntax is different when using Bicep
- profileRef: default
- mode: Enabled
- operations:
- - operationType: Source
- sourceSettings:
- # See source configuration section
- - operationType: BuiltInTransformation
- builtInTransformationSettings:
- # See transformation configuration section
- - operationType: Destination
- destinationSettings:
- # See destination configuration section
-```
- # [Bicep](#tab/bicep) Create a Bicep `.bicep` file to start creating a dataflow. This example shows the structure of the dataflow containing the source, transformation, and destination configurations.
resource dataflow 'Microsoft.IoTOperations/instances/dataflowProfiles/dataflows@
} ```
+# [Kubernetes](#tab/kubernetes)
+
+Create a Kubernetes manifest `.yaml` file to start creating a dataflow. This example shows the structure of the dataflow containing the source, transformation, and destination configurations.
+
+```yaml
+apiVersion: connectivity.iotoperations.azure.com/v1beta1
+kind: Dataflow
+metadata:
+ name: <DATAFLOW_NAME>
+ namespace: azure-iot-operations
+spec:
+ # Reference to the default dataflow profile
+ # This field is required when configuring via Kubernetes YAML
+ # The syntax is different when using Bicep
+ profileRef: default
+ mode: Enabled
+ operations:
+ - operationType: Source
+ sourceSettings:
+ # See source configuration section
+ - operationType: BuiltInTransformation
+ builtInTransformationSettings:
+ # See transformation configuration section
+ - operationType: Destination
+ destinationSettings:
+ # See destination configuration section
+```
+ Review the following sections to learn how to configure the operation types of the dataflow.
You can use an [asset](../discover-manage-assets/overview-manage-assets.md) as t
1. Select **Apply** to use the asset as the source endpoint.
-# [Kubernetes](#tab/kubernetes)
+# [Bicep](#tab/bicep)
Configuring an asset as a source is only available in the operations experience.
-# [Bicep](#tab/bicep)
+# [Kubernetes](#tab/kubernetes)
Configuring an asset as a source is only available in the operations experience.
Configuring an asset as a source is only available in the operations experience.
1. Select **Apply**.
-# [Kubernetes](#tab/kubernetes)
-
-For example, to configure a source using an MQTT endpoint and two MQTT topic filters, use the following configuration:
-
-```yaml
-sourceSettings:
- endpointRef: default
- dataSources:
- - thermostats/+/telemetry/temperature/#
- - humidifiers/+/telemetry/humidity/#
-```
-
-Because `dataSources` allows you to specify MQTT or Kafka topics without modifying the endpoint configuration, you can reuse the endpoint for multiple dataflows even if the topics are different. To learn more, see [Configure data sources](#configure-data-sources-mqtt-or-kafka-topics).
- # [Bicep](#tab/bicep) The MQTT endpoint is configured in the Bicep template file. For example, the following endpoint is a source for the dataflow.
sourceSettings: {
Here, `dataSources` allows you to specify multiple MQTT or Kafka topics without needing to modify the endpoint configuration. This means the same endpoint can be reused across multiple dataflows, even if the topics vary. To learn more, see [Configure data sources](#configure-data-sources-mqtt-or-kafka-topics).
+# [Kubernetes](#tab/kubernetes)
+
+For example, to configure a source using an MQTT endpoint and two MQTT topic filters, use the following configuration:
+
+```yaml
+sourceSettings:
+ endpointRef: default
+ dataSources:
+ - thermostats/+/telemetry/temperature/#
+ - humidifiers/+/telemetry/humidity/#
+```
+
+Because `dataSources` allows you to specify MQTT or Kafka topics without modifying the endpoint configuration, you can reuse the endpoint for multiple dataflows even if the topics are different. To learn more, see [Configure data sources](#configure-data-sources-mqtt-or-kafka-topics).
+ For more information about the default MQTT endpoint and creating an MQTT endpoint as a dataflow source, see [MQTT Endpoint](howto-configure-mqtt-endpoint.md).
To configure, use Kubernetes YAML or Bicep. Replace placeholder values with your
Using a custom MQTT or Kafka endpoint as a source is currently not supported in the operations experience.
-# [Kubernetes](#tab/kubernetes)
-
-```yaml
-sourceSettings:
- endpointRef: <CUSTOM_ENDPOINT_NAME>
- dataSources:
- - <TOPIC_1>
- - <TOPIC_2>
- # See section on configuring MQTT or Kafka topics for more information
-```
- # [Bicep](#tab/bicep) ```bicep
sourceSettings: {
} ```
+# [Kubernetes](#tab/kubernetes)
+
+```yaml
+sourceSettings:
+ endpointRef: <CUSTOM_ENDPOINT_NAME>
+ dataSources:
+ - <TOPIC_1>
+ - <TOPIC_2>
+ # See section on configuring MQTT or Kafka topics for more information
+```
+ ### Configure data sources (MQTT or Kafka topics)
In the operations experience dataflow **Source details**, select **MQTT**, then
> [!NOTE] > Only one MQTT topic filter can be specified in the operations experience. To use multiple MQTT topic filters, use Bicep or Kubernetes.
-# [Kubernetes](#tab/kubernetes)
-
-```yaml
-sourceSettings:
- endpointRef: <MQTT_ENDPOINT_NAME>
- dataSources:
- - <MQTT_TOPIC_FILTER_1>
- - <MQTT_TOPIC_FILTER_2>
- # Add more MQTT topic filters as needed
-```
-
-Example with multiple MQTT topic filters with wildcards:
-
-```yaml
-sourceSettings:
- endpointRef: default
- dataSources:
- - thermostats/+/telemetry/temperature/#
- - humidifiers/+/telemetry/humidity/#
-```
-
-Here, the wildcard `+` is used to select all devices under the `thermostats` and `humidifiers` topics. The `#` wildcard is used to select all telemetry messages under all subtopics of the `temperature` and `humidity` topics.
- # [Bicep](#tab/bicep) ```bicep
sourceSettings: {
Here, the wildcard `+` is used to select all devices under the `thermostats` and `humidifiers` topics. The `#` wildcard is used to select all telemetry messages under all subtopics of the `temperature` and `humidity` topics.
+# [Kubernetes](#tab/kubernetes)
+
+```yaml
+sourceSettings:
+ endpointRef: <MQTT_ENDPOINT_NAME>
+ dataSources:
+ - <MQTT_TOPIC_FILTER_1>
+ - <MQTT_TOPIC_FILTER_2>
+ # Add more MQTT topic filters as needed
+```
+
+Example with multiple MQTT topic filters with wildcards:
+
+```yaml
+sourceSettings:
+ endpointRef: default
+ dataSources:
+ - thermostats/+/telemetry/temperature/#
+ - humidifiers/+/telemetry/humidity/#
+```
+
+Here, the wildcard `+` is used to select all devices under the `thermostats` and `humidifiers` topics. The `#` wildcard is used to select all telemetry messages under all subtopics of the `temperature` and `humidity` topics.
+ #### Kafka topics
To configure the Kafka topics:
Using a Kafka endpoint as a source is currently not supported in the operations experience.
-# [Kubernetes](#tab/kubernetes)
-
-```yaml
-sourceSettings:
- endpointRef: <KAFKA_ENDPOINT_NAME>
- dataSources:
- - <KAFKA_TOPIC_1>
- - <KAFKA_TOPIC_2>
- # Add more Kafka topics as needed
-```
- # [Bicep](#tab/bicep) ```bicep
sourceSettings: {
} ```
+# [Kubernetes](#tab/kubernetes)
+
+```yaml
+sourceSettings:
+ endpointRef: <KAFKA_ENDPOINT_NAME>
+ dataSources:
+ - <KAFKA_TOPIC_1>
+ - <KAFKA_TOPIC_2>
+ # Add more Kafka topics as needed
+```
+ ### Specify schema to deserialize data
To configure the schema used to deserialize the incoming messages from a source:
In operations experience dataflow **Source details**, select **MQTT** and use the **Message schema** field to specify the schema. You can use the **Upload** button to upload a schema file first. To learn more, see [Understand message schemas](concept-schema-registry.md).
-# [Kubernetes](#tab/kubernetes)
+# [Bicep](#tab/bicep)
Once you have used the [schema registry to store the schema](concept-schema-registry.md), you can reference it in the dataflow configuration.
-```yaml
-sourceSettings:
+```bicep
+sourceSettings: {
serializationFormat: Json schemaRef: aio-sr://<SCHEMA_NAMESPACE>/<SCHEMA_NAME>:<VERSION>
+}
```
-# [Bicep](#tab/bicep)
+# [Kubernetes](#tab/kubernetes)
Once you have used the [schema registry to store the schema](concept-schema-registry.md), you can reference it in the dataflow configuration.
-```bicep
-sourceSettings: {
+```yaml
+sourceSettings:
serializationFormat: Json schemaRef: aio-sr://<SCHEMA_NAMESPACE>/<SCHEMA_NAME>:<VERSION>
-}
```
To use shared subscriptions with MQTT sources, you can specify the shared subscr
In operations experience dataflow **Source details**, select **MQTT** and use the **MQTT topic** field to specify the shared subscription group and topic.
-# [Kubernetes](#tab/kubernetes)
-
-```yaml
-sourceSettings:
- dataSources:
- - $shared/<GROUP_NAME>/<TOPIC_FILTER>
-```
- # [Bicep](#tab/bicep) ```bicep
sourceSettings: {
} ```
+# [Kubernetes](#tab/kubernetes)
+
+```yaml
+sourceSettings:
+ dataSources:
+ - $shared/<GROUP_NAME>/<TOPIC_FILTER>
+```
+ > [!NOTE]
In the operations experience, select **Dataflow** > **Add transform (optional)**
:::image type="content" source="media/howto-create-dataflow/dataflow-transform.png" alt-text="Screenshot using operations experience to add a transform to a dataflow.":::
-# [Kubernetes](#tab/kubernetes)
-
-```yaml
-builtInTransformationSettings:
- datasets:
- # See section on enriching data
- filter:
- # See section on filtering data
- map:
- # See section on mapping data
-```
- # [Bicep](#tab/bicep) ```bicep
builtInTransformationSettings: {
} ```
+# [Kubernetes](#tab/kubernetes)
+
+```yaml
+builtInTransformationSettings:
+ datasets:
+ # See section on enriching data
+ filter:
+ # See section on filtering data
+ map:
+ # See section on mapping data
+```
+ ### Enrich: Add reference data
You can load sample data into the DSS by using the [DSS set tool sample](https:/
Currently, the enrich operation isn't available in the operations experience.
-# [Kubernetes](#tab/kubernetes)
+# [Bicep](#tab/bicep)
-For example, you could use the `deviceId` field in the source data to match the `asset` field in the dataset:
+This example shows how you could use the `deviceId` field in the source data to match the `asset` field in the dataset:
-```yaml
-builtInTransformationSettings:
- datasets:
- - key: assetDataset
- inputs:
- - $source.deviceId # - $1
- - $context(assetDataset).asset # - $2
- expression: $1 == $2
+```bicep
+builtInTransformationSettings: {
+ datasets: [
+ {
+ key: 'assetDataset'
+ inputs: [
+ '$source.deviceId', // $1
+ '$context(assetDataset).asset' // - $2
+ ]
+ expression: '$1 == $2'
+ }
+ ]
+}
``` If the dataset has a record with the `asset` field, similar to:
If the dataset has a record with the `asset` field, similar to:
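For illustration only, such a record might look like the following (shown as YAML for readability; the values other than `thermostat1` are hypothetical):

```yaml
# Hypothetical record stored in the DSS under the assetDataset key
asset: thermostat1
location: building-1
manufacturer: Contoso
```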
The data from the source with the `deviceId` field matching `thermostat1` has the `location` and `manufacturer` fields available in filter and map stages.
-# [Bicep](#tab/bicep)
+# [Kubernetes](#tab/kubernetes)
-This example shows how you could use the `deviceId` field in the source data to match the `asset` field in the dataset:
+For example, you could use the `deviceId` field in the source data to match the `asset` field in the dataset:
-```bicep
-builtInTransformationSettings: {
- datasets: [
- {
- key: 'assetDataset'
- inputs: [
- '$source.deviceId', // $1
- '$context(assetDataset).asset' // - $2
- ]
- expression: '$1 == $2'
- }
- ]
-}
+```yaml
+builtInTransformationSettings:
+ datasets:
+ - key: assetDataset
+ inputs:
+ - $source.deviceId # - $1
+ - $context(assetDataset).asset # - $2
+ expression: $1 == $2
``` If the dataset has a record with the `asset` field, similar to:
To filter the data on a condition, you can use the `filter` stage. The condition
For example, you could use a filter condition like `temperature > 20` to filter out data with a `temperature` value less than or equal to 20.
-# [Kubernetes](#tab/kubernetes)
-
-For example, you could use the `temperature` field in the source data to filter the data:
-
-```yaml
-builtInTransformationSettings:
- filter:
- - inputs:
- - temperature ? $last # - $1
- expression: "$1 > 20"
-```
-
-If the `temperature` field is greater than 20, the data is passed to the next stage. If the `temperature` field is less than or equal to 20, the data is filtered.
- # [Bicep](#tab/bicep) For example, you could use the `temperature` field in the source data to filter the data:
builtInTransformationSettings: {
If the `temperature` field is greater than 20, the data is passed to the next stage. If the `temperature` field is less than or equal to 20, the data is filtered.
+# [Kubernetes](#tab/kubernetes)
+
+For example, you could use the `temperature` field in the source data to filter the data:
+
+```yaml
+builtInTransformationSettings:
+ filter:
+ - inputs:
+ - temperature ? $last # - $1
+ expression: "$1 > 20"
+```
+
+If the `temperature` field is greater than 20, the data is passed to the next stage. If the `temperature` field is less than or equal to 20, the data is filtered.
+ ### Map: Move data from one field to another
In the operations experience, mapping is currently supported using **Compute** t
1. Select **Apply**.
-# [Kubernetes](#tab/kubernetes)
-
-For example, you could use the `temperature` field in the source data to convert the temperature to Celsius and store it in the `temperatureCelsius` field. You could also enrich the source data with the `location` field from the contextualization dataset:
-
-```yaml
-builtInTransformationSettings:
- map:
- - inputs:
- - temperature # - $1
- expression: "($1 - 32) * 5/9"
- output: temperatureCelsius
- - inputs:
- - $context(assetDataset).location
- output: location
-```
- # [Bicep](#tab/bicep) For example, you could use the `temperature` field in the source data to convert the temperature to Celsius and store it in the `temperatureCelsius` field. You could also enrich the source data with the `location` field from the contextualization dataset:
builtInTransformationSettings: {
} ```
+# [Kubernetes](#tab/kubernetes)
+
+For example, you could use the `temperature` field in the source data to convert the temperature to Celsius and store it in the `temperatureCelsius` field. You could also enrich the source data with the `location` field from the contextualization dataset:
+
+```yaml
+builtInTransformationSettings:
+ map:
+ - inputs:
+ - temperature # - $1
+ expression: "($1 - 32) * 5/9"
+ output: temperatureCelsius
+ - inputs:
+ - $context(assetDataset).location
+ output: location
+```
+ To learn more, see [Map data by using dataflows](concept-dataflow-mapping.md) and [Convert data by using dataflows](concept-dataflow-conversions.md).
If you want to serialize the data before sending it to the destination, you need
Currently, specifying the output schema and serialization isn't supported in the operations experience.
-# [Kubernetes](#tab/kubernetes)
+# [Bicep](#tab/bicep)
Once you [upload a schema to the schema registry](concept-schema-registry.md#upload-schema), you can reference it in the dataflow configuration.
-```yaml
-builtInTransformationSettings:
+```bicep
+builtInTransformationSettings: {
serializationFormat: Delta schemaRef: aio-sr://<SCHEMA_NAMESPACE>/<SCHEMA>:<VERSION>
+}
```
-# [Bicep](#tab/bicep)
+# [Kubernetes](#tab/kubernetes)
Once you [upload a schema to the schema registry](concept-schema-registry.md#upload-schema), you can reference it in the dataflow configuration.
-```bicep
-builtInTransformationSettings: {
+```yaml
+builtInTransformationSettings:
serializationFormat: Delta schemaRef: aio-sr://<SCHEMA_NAMESPACE>/<SCHEMA>:<VERSION>
-}
```
To send data to a destination other than the local MQTT broker, create a dataflo
1. Select **Proceed** to configure the destination.
1. Enter the required settings for the destination, including the topic or table to send the data to. See [Configure data destination (topic, container, or table)](#configure-data-destination-topic-container-or-table) for more information.
-# [Kubernetes](#tab/kubernetes)
-
-```yaml
-destinationSettings:
- endpointRef: <CUSTOM_ENDPOINT_NAME>
- dataDestination: <TOPIC_OR_TABLE> # See section on configuring data destination
-```
- # [Bicep](#tab/bicep) ```bicep
destinationSettings: {
} ```
+# [Kubernetes](#tab/kubernetes)
+
+```yaml
+destinationSettings:
+ endpointRef: <CUSTOM_ENDPOINT_NAME>
+ dataDestination: <TOPIC_OR_TABLE> # See section on configuring data destination
+```
+ ### Configure data destination (topic, container, or table)
When using the operations experience, the data destination field is automaticall
:::image type="content" source="media/howto-create-dataflow/data-destination.png" alt-text="Screenshot showing the operations experience prompting the user to enter an MQTT topic given the endpoint type.":::
-# [Kubernetes](#tab/kubernetes)
+# [Bicep](#tab/bicep)
The syntax is the same for all dataflow endpoints:
-```yaml
-destinationSettings:
+```bicep
+destinationSettings: {
endpointRef: <CUSTOM_ENDPOINT_NAME> dataDestination: <TOPIC_OR_TABLE>
+}
``` For example, to send data back to the local MQTT broker on a static MQTT topic, use the following configuration:
-```yaml
-destinationSettings:
+```bicep
+destinationSettings: {
endpointRef: default dataDestination: example-topic
+}
``` Or, if you have a custom event hub endpoint, the configuration would look like:
-```yaml
-destinationSettings:
+```bicep
+destinationSettings: {
endpointRef: my-eh-endpoint dataDestination: individual-event-hub
+}
``` Another example using a storage endpoint as the destination:
-```yaml
-destinationSettings:
+```bicep
+destinationSettings: {
endpointRef: my-adls-endpoint dataDestination: my-container
+}
```
-# [Bicep](#tab/bicep)
+# [Kubernetes](#tab/kubernetes)
The syntax is the same for all dataflow endpoints:
-```bicep
-destinationSettings: {
+```yaml
+destinationSettings:
endpointRef: <CUSTOM_ENDPOINT_NAME> dataDestination: <TOPIC_OR_TABLE>
-}
``` For example, to send data back to the local MQTT broker on a static MQTT topic, use the following configuration:
-```bicep
-destinationSettings: {
+```yaml
+destinationSettings:
endpointRef: default dataDestination: example-topic
-}
``` Or, if you have a custom event hub endpoint, the configuration would look like:
-```bicep
-destinationSettings: {
+```yaml
+destinationSettings:
endpointRef: my-eh-endpoint dataDestination: individual-event-hub
-}
``` Another example using a storage endpoint as the destination:
-```bicep
-destinationSettings: {
+```yaml
+destinationSettings:
endpointRef: my-adls-endpoint dataDestination: my-container
-}
```
Select the dataflow you want to export and select **Export** from the toolbar.
:::image type="content" source="media/howto-create-dataflow/dataflow-export.png" alt-text="Screenshot using operations experience to export a dataflow.":::
+# [Bicep](#tab/bicep)
+
+Because Bicep is infrastructure as code, no export is required. Use the [Bicep template file to create a dataflow](https://github.com/Azure-Samples/explore-iot-operations/blob/main/samples/quickstarts/dataflow.bicep) to quickly set up and configure dataflows.
+ # [Kubernetes](#tab/kubernetes) ```bash kubectl get dataflow my-dataflow -o yaml > my-dataflow.yaml ```
-# [Bicep](#tab/bicep)
+
-Bicep is infrastructure as code and no export is required. Use the [Bicep template file to create a dataflow](https://github.com/Azure-Samples/explore-iot-operations/blob/main/samples/quickstarts/dataflow.bicep) to quickly set up and configure dataflows.
+## Next steps
-
+- [Map data by using dataflows](concept-dataflow-mapping.md)
+- [Convert data by using dataflows](concept-dataflow-conversions.md)
+- [Enrich data by using dataflows](concept-dataflow-enrich.md)
+- [Understand message schemas](concept-schema-registry.md)
+- [Manage dataflow profiles](howto-configure-dataflow-profile.md)
iot-operations Tutorial Mqtt Bridge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/connect-to-cloud/tutorial-mqtt-bridge.md
Using Azure CLI, find the principal ID for the Azure IoT Operations Arc extensio
```azurecli export PRINCIPAL_ID=$(az k8s-extension list \ --resource-group $RESOURCE_GROUP \
- --cluster-name <CLUSTER-NAME> \
+ --cluster-name $CLUSTER_NAME \
--cluster-type connectedClusters \ --query "[?extensionType=='microsoft.iotoperations'].identity.principalId | [0]" -o tsv) echo $PRINCIPAL_ID
Take note of the output value for `topicSpacesConfiguration.hostname` that is a
example.region-1.ts.eventgrid.azure.net ```
-## Create an Azure IoT Operations MQTT broker dataflow endpoint
+## Understand the default Azure IoT Operations MQTT broker dataflow endpoint
-# [Bicep](#tab/bicep)
-
-The dataflow and dataflow endpoints for MQTT broker and Azure Event Grid can be deployed as standard Azure resources since they have Azure Resource Provider (RPs) implementations. This Bicep template file from [Bicep File for MQTT-bridge dataflow Tutorial](https://github.com/Azure-Samples/explore-iot-operations/blob/main/samples/quickstarts/dataflow.bicep) deploys the necessary dataflow and dataflow endpoints.
-
-Download the file to your local, and make sure to replace the values for `customLocationName`, `aioInstanceName`, `eventGridHostName` with yours.
+By default, Azure IoT Operations deploys an MQTT broker as well as an MQTT broker dataflow endpoint. The MQTT broker dataflow endpoint is used to connect to the MQTT broker. The default configuration uses the built-in service account token for authentication. The endpoint is named `default` and is available in the same namespace as Azure IoT Operations. The endpoint is used as the source for the dataflows you create in the next steps.
-Next, execute the following command in your terminal:
+To learn more about the default MQTT broker dataflow endpoint, see [Azure IoT Operations local MQTT broker default endpoint](../connect-to-cloud/howto-configure-mqtt-endpoint.md#default-endpoint).
-```azurecli
-az stack group create --name MyDeploymentStack --resource-group $RESOURCE_GROUP --template-file /workspaces/explore-iot-operations/mqtt-bridge.bicep --action-on-unmanage 'deleteResources' --deny-settings-mode 'none' --yes
-```
-This endpoint is the source for the dataflow that sends messages to Azure Event Grid.
-
-```bicep
-resource MqttBrokerDataflowEndpoint 'Microsoft.IoTOperations/instances/dataflowEndpoints@2024-08-15-preview' = {
- parent: aioInstance
- name: 'aiomq'
- extendedLocation: {
- name: customLocation.id
- type: 'CustomLocation'
- }
- properties: {
- endpointType: 'Mqtt'
- mqttSettings: {
- authentication: {
- method: 'ServiceAccountToken'
- serviceAccountTokenSettings: {
- audience: 'aio-internal'
- }
- }
- host: 'aio-broker:18883'
- tls: {
- mode: 'Enabled'
- trustedCaCertificateConfigMapRef: 'azure-iot-operations-aio-ca-trust-bundle'
- }
- }
- }
-}
-```
-
-# [Kubernetes](#tab/kubernetes)
-
-Create dataflow endpoint for the Azure IoT Operations built-in MQTT broker. This endpoint is the source for the dataflow that sends messages to Azure Event Grid.
+## Create an Azure Event Grid dataflow endpoint
-```yaml
-apiVersion: connectivity.iotoperations.azure.com/v1beta1
-kind: DataflowEndpoint
-metadata:
- name: mq
- namespace: azure-iot-operations
-spec:
- endpointType: Mqtt
- mqttSettings:
- authentication:
- method: ServiceAccountToken
- serviceAccountTokenSettings: {}
-```
+Create a dataflow endpoint for Azure Event Grid. This endpoint is the destination for the dataflow that sends messages to Azure Event Grid. Replace `<EVENT_GRID_HOSTNAME>` with the MQTT hostname you got from the previous step, and include the port number `8883`.
-
+# [Bicep](#tab/bicep)
-This is the default configuration for the Azure IoT Operations MQTT broker endpoint. The authentication method is set to `ServiceAccountToken` to use the built-in service account token for authentication.
+The dataflow and dataflow endpoints for Azure Event Grid can be deployed as standard Azure resources since they have Azure Resource Provider (RP) implementations. This Bicep template file from the [Bicep File for MQTT-bridge dataflow Tutorial](https://github.com/Azure-Samples/explore-iot-operations/blob/main/samples/quickstarts/dataflow.bicep) deploys the necessary dataflow and dataflow endpoints.
-## Create an Azure Event Grid dataflow endpoint
+Download the file to your local machine, and replace the values for `customLocationName`, `aioInstanceName`, and `eventGridHostName` with your own.
-# [Bicep](#tab/bicep)
+```bicep
+param customLocationName string = '<CUSTOM_LOCATION_NAME>'
+param aioInstanceName string = '<AIO_INSTANCE_NAME>'
+param eventGridHostName string = '<EVENT_GRID_HOSTNAME>:8883'
-Since you already deployed the resources in the previous section, there's no additional deployment needed. However, this endpoint is the destination for the dataflow that sends messages to Azure Event Grid. Replace `<EVENT-GRID-HOSTNAME>` with the hostname you got from the previous step. Include the port number `8883`.
+resource customLocation 'Microsoft.ExtendedLocation/customLocations@2021-08-31-preview' existing = {
+ name: customLocationName
+}
-```bicep
+resource aioInstance 'Microsoft.IoTOperations/instances@2024-08-15-preview' existing = {
+ name: aioInstanceName
+}
resource remoteMqttBrokerDataflowEndpoint 'Microsoft.IoTOperations/instances/dataflowEndpoints@2024-08-15-preview' = { parent: aioInstance name: 'eventgrid'
resource remoteMqttBrokerDataflowEndpoint 'Microsoft.IoTOperations/instances/dat
method: 'SystemAssignedManagedIdentity' systemAssignedManagedIdentitySettings: {} }
- host: '<NAMESPACE>.<REGION>-1.ts.eventgrid.azure.net:8883'
+ host: eventGridHostName
tls: { mode: 'Enabled' }
resource remoteMqttBrokerDataflowEndpoint 'Microsoft.IoTOperations/instances/dat
} ```
-# [Kubernetes](#tab/kubernetes)
+Next, execute the following command in your terminal. Replace `<FILE>` with the name of the Bicep file you downloaded.
-Create dataflow endpoint for the Azure Event Grid. This endpoint is the destination for the dataflow that sends messages to Azure Event Grid. Replace `<EVENT-GRID-HOSTNAME>` with the hostname you got from the previous step. Include the port number `8883`.
+```azurecli
+az stack group create --name DeployDataflowEndpoint --resource-group $RESOURCE_GROUP --template-file <FILE>.bicep --action-on-unmanage 'deleteResources' --deny-settings-mode 'none' --yes
+```
+
+# [Kubernetes](#tab/kubernetes)
```yaml apiVersion: connectivity.iotoperations.azure.com/v1beta1
metadata:
spec: endpointType: Mqtt mqttSettings:
- host: <EVENT-GRID-HOSTNAME>:8883
+ host: <EVENT_GRID_HOSTNAME>:8883
authentication: method: SystemAssignedManagedIdentity systemAssignedManagedIdentitySettings: {}
Since the Event Grid MQTT broker requires TLS, the `tls` setting is enabled. No
## Create dataflows
-# [Bicep](#tab/bicep)
+Create two dataflows with the Azure IoT Operations MQTT broker endpoint as the source and the Azure Event Grid endpoint as the destination, and vice versa. You don't need to configure a transformation.
-In this example, there are two dataflows with the Azure IoT Operations MQTT broker endpoint as the source and the Azure Event Grid endpoint as the destination, and vice versa. No need to configure transformation.
+# [Bicep](#tab/bicep)
```bicep
+param customLocationName string = '<CUSTOM_LOCATION_NAME>'
+param aioInstanceName string = '<AIO_INSTANCE_NAME>'
+
+resource customLocation 'Microsoft.ExtendedLocation/customLocations@2021-08-31-preview' existing = {
+ name: customLocationName
+}
+resource aioInstance 'Microsoft.IoTOperations/instances@2024-08-15-preview' existing = {
+ name: aioInstanceName
+}
+resource defaultDataflowProfile 'Microsoft.IoTOperations/instances/dataflowProfiles@2024-08-15-preview' existing = {
+ parent: aioInstance
+ name: 'default'
+}
resource dataflow_1 'Microsoft.IoTOperations/instances/dataflowProfiles/dataflows@2024-08-15-preview' = { parent: defaultDataflowProfile name: 'local-to-remote'
resource dataflow_1 'Microsoft.IoTOperations/instances/dataflowProfiles/dataflow
{ operationType: 'Source' sourceSettings: {
- endpointRef: MqttBrokerDataflowEndpoint.name
+ endpointRef: 'default'
dataSources: array('tutorial/local') } }
resource dataflow_1 'Microsoft.IoTOperations/instances/dataflowProfiles/dataflow
operationType: 'Destination' destinationSettings: { endpointRef: remoteMqttBrokerDataflowEndpoint.name
- dataDestination: 'telemetry/iot-mq'
+ dataDestination: 'telemetry/aio'
} } ]
resource dataflow_2 'Microsoft.IoTOperations/instances/dataflowProfiles/dataflow
{ operationType: 'Destination' destinationSettings: {
- endpointRef: MqttBrokerDataflowEndpoint.name
+ endpointRef: 'default'
dataDestination: 'tutorial/cloud' } }
resource dataflow_2 'Microsoft.IoTOperations/instances/dataflowProfiles/dataflow
} ```
+As you did for the dataflow endpoint, execute the following command in your terminal:
+
+```azurecli
+az stack group create --name DeployDataflows --resource-group $RESOURCE_GROUP --template-file <FILE>.bicep --action-on-unmanage 'deleteResources' --deny-settings-mode 'none' --yes
+```
+ # [Kubernetes](#tab/kubernetes)
-Create two dataflows with the Azure IoT Operations MQTT broker endpoint as the source and the Azure Event Grid endpoint as the destination, and vice versa. No need to configure transformation.
```yaml apiVersion: connectivity.iotoperations.azure.com/v1beta1
spec:
operations: - operationType: Source sourceSettings:
- endpointRef: mq
+ endpointRef: default
dataSources: - tutorial/local - operationType: Destination destinationSettings: endpointRef: eventgrid
- dataDestination: telemetry/iot-mq
+ dataDestination: telemetry/aio
apiVersion: connectivity.iotoperations.azure.com/v1beta1 kind: Dataflow
spec:
- telemetry/# - operationType: Destination destinationSettings:
- endpointRef: mq
+ endpointRef: default
dataDestination: tutorial/cloud ```
Together, the two dataflows form an MQTT bridge, where you:
* Use TLS for both remote and local brokers
* Use system-assigned managed identity for authentication to the remote broker
* Use Kubernetes service account for authentication to the local broker
-* Use the topic map to map the `tutorial/local` topic to the `telemetry/iot-mq` topic on the remote broker
+* Use the topic map to map the `tutorial/local` topic to the `telemetry/aio` topic on the remote broker
* Use the topic map to map the `telemetry/#` topic on the remote broker to the `tutorial/cloud` topic on the local broker
-When you publish to the `tutorial/local` topic on the local Azure IoT Operations MQTT broker, the message is bridged to the `telemetry/iot-mq` topic on the remote Event Grid MQTT broker. Then, the message is bridged back to the `tutorial/cloud` topic (because the `telemetry/#` wildcard topic captures it) on the local Azure IoT Operations MQTT broker. Similarly, when you publish to the `telemetry/iot-mq` topic on the remote Event Grid MQTT broker, the message is bridged to the `tutorial/cloud` topic on the local Azure IoT Operations MQTT broker.
+When you publish to the `tutorial/local` topic on the local Azure IoT Operations MQTT broker, the message is bridged to the `telemetry/aio` topic on the remote Event Grid MQTT broker. Then, the message is bridged back to the `tutorial/cloud` topic (because the `telemetry/#` wildcard topic captures it) on the local Azure IoT Operations MQTT broker. Similarly, when you publish to the `telemetry/aio` topic on the remote Event Grid MQTT broker, the message is bridged to the `tutorial/cloud` topic on the local Azure IoT Operations MQTT broker.
## Deploy MQTT client
Currently, Bicep isn't applicable for deploying the MQTT client.
```yaml apiVersion: v1
+kind: ServiceAccount
+metadata:
+ name: mqtt-client
+ namespace: azure-iot-operations
+
+apiVersion: v1
kind: Pod metadata: name: mqtt-client
metadata:
# Otherwise use the long hostname: aio-broker.azure-iot-operations.svc.cluster.local namespace: azure-iot-operations spec:
- # Use the "mqtt-client" service account which comes with default deployment
+ # Use the "mqtt-client" service account from above
# Otherwise create it with `kubectl create serviceaccount mqtt-client -n azure-iot-operations` serviceAccountName: mqtt-client containers:
spec:
expirationSeconds: 86400 - name: trust-bundle configMap:
- name: aio-ca-trust-bundle-test-only # Default root CA cert
+ name: azure-iot-operations-aio-ca-trust-bundle # Default root CA cert
``` Apply the deployment file with kubectl.
iot-operations Howto Develop Dapr Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/create-edge-apps/howto-develop-dapr-apps.md
The following definition components might require customization to your specific
# Certificate chain for Dapr to validate the MQTT broker - name: aio-ca-trust-bundle configMap:
- name: aio-ca-trust-bundle-test-only
+ name: azure-iot-operations-aio-ca-trust-bundle
containers: # Container for the Dapr application
iot-operations Howto Develop Mqttnet Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/create-edge-apps/howto-develop-mqttnet-apps.md
spec:
# Certificate chain for the application to validate the MQTT broker - name: aio-ca-trust-bundle configMap:
- name: aio-ca-trust-bundle-test-only
+ name: azure-iot-operations-aio-ca-trust-bundle
containers: - name: mqtt-client-dotnet
iot-operations Tutorial Event Driven With Dapr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/create-edge-apps/tutorial-event-driven-with-dapr.md
To start, create a yaml file that uses the following definitions:
# Certificate chain for Dapr to validate the MQTT broker - name: aio-ca-trust-bundle configMap:
- name: aio-ca-trust-bundle-test-only
+ name: azure-iot-operations-aio-ca-trust-bundle
containers: - name: mq-event-driven-dapr
To verify the MQTT bridge is working, deploy an MQTT client to the cluster.
expirationSeconds: 86400 - name: aio-ca-trust-bundle configMap:
- name: aio-ca-trust-bundle-test-only
+ name: azure-iot-operations-aio-ca-trust-bundle
``` 1. Apply the deployment file with kubectl:
iot-operations Howto Test Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/manage-mqtt-broker/howto-test-connection.md
The first option is to connect from within the cluster. This option uses the def
```yaml apiVersion: v1
+ kind: ServiceAccount
+ metadata:
+ name: mqtt-client
+ namespace: azure-iot-operations
+
+ apiVersion: v1
kind: Pod metadata: name: mqtt-client
The first option is to connect from within the cluster. This option uses the def
# Otherwise use the long hostname: aio-broker.azure-iot-operations.svc.cluster.local namespace: azure-iot-operations spec:
- # Use the "mqtt-client" service account which comes with default deployment
+ # Use the "mqtt-client" service account created from above
# Otherwise create it with `kubectl create serviceaccount mqtt-client -n azure-iot-operations` serviceAccountName: mqtt-client containers:
The first option is to connect from within the cluster. This option uses the def
Since the broker uses TLS, the client must trust the broker's TLS certificate chain. You need to configure the client to trust the root CA certificate used by the broker.
-To use the default root CA certificate, download it from the `aio-ca-trust-bundle-test-only` ConfigMap:
+To use the default root CA certificate, download it from the `azure-iot-operations-aio-ca-trust-bundle` ConfigMap:
```bash
-kubectl get configmap aio-ca-trust-bundle-test-only -n azure-iot-operations -o jsonpath='{.data.ca\.crt}' > ca.crt
+kubectl get configmap azure-iot-operations-aio-ca-trust-bundle -n azure-iot-operations -o jsonpath='{.data.ca\.crt}' > ca.crt
``` Use the downloaded `ca.crt` file to configure your client to trust the broker's TLS certificate chain.
iot Concepts Eclipse Threadx Security Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/concepts-eclipse-threadx-security-practices.md
- Title: Eclipse ThreadX security guidance for embedded devices
-description: Learn best practices for developing secure applications on embedded devices when you use Eclipse ThreadX.
---- Previously updated : 04/08/2024--
-# Develop secure embedded applications with Eclipse ThreadX
-
-This article offers guidance on implementing security for IoT devices that run Eclipse ThreadX and connect to Azure IoT services. Eclipse ThreadX is a real-time operating system (RTOS) for embedded devices. It includes a networking stack and middleware and helps you securely connect your application to the cloud.
-
-The security of an IoT application depends on your choice of hardware and how your application implements and uses security features. Use this article as a starting point to understand the main issues for further investigation.
-
-## Microsoft security principles
-
-When you design IoT devices, we recommend an approach based on the principle of *Zero Trust*. As a prerequisite to this article, read [Zero Trust: Cyber security for IoT](https://azure.microsoft.com/mediahandler/files/resourcefiles/zero-trust-cybersecurity-for-the-internet-of-things/Zero%20Trust%20Security%20Whitepaper_4.30_3pm.pdf). This brief paper outlines categories to consider when you implement security across an IoT ecosystem. Device security is emphasized.
-
-The following sections discuss the key components for cryptographic security.
--- **Strong identity:** Devices need a strong identity that includes the following technology solutions:-
- - **Hardware root of trust**: This strong hardware-based identity should be immutable and backed by hardware isolation and protection mechanisms.
- - **Passwordless authentication**: This type of authentication is often achieved by using X.509 certificates and asymmetric cryptography, where private keys are secured and isolated in hardware. Use passwordless authentication for the device identity in onboarding or attestation scenarios and the device's operational identity with other cloud services.
- - **Renewable credentials**: Secure the device's operational identity by using renewable, short-lived credentials. X.509 certificates backed by a secure public key infrastructure (PKI) with a renewal period appropriate for the device's security posture provide an excellent solution.
--- **Least-privileged access:** Devices should enforce least-privileged access control on local resources across workloads. For example, a firmware component that reports battery level shouldn't be able to access a camera component.-- **Continual updates**: A device should enable the over-the-air (OTA) feature, such as the [Device Update for IoT Hub](../iot-hub-device-update/device-update-azure-real-time-operating-system.md) to push the firmware that contains the patches or bug fixes.-- **Security monitoring and responses**: A device should be able to proactively report the security postures for the solution builder to monitor the potential threats for a large number of devices. You can use [Microsoft Defender for IoT](../defender-for-iot/device-builders/concept-rtos-security-module.md) for that purpose.-
-## Embedded security components: Cryptography
-
-Cryptography is a foundation of security in networked devices. Networking protocols such as Transport Layer Security (TLS) rely on cryptography to protect and authenticate information that travels over a network or the public internet.
-
-A secure IoT device that connects to a server or cloud service by using TLS or similar protocols requires strong cryptography with protection for keys and secrets that are based in hardware. Most other security mechanisms provided by those protocols are built on cryptographic concepts. Proper cryptographic support is the most critical consideration when you develop a secure connected IoT device.
-
-The following sections discuss the key components for cryptographic security.
-
-### True random hardware-based entropy source
-
-Any cryptographic application using TLS or cryptographic operations that require random values for keys or secrets must have an approved random entropy source. Without proper true randomness, statistical methods can be used to derive keys and secrets much faster than brute-force attacks, weakening otherwise strong cryptography.
-
-Modern embedded devices should support some form of cryptographic random number generator (CRNG) or "true" random number generator (TRNG). CRNGs and TRNGs are used to feed the random number generator that's passed into a TLS application.
-
-Hardware random number generators (HRNGs) supply some of the best sources of entropy. HRNGs typically generate values based on statistically random noise signals generated in a physical process rather than from a software algorithm.
-
-Government agencies and standards bodies around the world provide guidelines for random number generators. Some examples are the National Institute of Standards and Technology (NIST) in the US, the National Cybersecurity Agency of France, and the Federal Office for Information Security in Germany.
-
-**Hardware**: True entropy can only come from hardware sources. There are various methods to obtain cryptographic randomness, but all require physical processes to be considered secure.
-
-**Eclipse ThreadX**: Eclipse ThreadX uses random numbers for cryptography and TLS. For more information, see the user guide for each protocol in the [Eclipse ThreadX NetX Duo documentation](https://github.com/eclipse-threadx/rtos-docs/blob/main/rtos-docs/netx-duo/index.md).
-
-**Application**: You must provide a random number function and link it into your application, including Eclipse ThreadX.
-
-> [!IMPORTANT]
-> The C library function `rand()` does *not* use a hardware-based RNG by default. It's critical to assure that a proper random routine is used. The setup is specific to your hardware platform.
-
-### Real-time capability
-
-Real-time capability is primarily needed for checking the expiration date of X.509 certificates. TLS also uses timestamps as part of its session negotiation. Certain applications might require accurate time reporting. Options for obtaining accurate time include:
--- A real-time clock (RTC) device.-- The Network Time Protocol (NTP) to obtain time over a network.-- A Global Positioning System (GPS), which includes timekeeping.-
-> [!IMPORTANT]
-> Accurate time is nearly as critical as a TRNG for secure applications that use TLS and X.509.
-
-Many devices use a hardware RTC backed by synchronization over a network service or GPS. Devices might also rely solely on an RTC or on a network service or GPS. Regardless of the implementation, take measures to prevent drift.
-
-You also need to protect hardware components from tampering. And you need to guard against spoofing attacks when you use network services or GPS. If an attacker can spoof time, they can induce your device to accept expired certificates.
-
-**Hardware**: If you implement a hardware RTC and NTP or other network-based solutions are unavailable for syncing, the RTC should:
--- Be accurate enough for certificate expiration checks of an hour resolution or better.-- Be securely updatable or resistant to drift over the lifetime of the device.-- Maintain time across power failures or resets.-
-An invalid time disrupts all TLS communication. The device might even be rendered unreachable.
-
-**Eclipse ThreadX**: Eclipse ThreadX TLS uses time data for several security-related functions. You must provide a function for retrieving time data from the RTC or network. For more information, see the [NetX Duo secure TLS user guide](https://github.com/eclipse-threadx/rtos-docs/blob/main/rtos-docs/netx-duo/netx-duo-secure-tls/chapter1.md).
-
-**Application**: Depending on the time source used, your application might be required to initialize the functionality so that TLS can properly obtain the time information.
-
-### Use approved cryptographic routines with strong key sizes
-
-Many cryptographic routines are available today. When you design an application, research the cryptographic routines that you'll need. Choose the strongest and largest keys possible. Look to NIST or other organizations that provide guidance on appropriate cryptography for different applications. Consider these factors:
-
-- Choose key sizes that are appropriate for your application. Rivest-Shamir-Adleman (RSA) encryption is still acceptable in some organizations, but only if the key is 2048 bits or larger. For the Advanced Encryption Standard (AES), minimum key sizes of 128 bits are often required.
-- Choose modern, widely accepted algorithms. Choose cipher modes that provide the highest level of security available for your application.
-- Avoid using algorithms that are considered obsolete like the Data Encryption Standard and the Message Digest Algorithm 5.
-- Consider the lifetime of your application. Adjust your choices to account for continued reduction in the security of current routines and key sizes.
-- Consider making key sizes and algorithms updatable to adjust to changing security requirements.
-- Use constant-time cryptographic techniques whenever possible to mitigate timing attack vulnerabilities.
-
-**Hardware**: If you use hardware-based cryptography, your choices might be limited. Choose hardware that exceeds your minimum cryptographic and security needs. Use the strongest routines and keys available on that platform.
-
-**Eclipse ThreadX**: Eclipse ThreadX provides drivers for select cryptographic hardware platforms and software implementations for certain routines. Adding new routines and key sizes is straightforward.
-
-**Application**: If your application requires cryptographic operations, use the strongest approved routines possible.
-
-### Hardware-based cryptography acceleration
-
-Hardware cryptographic acceleration offloads cryptographic operations from the CPU, but it almost always requires supporting software to achieve your security goals. Timing attacks exploit the duration of a cryptographic operation to derive information about a secret key.
-
-Hardware cryptographic peripherals that perform operations in constant time, regardless of key or data properties, help prevent this kind of attack. Every platform is likely to be different: there's no accepted standard for cryptographic hardware beyond the accepted cryptographic algorithms themselves, such as AES and RSA.
-
-> [!IMPORTANT]
-> Hardware cryptographic acceleration doesn't necessarily equate to enhanced security. For example:
->
-> - Some cryptographic accelerators implement only the Electronic Codebook (ECB) mode of the cipher. You must implement more secure modes like Galois/Counter Mode, Counter with CBC-MAC, or Cipher Block Chaining (CBC). ECB isn't semantically secure.
->
-> - Cryptographic accelerators often leave key protection to the developer.
->
-
-Combine hardware cryptography acceleration that implements secure cipher modes with hardware-based protection for keys. The combination provides a higher level of security for cryptographic operations.
-
-**Hardware**: There are few standards for hardware cryptographic acceleration, so each platform varies in available functionality. For more information, consult your microcontroller unit (MCU) vendor.
-
-**Eclipse ThreadX**: Eclipse ThreadX provides drivers for select cryptographic hardware platforms. For more information on hardware-based cryptography, check your Eclipse ThreadX cryptography documentation.
-
-**Application**: If your application requires cryptographic operations, make use of all hardware-based cryptography that's available.
-
-## Embedded security components: Device identity
-
-In IoT systems, the notion that each endpoint represents a unique physical device challenges some of the assumptions that are built into the modern internet. As a result, a secure IoT device must be able to uniquely identify itself. If not, an attacker could imitate a valid device to steal data, send fraudulent information, or tamper with device functionality.
-
-Confirm that each IoT device that connects to a cloud service identifies itself in a way that can't be easily bypassed.
-
-The following sections discuss the key security components for device identity.
-
-### Unique verifiable device identifier
-
-A unique device identifier is known as a device ID. It allows a cloud service to verify the identity of a specific physical device. It also verifies that the device belongs to a particular group. A device ID is the digital equivalent of a physical serial number. It must be globally unique and protected. If the device ID is compromised, there's no way to distinguish between the physical device it represents and a fraudulent client.
-
-In most modern connected devices, the device ID is tied to cryptography. For example:
-
-- It might be a private-public key pair, where the private key is globally unique and associated only with the device.
-- It might be a private-public key pair, where the private key is associated with a set of devices and is used in combination with another identifier that's unique to the device.
-- It might be cryptographic material that's used to derive private keys unique to the device.
-
-Regardless of implementation, the device ID and any associated cryptographic material must be hardware protected. For example, use a hardware security module (HSM).
-
-The device ID can be used for client authentication with a cloud service or server. It's best to split the device ID from operational certificates typically used for such purposes. To lessen the attack surface, operational certificates should be short-lived. The public portion of the device ID shouldn't be widely distributed. Instead, the device ID can be used to sign or derive private keys associated with operational certificates.
-
-> [!NOTE]
-> A device ID is tied to a physical device, usually in a cryptographic manner. It provides a root of trust. It can be thought of as a "birth certificate" for the device. A device ID represents a unique identity that applies to the entire lifespan of the device.
->
-> Other forms of IDs, such as for attestation or operational identification, are updated periodically, like a driver's license. They frequently identify the owner. Security is maintained by requiring periodic updates or renewals.
->
-> Just like a birth certificate is used to get a driver's license, the device ID is used to get an operational ID. Within IoT, both the device ID and operational ID are frequently provided as X.509 certificates. They use the associated private keys to cryptographically tie the IDs to the specific hardware.
-
-**Hardware**: Tie a device ID to the hardware. It must not be easily replicated. Require hardware-based cryptographic features like those found in an HSM. Some MCU devices might provide similar functionality.
-
-**Eclipse ThreadX**: No specific Eclipse ThreadX features use device IDs. Communication to cloud services via TLS might require an X.509 certificate that's tied to the device ID.
-
-**Application**: No specific features are required for user applications. A unique device ID might be required for certain applications.
-
-### Certificate management
-
-If your device uses a certificate from a PKI, your application needs to update those certificates periodically. This applies to the device's own certificate and to any trusted certificates used for verifying servers. More frequent updates improve the overall security of your application.
-
-**Hardware**: Tie all certificate private keys to your device. Ideally, the key is generated internally by the hardware and is never exposed to your application. Mandate the ability to generate X.509 certificate requests on the device.
-
-**Eclipse ThreadX**: Eclipse ThreadX TLS provides basic X.509 certificate support. Certificate revocation lists (CRLs) and policy parsing are supported. They require manual management in your application without a supporting SDK.
-
-**Application**: Make use of CRLs or Online Certificate Status Protocol to validate that certificates haven't been revoked by your PKI. Make sure to enforce X.509 policies, validity periods, and expiration dates required by your PKI.
-
-### Attestation
-
-Some devices provide a secret key or value that's uniquely loaded into each specific device. Usually, permanent fuses are used. The secret key or value is used to check the ownership or status of the device. Whenever possible, it's best to use this hardware-based value, though not necessarily directly. Use it as part of any process where the device needs to identify itself to a remote host.
-
-This value is coupled with a secure boot mechanism to prevent fraudulent use of the secret ID. Depending on the cloud services being used and their PKI, the device ID might be tied to an X.509 certificate. Whenever possible, the attestation device ID should be separate from "operational" certificates used to authenticate a device.
-
-Device status in attestation scenarios can include information to help a service determine the device's state. Information can include firmware version and component health. It can also include life-cycle state, for example, running versus debugging. Device attestation is often involved in OTA firmware update protocols to ensure that the correct updates are delivered to the intended device.
-
-> [!NOTE]
-> "Attestation" is distinct from "authentication." Attestation uses an external authority to determine whether a device belongs to a particular group by using cryptography. Authentication uses cryptography to verify that a host (device) owns a private key in a challenge-response process, such as the TLS handshake.
-
-**Hardware**: The selected hardware must be able to supply a secret unique identifier. This functionality is tied into cryptographic hardware like a TPM or HSM. A specific API is required for attestation services.
-
-**Eclipse ThreadX**: No specific Eclipse ThreadX functionality is required.
-
-**Application**: The user application might be required to implement logic to tie the hardware features to whatever attestation the chosen cloud service requires.
-
-## Embedded security components: Memory protection
-
-Many successful hacking attacks use buffer overflow errors to gain access to privileged information or even to execute arbitrary code on a device. Numerous technologies and languages have been created to battle overflow problems. Because system-level embedded development requires low-level programming, most embedded development is done by using C or assembly language.
-
-These languages lack modern memory protection schemes but allow for less restrictive memory manipulation. Because built-in protection is lacking, you must be vigilant about memory corruption. The following recommendations make use of functionality provided by some MCU platforms and Eclipse ThreadX itself to help mitigate the effect of overflow errors on security.
-
-The following sections discuss the key security components for memory protection.
-
-### Protection against reading or writing memory
-
-An MCU might provide a latching mechanism that enables a tamper-resistant state. It works either by preventing reading of sensitive data or by locking areas of memory from being overwritten. This technology might be part of, or in addition to, a Memory Protection Unit (MPU) or a Memory Management Unit (MMU).
-
-**Hardware**: The MCU must provide the appropriate hardware and interface to use memory protection.
-
-**Eclipse ThreadX**: If the memory protection mechanism isn't an MMU or MPU, Eclipse ThreadX doesn't require any specific support. For more advanced memory protection, you can use Eclipse ThreadX Modules for detailed control over memory spaces for threads and other RTOS control structures.
-
-**Application**: Application developers might be required to enable memory protection when the device is first booted. For more information, see secure boot documentation. For simple mechanisms that aren't MMU or MPU, the application might place sensitive data like certificates into the protected memory region. The application can then access the data by using the hardware platform APIs.
-
-### Application memory isolation
-
-If your hardware platform has an MMU or MPU, those features can be used to isolate the memory spaces used by individual threads or processes. Sophisticated mechanisms like Arm TrustZone also provide protections beyond what a simple MPU can do. This isolation can thwart attackers from using a hijacked thread or process to corrupt or view memory in another thread or process.
-
-**Hardware**: The MCU must provide the appropriate hardware and interface to use memory protection.
-
-**Eclipse ThreadX**: Eclipse ThreadX allows for ThreadX Modules that are built independently or separately and are provided with their own instruction and data area addresses at runtime. Memory protection can then be enabled so that a context switch to a thread in a module disallows code from accessing memory outside of the assigned area.
-
-> [!NOTE]
-> TLS and Message Queuing Telemetry Transport (MQTT) aren't yet supported from ThreadX Modules.
-
-**Application**: You might be required to enable memory protection when the device is first booted. For more information, see secure boot and ThreadX Modules documentation. Use of ThreadX Modules might introduce more memory and CPU overhead.
-
-### Protection against execution from RAM
-
-Many MCU devices contain an internal "program flash" where the application firmware is stored. The application code is sometimes run directly from the flash hardware and uses the RAM only for data.
-
-If the MCU allows execution of code from RAM, look for a way to disable that feature. Many attacks try to modify the application code in some way. If the attacker can't execute code from RAM, it's more difficult to compromise the device.
-
-Placing your application in flash makes it more difficult to change. Flash technology requires an unlock, erase, and write process. Although flash increases the challenge for an attacker, it's not a perfect solution. To provide for renewable security, the flash needs to be updatable. A read-only code section is better at preventing attacks on executable code, but it prevents updating.
-
-**Hardware**: Presence of a program flash used for code storage and execution. If running in RAM is required, consider using an MMU or MPU, if available. Use of an MMU or MPU protects from writing to the executable memory space.
-
-**Eclipse ThreadX**: No specific features.
-
-**Application**: The application might need to disable flash writing during secure boot depending on the hardware.
-
-### Memory buffer checking
-
-Avoiding buffer overflow problems is a primary concern for code running on connected devices. Applications written in unmanaged languages like C are susceptible to buffer overflow issues. Safe coding practices can alleviate some of the problems.
-
-Whenever possible, try to incorporate buffer checking into your application. You might be able to make use of built-in features of the selected hardware platform, third-party libraries, and tools. Even features in the hardware itself can provide a mechanism for detecting or preventing overflow conditions.
-
-**Hardware**: Some platforms might provide memory checking functionality. Consult with your MCU vendor for more information.
-
-**Eclipse ThreadX**: No specific Eclipse ThreadX functionality is provided.
-
-**Application**: Follow good coding practice by requiring applications to always supply buffer size or the number of elements in an operation. Avoid relying on implicit terminators such as NULL. With a known buffer size, the program can check bounds during memory or array operations, such as when calling APIs like `memcpy`. Try to use safe versions of APIs like `memcpy_s`.
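As a minimal sketch of this practice, the following helper takes an explicit destination size and rejects oversized input instead of relying on implicit terminators. The function name and error convention are illustrative assumptions.

```c
#include <stddef.h>
#include <string.h>

/* Copy at most dest_size bytes, rejecting oversized input instead of
   silently truncating or overflowing the destination buffer. */
static int copy_payload(unsigned char *dest, size_t dest_size,
                        const unsigned char *src, size_t src_len)
{
    if ((dest == NULL) || (src == NULL) || (src_len > dest_size))
    {
        return -1;  /* Caller must handle the error; never copy past the buffer. */
    }

    memcpy(dest, src, src_len);
    return 0;
}
```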
-
-### Enable runtime stack checking
-
-Preventing stack overflow is a primary security concern for any application. Whenever possible, use Eclipse ThreadX stack checking features. These features are covered in the Eclipse ThreadX user guide.
-
-**Hardware**: Some MCU platform vendors might provide hardware-based stack checking. Use any functionality that's available.
-
-**Eclipse ThreadX**: Eclipse ThreadX provides some stack checking functionality that can be optionally enabled at compile time. For more information, see the [ThreadX documentation](https://github.com/eclipse-threadx/rtos-docs/blob/main/rtos-docs/threadx/index.md).
-
-**Application**: Certain compilers such as IAR also have "stack canary" support that helps to catch stack overflow conditions. Check your tools to see what options are available and enable them if possible.
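The following is a minimal sketch of enabling ThreadX runtime stack checking, assuming the application is built with `TX_ENABLE_STACK_CHECKING` defined. The handler name and recovery policy are placeholders for whatever your product requires.

```c
#include "tx_api.h"

/* Invoked by ThreadX when a thread stack overflow is detected.
   Requires building ThreadX with TX_ENABLE_STACK_CHECKING defined. */
static VOID stack_error_handler(TX_THREAD *thread_ptr)
{
    (void)thread_ptr;
    /* Placeholder: log the event, then halt or reset per your recovery policy. */
}

VOID register_stack_error_handler(VOID)
{
    /* Register the notification callback with ThreadX. */
    tx_thread_stack_error_notify(stack_error_handler);
}
```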
-
-## Embedded security components: Secure boot and firmware update
-
- An IoT device, unlike a traditional embedded device, is often connected over the internet to a cloud service for monitoring and data gathering. As a result, it's nearly certain that the device will be probed in some way. Probing can lead to an attack if a vulnerability is found.
-
-A successful attack might result in the discovery of an unknown vulnerability that compromises the device. Other devices of the same kind could also be compromised. For this reason, it's critical that an IoT device can be updated quickly and easily. The firmware image itself must be verified because if an attacker can load a compromised image onto a device, that device is lost.
-
-The solution is to pair a secure boot mechanism with remote firmware update capability. This capability is also called an OTA update. Secure boot verifies that a firmware image is valid and trusted. An OTA update mechanism allows updates to be quickly and securely deployed to the device.
-
-The following sections discuss the key security components for secure boot and firmware update.
-
-### Secure boot
-
-It's vital that a device can prove it's running valid firmware upon reset. Secure boot prevents the device from running untrusted or modified firmware images. Secure boot mechanisms are tied to the hardware platform. They validate the firmware image against internally protected measurements before loading the application. If validation fails, the device refuses to boot the corrupted image.
-
-**Hardware**: MCU vendors might provide their own proprietary secure boot mechanisms because secure boot is tied to the hardware.
-
-**Eclipse ThreadX**: No specific Eclipse ThreadX functionality is required for secure boot. Third-party commercial vendors offer secure boot products.
-
-**Application**: The application might be affected by secure boot if OTA updates are enabled. The application itself might need to be responsible for retrieving and loading new firmware images. OTA update is tied to secure boot. You need to build the application with versioning and code-signing to support updates with secure boot.
-
-### Firmware or OTA update
-
-An OTA update, sometimes referred to as a firmware update, involves updating the firmware image on your device to a new version to add features or fix bugs. OTA update is important for security because vulnerabilities that are discovered must be patched as soon as possible.
-
-> [!NOTE]
-> OTA updates *must* be tied to secure boot and code signing. Otherwise, it's impossible to validate that new images aren't compromised.
-
-**Hardware**: Various implementations for OTA update exist. Some MCU vendors provide OTA update solutions that are tied to their hardware. Some OTA update mechanisms can also use extra storage space, for example, flash. The storage space is used for rollback protection and to provide uninterrupted application functionality during update downloads.
-
-**Eclipse ThreadX**: No specific Eclipse ThreadX functionality is required for OTA updates.
-
-**Application**: Third-party software solutions for OTA update also exist and might be used by an Eclipse ThreadX application. You need to build the application with versioning and code-signing to support updates with secure boot.
-
-### Roll back or downgrade protection
-
-Secure boot and OTA update must work together to provide an effective firmware update mechanism. Secure boot must be able to ingest a new firmware image from the OTA mechanism and mark the new version as being trusted.
-
-The OTA and secure boot mechanism must also protect against downgrade attacks. If an attacker can force a rollback to an earlier trusted version that has known vulnerabilities, the OTA and secure boot mechanisms fail to provide proper security.
-
-Downgrade protection also applies to revoked certificates or credentials.
-
-**Hardware**: No specific hardware functionality is required, except as part of secure boot, OTA, or certificate management.
-
-**Eclipse ThreadX**: No specific Eclipse ThreadX functionality is required.
-
-**Application**: No specific application support is required, depending on requirements for OTA, secure boot, and certificate management.
-
-### Code signing
-
-Make use of any features for signing and verifying code or credential updates. Code signing involves generating a cryptographic hash of the firmware or application image. That hash is used to verify the integrity of the image received by the device. Typically, a trusted root X.509 certificate is used to verify the hash signature. This process is tied into secure boot and OTA update mechanisms.
-
-**Hardware**: No specific hardware functionality is required except as part of OTA update or secure boot. Use hardware-based signature verification if it's available.
-
-**Eclipse ThreadX**: No specific Eclipse ThreadX functionality is required.
-
-**Application**: Code signing is tied to secure boot and OTA update mechanisms to verify the integrity of downloaded firmware images.
-
-## Embedded security components: Protocols
-
-The following sections discuss the key security components for protocols.
-
-### Use the latest version of TLS possible for connectivity
-
-Support current TLS versions:
-
-- TLS 1.2 is currently (as of 2022) the most widely used TLS version.
-- TLS 1.3 is the latest TLS version. Finalized in 2018, TLS 1.3 adds many security and performance enhancements. It isn't widely deployed. If your application can support TLS 1.3, we recommend it for new applications.
-
-> [!NOTE]
-> TLS 1.0 and TLS 1.1 are obsolete protocols. Don't use them for new application development. They're disabled by default in Eclipse ThreadX.
-
-**Hardware**: No specific hardware requirements.
-
-**Eclipse ThreadX**: TLS 1.2 is enabled by default. TLS 1.3 support must be explicitly enabled in Eclipse ThreadX because TLS 1.2 is still the de facto standard.
-
-Also ensure that the corresponding NetX Duo Secure configuration options shown below are set. For details, refer to the [list of configurations](https://github.com/eclipse-threadx/rtos-docs/blob/main/rtos-docs/netx-duo/netx-duo-secure-tls/chapter2.md).
-
-```c
-/* Enables secure session renegotiation extension */
-#define NX_SECURE_TLS_DISABLE_SECURE_RENEGOTIATION 0
-
-/* Disables protocol version downgrade for TLS client. */
-#define NX_SECURE_TLS_DISABLE_PROTOCOL_VERSION_DOWNGRADE
-```
-
-When setting up NetX Duo TLS, use [`nx_secure_tls_session_time_function_set()`](https://github.com/eclipse-threadx/rtos-docs/blob/main/rtos-docs/netx-duo/netx-duo-secure-tls/chapter4.md#nx_secure_tls_session_time_function_set) to set a timing function that returns the current GMT in UNIX 32-bit format to enable checking of certificate expiration dates.
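As a sketch of that setup, the following registers a time callback with an existing TLS session. `platform_get_unix_time()` is a hypothetical hook for your RTC or NTP-synchronized clock.

```c
#include "nx_secure_tls_api.h"

/* Hypothetical platform hook that returns the current GMT as a 32-bit UNIX
   timestamp, for example from an RTC kept in sync over NTP. */
extern ULONG platform_get_unix_time(VOID);

static ULONG tls_time_callback(VOID)
{
    return platform_get_unix_time();
}

UINT tls_time_source_configure(NX_SECURE_TLS_SESSION *tls_session)
{
    /* Give TLS a trusted time source so certificate expiration can be checked. */
    return nx_secure_tls_session_time_function_set(tls_session, tls_time_callback);
}
```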
-
-**Application**: To use TLS with cloud services, a certificate is required. The certificate must be managed by the application.
-
-### Use X.509 certificates for TLS authentication
-
-X.509 certificates are used to authenticate a device to a server and a server to a device. A device certificate is used to prove the identity of a device to a server.
-
-Trusted root CA certificates are used by a device to authenticate a server or service to which it connects. The ability to update these certificates is critical. Certificates can be compromised and have limited lifespans.
-
-Use hardware-based X.509 certificates with TLS mutual authentication and a PKI with active monitoring of certificate status for the highest level of security.
-
-**Hardware**: No specific hardware requirements.
-
-**Eclipse ThreadX**: Eclipse ThreadX TLS provides basic X.509 authentication through TLS and some user APIs for further processing.
-
-**Application**: Depending on requirements, the application might have to enforce X.509 policies. CRLs should be enforced to ensure revoked certificates are rejected.
-
-### Use the strongest cryptographic options and cipher suites for TLS
-
-Use the strongest cryptography and cipher suites available for TLS. You need the ability to update TLS and cryptography. Over time, certain cipher suites and TLS versions might become compromised or discontinued.
-
-**Hardware**: If cryptographic acceleration is available, use it.
-
-**Eclipse ThreadX**: Eclipse ThreadX TLS provides hardware drivers for select devices that support cryptography in hardware. For routines not supported in hardware, the [NetX Duo cryptography library](https://github.com/eclipse-threadx/rtos-docs/blob/main/rtos-docs/netx-duo/netx-duo-crypto/chapter1.md) is designed specifically for embedded systems. A FIPS 140-2 certified library that uses the same code base is also available.
-
-**Application**: Applications that use TLS should choose cipher suites that use hardware-based cryptography when it's available. They should also use the strongest keys available. Note the following TLS Cipher Suites, supported in TLS 1.2, don't provide forward secrecy:
-
-- **TLS_RSA_WITH_AES_128_CBC_SHA256**
-- **TLS_RSA_WITH_AES_256_CBC_SHA256**
-
-Consider using **TLS_RSA_WITH_AES_128_GCM_SHA256** if available.
-
-SHA-1 is no longer considered cryptographically secure. Avoid cipher suites that use SHA-1 (such as **TLS_RSA_WITH_AES_128_CBC_SHA**) if possible.
-
-AES-CBC mode is susceptible to Lucky Thirteen attacks. Applications should use AES-GCM cipher suites (such as **TLS_RSA_WITH_AES_128_GCM_SHA256**) instead.
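As a rough sketch of selecting a cipher table, the following creates a TLS session bound to the cipher table supplied by the NetX Duo cryptography library. The table name `nx_crypto_tls_ciphers` and the metadata buffer size are assumptions that depend on your build and the suites you enable.

```c
#include "nx_secure_tls_api.h"

/* Cipher table from the NetX Duo cryptography library; trim it to only the
   suites your deployment needs (the table name and required metadata size
   may vary with your configuration). */
extern const NX_SECURE_TLS_CRYPTO nx_crypto_tls_ciphers;

static UCHAR tls_crypto_metadata[16000];  /* Size depends on the enabled ciphers. */

UINT tls_session_setup(NX_SECURE_TLS_SESSION *tls_session)
{
    /* Bind the session to the chosen cipher table. */
    return nx_secure_tls_session_create(tls_session,
                                        &nx_crypto_tls_ciphers,
                                        tls_crypto_metadata,
                                        sizeof(tls_crypto_metadata));
}
```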
-
-### TLS mutual certificate authentication
-
-When you use X.509 authentication in TLS, opt for mutual certificate authentication. With mutual authentication, both the server and client must provide a verifiable certificate for identification.
-
-Use hardware-based X.509 certificates with TLS mutual authentication and a PKI with active monitoring of certificate status for the highest level of security.
-
-**Hardware**: No specific hardware requirements.
-
-**Eclipse ThreadX**: Eclipse ThreadX TLS provides support for mutual certificate authentication in both TLS server and client applications. For more information, see the [NetX Duo secure TLS documentation](https://github.com/eclipse-threadx/rtos-docs/blob/main/rtos-docs/netx-duo/netx-duo-secure-tls/chapter1.md).
-
-**Application**: Applications that use TLS should default to mutual certificate authentication whenever possible. Mutual authentication requires TLS clients to have a device certificate. Although mutual authentication is an optional TLS feature, you should use it whenever you can.
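The sketch below shows the client-side pieces of mutual authentication with NetX Secure TLS, assuming the device and trusted root certificates have already been initialized as `NX_SECURE_X509_CERT` objects. Error handling and certificate provisioning are simplified.

```c
#include "nx_secure_tls_api.h"

/* device_cert and trusted_root_cert are assumed to be NX_SECURE_X509_CERT
   objects already initialized from DER-encoded certificate data. */
UINT tls_mutual_auth_configure(NX_SECURE_TLS_SESSION *tls_session,
                               NX_SECURE_X509_CERT *device_cert,
                               NX_SECURE_X509_CERT *trusted_root_cert)
{
    UINT status;

    /* Device (client) certificate presented during the TLS handshake. */
    status = nx_secure_tls_local_certificate_add(tls_session, device_cert);
    if (status != NX_SUCCESS)
    {
        return status;
    }

    /* Trusted root CA certificate used to verify the peer's certificate chain. */
    return nx_secure_tls_trusted_certificate_add(tls_session, trusted_root_cert);
}
```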
-
-### Only use TLS-based MQTT
-
-If your device uses MQTT for cloud communication, only use MQTT over TLS.
-
-**Hardware**: No specific hardware requirements.
-
-**Eclipse ThreadX**: Eclipse ThreadX provides MQTT over TLS as a default configuration.
-
-**Application**: Applications that use MQTT should only use TLS-based MQTT with mutual certificate authentication.
-
-## Embedded security components: Application design and development
-
-The following sections discuss the key security components for application design and development.
-
-### Disable debugging features
-
-For development, most MCU devices use a JTAG interface or similar interface to provide information to debuggers or other applications. If you leave a debugging interface enabled on your device, you give an attacker an easy door into your application. Make sure to disable all debugging interfaces. Also remove associated debugging code from your application before deployment.
-
-**Hardware**: Some devices might have hardware support to permanently disable debugging interfaces, or the interface might be physically removable from the device. Removing the interface physically does *not* mean the interface is disabled. You might need to disable the interface on boot, for example, during a secure boot process. Always disable the debugging interface in production devices.
-
-**Eclipse ThreadX**: Not applicable.
-
-**Application**: If the device doesn't have a feature to permanently disable debugging interfaces, the application might have to disable those interfaces on boot. Disable debugging interfaces as early as possible in the boot process. Preferably, disable those interfaces during a secure boot before the application is running.
-
-### Watchdog timers
-
-When available, an IoT device should use a watchdog timer to reset an unresponsive application. Resetting the device when time runs out limits the amount of time an attacker might have to execute an exploit.
-
-The watchdog can be reinitialized by the application. Some basic integrity checks can also be done like looking for code executing in RAM, checksums on data, and identity checks. If an attacker doesn't account for the watchdog timer reset while trying to compromise the device, the device would reboot into a (theoretically) clean state. A secure boot mechanism would be required to verify the identity of the application image.
-
-**Hardware**: Watchdog timer support in hardware, secure boot functionality.
-
-**Eclipse ThreadX**: No specific Eclipse ThreadX functionality is required.
-
-**Application**: Watchdog timer management. For more information, see the device hardware platform documentation.
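A minimal sketch of a watchdog refresh thread follows. `board_watchdog_refresh()` and `board_integrity_checks_pass()` are hypothetical board support hooks, because the real watchdog API is defined by your MCU vendor.

```c
#include "tx_api.h"

/* Hypothetical board support functions; the real watchdog and integrity-check
   APIs are specific to your MCU and documented by the silicon vendor. */
extern void board_watchdog_refresh(void);
extern int  board_integrity_checks_pass(void);

#define WATCHDOG_REFRESH_INTERVAL_TICKS 100u  /* Assumes a 10 ms tick; adjust for your port. */

/* Dedicated thread: refresh the watchdog only while basic integrity checks
   pass, so a hijacked or hung system eventually resets. */
VOID watchdog_thread_entry(ULONG input)
{
    (void)input;

    for (;;)
    {
        if (board_integrity_checks_pass())
        {
            board_watchdog_refresh();
        }

        tx_thread_sleep(WATCHDOG_REFRESH_INTERVAL_TICKS);
    }
}
```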
-
-### Remote error logging
-
-Use cloud resources to record and analyze device failures remotely. Aggregate errors to find patterns that indicate possible vulnerabilities or attacks.
-
-**Hardware**: No specific hardware requirements.
-
-**Eclipse ThreadX**: No specific Eclipse ThreadX requirements. Consider logging Eclipse ThreadX API return codes to look for specific problems with lower-level protocols that might indicate problems. Examples include TLS alert causes and TCP failures.
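As one way to capture return codes for remote analysis, the following sketch wraps a NetX send call and forwards failures to a hypothetical `cloud_log_error()` hook supplied by your logging library or cloud client SDK.

```c
#include "nx_api.h"

/* Hypothetical application hook that forwards log records to your cloud
   service's client SDK for remote storage and analysis. */
extern void cloud_log_error(const char *subsystem, unsigned int error_code);

UINT tcp_send_with_logging(NX_TCP_SOCKET *socket, NX_PACKET *packet, ULONG wait_option)
{
    UINT status = nx_tcp_socket_send(socket, packet, wait_option);

    if (status != NX_SUCCESS)
    {
        /* Record the NetX return code so failure patterns can be analyzed remotely. */
        cloud_log_error("nx_tcp_socket_send", status);
    }

    return status;
}
```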
-
-**Application**: Use logging libraries and your cloud service's client SDK to push error logs to the cloud. In the cloud, logs can be stored and analyzed safely without using valuable device storage space. Integration with [Microsoft Defender for IoT](https://azure.microsoft.com/services/azure-defender-for-iot/) provides this functionality and more. Microsoft Defender for IoT provides agentless monitoring of devices in an IoT solution. Monitoring can be enhanced by including the [Microsoft Defender for IOT micro-agent for Eclipse ThreadX](../defender-for-iot/device-builders/iot-security-azure-rtos.md) on your device. For more information, see the [Runtime security monitoring and threat detection](#runtime-security-monitoring-and-threat-detection) recommendation.
-
-### Disable unused protocols and features
-
-RTOS and MCU-based applications typically perform a few dedicated functions. This is in sharp contrast to general-purpose computing machines running higher-level operating systems, such as Windows and Linux. These machines enable dozens or hundreds of protocols and features by default.
-
-When you design an RTOS MCU application, look closely at what networking protocols are required. Every protocol that's enabled represents a different avenue for attackers to gain a foothold within the device. If you don't need a feature or protocol, don't enable it.
-
-**Hardware**: No specific hardware requirements. If the platform allows unused peripherals and ports to be disabled, use that functionality to reduce your attack surface.
-
-**Eclipse ThreadX**: Eclipse ThreadX has a "disabled by default" philosophy. Only enable protocols and features that are required for your application. Resist the temptation to enable features "just in case."
-
-**Application**: When you design your application, try to reduce the feature set to the bare minimum. Fewer features make an application easier to analyze for security vulnerabilities. Fewer features also reduce your application attack surface.
-
-### Use all possible compiler and linker security features
-
-Modern compilers and linkers provide many options for more security at build time. When you build your application, use as many compiler- and linker-based options as possible. They'll improve your application with proven security mitigations. Some options might affect size, performance, or RTOS functionality. Be careful when you enable certain features.
-
-**Hardware**: No specific hardware requirements. Your hardware platform might support security features that can be enabled during the compiling or linking processes.
-
-**Eclipse ThreadX**: As an RTOS, some compiler-based security features might interfere with the real-time guarantees of Eclipse ThreadX. Consider your RTOS needs when you select compiler options and test them thoroughly.
-
-**Application**: If you use other development tools, consult your documentation for appropriate options. In general, the following guidelines should help you build a more secure configuration:
-
-- Enable maximum error and warning levels for all builds. Production code should compile and link cleanly with no errors or warnings.
-- Enable all runtime checking that's available. Examples include stack checking, buffer overflow detection, Address Space Layout Randomization (ASLR), and integer overflow detection.
-- Some tools and devices might provide options to place code in protected or read-only areas of memory. Make use of any available protection mechanisms to prevent an attacker from being able to run arbitrary code on your device. Making code read-only doesn't completely protect against arbitrary code execution, but it does help.
-
-### Make sure memory access alignment is correct
-
-Some MCU devices permit unaligned memory access, but others don't. Consider the properties of your specific device when you develop your application.
-
-**Hardware**: Memory access alignment behavior is specific to your selected device.
-
-**Eclipse ThreadX**: For processors that do *not* support unaligned access, ensure that the macro `NX_CRYPTO_DISABLE_UNALIGNED_ACCESS` is defined. Failure to do so results in possible CPU faults during certain cryptographic operations.
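A minimal illustration follows; where the definition lives (a user configuration header or a compiler `-D` option) depends on your build setup.

```c
/* For CPUs that fault on unaligned access, make sure NetX Crypto avoids it. */
#define NX_CRYPTO_DISABLE_UNALIGNED_ACCESS
```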
-
-**Application**: In any memory operation like copy or move, consider the memory alignment behavior of your hardware platform.
-
-### Runtime security monitoring and threat detection
-
-Connected IoT devices might not have the necessary resources to implement all security features locally. With connection to the cloud, you can use remote security options to improve the security of your application. These options don't add significant overhead to the embedded device.
-
-**Hardware**: No specific hardware features required other than a network interface.
-
-**Eclipse ThreadX**: Eclipse ThreadX supports [Microsoft Defender for IoT](https://azure.microsoft.com/services/azure-defender-for-iot/).
-
-**Application**: The [Microsoft Defender for IOT micro-agent for Eclipse ThreadX](../defender-for-iot/device-builders/iot-security-azure-rtos.md) provides a comprehensive security solution for Eclipse ThreadX devices. The module provides security services via a small software agent that's built into your device's firmware and comes as part of Eclipse ThreadX. The service includes detection of malicious network activities, device behavior baselining based on custom alerts, and recommendations that will help to improve the security hygiene of your devices. Whether you're using Eclipse ThreadX in combination with Azure Sphere or not, the Microsoft Defender for IoT micro-agent provides an extra layer of security that's built into the RTOS by default.
-
-## Eclipse ThreadX IoT application security checklist
-
-The previous sections detailed specific design considerations with descriptions of the necessary hardware, operating system, and application requirements to help mitigate security threats. This section provides a basic checklist of security-related issues to consider when you design and implement IoT applications with Eclipse ThreadX.
-
-This short list of measures is meant as a complement to, not a replacement for, the more detailed discussion in previous sections. You must perform a comprehensive analysis of the physical and cybersecurity threats posed by the environment your device will be deployed into. You also need to carefully consider and rigorously implement measures to mitigate those threats. The goal is to provide the highest possible level of security for your device.
-
-### Security measures to take
-
-- Always use a hardware source of entropy (CRNG, TRNG based in hardware). Eclipse ThreadX uses a macro (`NX_RAND`) that allows you to define your random function.
-- Always supply a real-time clock for calendar date and time to check certificate expiration.
-- Use CRLs to validate certificate status. With Eclipse ThreadX TLS, a CRL is retrieved by the application and passed via a callback to the TLS implementation. For more information, see the [NetX Duo secure TLS user guide](https://github.com/eclipse-threadx/rtos-docs/blob/main/rtos-docs/netx-duo/netx-duo-secure-tls/chapter1.md).
-- Use the X.509 "Key Usage" extension when possible to check for certificate acceptable uses. In Eclipse ThreadX, the use of a callback to access the X.509 extension information is required.
-- Use X.509 policies in your certificates that are consistent with the services to which your device will connect. An example is ExtendedKeyUsage.
-- Use approved cipher suites in the Eclipse ThreadX Crypto library:
-
- - Supplied examples provide the required cipher suites to be compatible with TLS RFCs, but stronger cipher suites might be more suitable. Cipher suites include multiple ciphers for different TLS operations, so choose carefully. For example, using Elliptic-Curve Diffie-Hellman Ephemeral (ECDHE) might be preferable to RSA for key exchange, but the benefits can be lost if the cipher suite also uses RC4 for application data. Make sure every cipher in a cipher suite meets your security needs.
- - Remove cipher suites that aren't needed. Doing so saves space and provides extra protection against attack.
- - Use hardware drivers when applicable. Eclipse ThreadX provides hardware cryptography drivers for select platforms. For more information, see the [NetX Duo crypto documentation](https://github.com/eclipse-threadx/rtos-docs/blob/main/rtos-docs/netx-duo/netx-duo-crypto/chapter1.md).
-
-- Favor ephemeral public-key algorithms like ECDHE over static algorithms like classic RSA when possible. Ephemeral key exchange provides forward secrecy. TLS 1.3 *only* supports ephemeral cipher modes, so moving to TLS 1.3 when possible satisfies this goal.
-- Make use of memory checking functionality like compiler and third-party memory checking tools and libraries like ThreadX stack checking.
-- Scrutinize all input data for length/buffer overflow conditions. Be suspicious of any data that comes from outside a functional block like the device, thread, and even each function or method. Check it thoroughly with application logic. Some of the easiest vulnerabilities to exploit come from unchecked input data causing buffer overflows.
-- Make sure code builds cleanly. All warnings and errors should be accounted for and scrutinized for vulnerabilities.
-- Use static code analysis tools to determine if there are any errors in logic or pointer arithmetic. All errors can be potential vulnerabilities.
-- Research fuzz testing, also known as "fuzzing," for your application. Fuzzing is a security-focused process where message parsing for incoming data is subjected to large quantities of random or semi-random data. The purpose is to observe the behavior when invalid data is processed. It's based on techniques used by hackers to discover buffer overflow and other errors that might be used in an exploit to attack a system.
-- Perform code walk-through audits to look for confusing logic and other errors. If you can't understand a piece of code, it's possible that code contains vulnerabilities.
-- Use an MPU or MMU when available and overhead is acceptable. An MPU or MMU helps to prevent code from executing from RAM and threads from accessing memory outside their own memory space. Use ThreadX Modules to isolate application threads from each other to prevent access across memory boundaries.
-- Use watchdogs to prevent runaway code and to make attacks more difficult. They limit the window during which an attack can be executed.
-- Consider safety and security certified code. Using certified code and certifying your own applications subjects your application to higher scrutiny and increases the likelihood of discovering vulnerabilities before the application is deployed. Formal certification might not be required for your device. Following the rigorous testing and review processes required for certification can provide enormous benefit.
-
-### Security measures to avoid
-
-- Don't use the standard C-library `rand()` function because it doesn't provide cryptographic randomness. Consult your hardware documentation for a proper source of cryptographic entropy.
-- Don't hard-code private keys or credentials like certificates, passwords, or usernames in your application. To provide a higher level of security, update private keys regularly. The actual schedule depends on several factors. Also, hard-coded values might be readable in memory or even in transit over a network if the firmware image isn't encrypted. The actual mechanism for updating keys and certificates depends on your application and the PKI being used.
-- Don't use self-signed device certificates. Instead, use a proper PKI for device identification. Some exceptions might apply, but this rule is for most organizations and systems.
-- Don't use any TLS extensions that aren't needed. Eclipse ThreadX TLS disables many features by default. Only enable features you need.
-- Don't try to implement "security by obscurity." It's *not secure*. The industry is plagued with examples where a developer tried to be clever by obscuring or hiding code or algorithms. Obscuring your code or secret information like keys or passwords might prevent some intruders, but it won't stop a dedicated attacker. Obscured code provides a false sense of security.
-- Don't leave unnecessary functionality enabled or unused network or hardware ports open. If your application doesn't need a feature, disable it. Don't fall into the trap of leaving a TCP port open just in case. When more ports are left open, it raises the risk that an exploit will go undetected. The interaction between different features can introduce new vulnerabilities.
-- Don't leave debugging enabled in production code. If an attacker can plug in a JTAG debugger and dump the contents of RAM on your device, not much can be done to secure your application. Leaving a debugging port open is like leaving your front door open with your valuables lying in plain sight. Don't do it.
-- Don't allow buffer overflows in your application. Many remote attacks start with a buffer overflow that's used to probe the contents of memory or inject malicious code to be executed. The best defense is to write defensive code. Double-check any input that comes from, or is derived from, sources outside the device like the network stack, display or GUI interface, and external interrupts. Handle the error gracefully. Use compiler, linker, and runtime system tools to detect and mitigate overflow problems.
-- Don't put network packets on local thread stacks where an overflow can affect return addresses. This practice can lead to return-oriented programming vulnerabilities.
-- Don't put buffers in program stacks. Allocate them statically whenever possible.
-- Don't use dynamic memory and heap operations when possible. Heap overflows can be problematic because the layout of dynamically allocated memory, for example, from functions like `malloc()`, is difficult to predict. Static buffers can be more easily managed and protected.
-- Don't embed function pointers in data packets where overflow can overwrite function pointers.
-- Don't try to implement your own cryptography. Accepted cryptographic routines like elliptic curve cryptography (ECC) and AES were developed by experts in cryptography. These routines went through rigorous analysis over many years to prove their security. It's unlikely that any algorithm you develop on your own will have the security required to protect sensitive communications and data.
-- Don't implement roll-your-own cryptography schemes. Simply using AES doesn't mean your application is secure. Protocols like TLS use various methods to mitigate well-known attacks, for example:
-
- - Known plain-text attacks, which use known unencrypted data to derive information about encrypted data.
- - Padding oracles, which use modified cryptographic padding to gain access to secret data.
- - Predictable secrets, which can be used to break encryption.
-
- Whenever possible, try to use accepted security protocols like TLS when you secure your application.
-
-## Recommended security resources
-
-- [Zero Trust: Cyber security for IoT](https://azure.microsoft.com/mediahandler/files/resourcefiles/zero-trust-cybersecurity-for-the-internet-of-things/Zero%20Trust%20Security%20Whitepaper_4.30_3pm.pdf) provides an overview of Microsoft's approach to security across all aspects of an IoT ecosystem, with an emphasis on devices.
-- [IoT Security Maturity Model](https://www.iiconsortium.org/smm.htm) proposes a standard set of security domains, subdomains, and practices and an iterative process you can use to understand, target, and implement security measures important for your device. This set of standards is directed to all levels of IoT stakeholders and provides a process framework for considering security in the context of a component's interactions in an IoT system.
-- [Seven properties of highly secured devices](https://www.microsoft.com/research/publication/seven-properties-2nd-edition/), published by Microsoft Research, provides an overview of security properties that must be addressed to produce highly secure devices. The seven properties are hardware root of trust, defense in depth, small trusted computing base, dynamic compartments, passwordless authentication, error reporting, and renewable security. These properties are applicable to many embedded devices, depending on cost constraints, target application, and environment.
-- [PSA Certified 10 security goals explained](https://www.psacertified.org/blog/psa-certified-10-security-goals-explained/) discusses the Arm Platform Security Architecture (PSA). It provides a standardized framework for building secure embedded devices by using Arm TrustZone technology. Microcontroller manufacturers can certify designs with the Arm PSA Certified program, giving a level of confidence about the security of applications built on Arm technologies.
-- [Common Criteria](https://www.commoncriteriaportal.org/) is an international agreement that provides standardized guidelines and an authorized laboratory program to evaluate products for IT security. Certification provides a level of confidence in the security posture of applications using devices that were evaluated by using the program guidelines.
-- [Security Evaluation Standard for IoT Platforms (SESIP)](https://globalplatform.org/sesip/) is a standardized methodology for evaluating the security of connected IoT products and components.
-- [FIPS 140-2/3](https://csrc.nist.gov/publications/detail/fips/140/3/final) is a US government program that standardizes cryptographic algorithms and implementations used in US government and military applications. Along with documented standards, certified laboratories provide FIPS certification to guarantee specific cryptographic implementations adhere to regulations.
load-balancer Manage Admin State How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/manage-admin-state-how-to.md
Previously updated : 05/30/2024 Last updated : 10/25/2024
You can use the Azure portal, Azure PowerShell, or Azure CLI to manage the admin
# [Azure portal](#tab/azureportal)
-- Access to the Azure portal using [https://preview.portal.azure.com].
+- Access to the Azure portal.
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/)
-- Self-registration of the feature name **SLBAllowAdminStateChangeForConnectionDraining** in your subscription. For information on registering the feature in your subscription, see [Register preview feature doc](../azure-resource-manager/management/preview-features.md).
- An existing resource group for all resources.
- Two or more existing [Virtual Machines](/azure/virtual-machines/windows/quick-create-portal).
- An existing [standard load balancer](quickstart-load-balancer-standard-internal-portal.md) in the same subscription and virtual network as the virtual machines.
- The load balancer should have a backend pool with health probes and load balancing rules attached.
-> [!IMPORTANT]
-> This feature is supported via Azure Portal Preview. To use this feature in Azure Portal, make sure you are using the [Azure Portal Preview link](https://preview.portal.azure.com)
# [Azure PowerShell](#tab/azurepowershell)
- Access to the Azure portal.
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/)
-- Self-registration of the feature name **SLBAllowAdminStateChangeForConnectionDraining** in your subscription. For information on registering the feature in your subscription, see [Register preview feature doc](../azure-resource-manager/management/preview-features.md).
- An existing resource group for all resources.
- Existing [Virtual Machines](/azure/virtual-machines/windows/quick-create-powershell).
- An existing [standard load balancer](quickstart-load-balancer-standard-internal-powershell.md) in the same subscription and virtual network as the virtual machine.
You can use the Azure portal, Azure PowerShell, or Azure CLI to manage the admin
- Access to the Azure portal.
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/)
-- Self-registration of the feature name **SLBAllowAdminStateChangeForConnectionDraining** in your subscription. For information on registering the feature in your subscription, see [Register preview feature doc](../azure-resource-manager/management/preview-features.md).
- An existing resource group for all resources.
- Existing [Virtual Machines](/azure/virtual-machines/windows/quick-create-cli).
- An existing [standard load balancer](quickstart-load-balancer-standard-internal-cli.md) in the same subscription and virtual network as the virtual machine.
load-balancer Skus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/skus.md
To compare and understand the differences between Basic and Standard SKU, see th
| **Scenario** | Equipped for load-balancing network layer traffic when high performance and ultra-low latency is needed. Routes traffic within and across regions, and to availability zones for high resiliency. | Equipped for small-scale applications that don't need high availability or redundancy. Not compatible with availability zones. | | **Backend type** | IP based, NIC based | NIC based | | **Protocol** | TCP, UDP | TCP, UDP |
-| **Backend pool endpoints** | Any virtual machines or virtual machine scale sets in a single virtual network | Virtual machines in a single availability set or virtual machine scale set |
+| **Backend pool endpoints** | Any virtual machines or virtual machine scale sets in a single virtual network. This includes usage of a single availability set. | Virtual machines in a single availability set or virtual machine scale set. |
| **[Health probes](./load-balancer-custom-probe-overview.md#probe-protocol)** | TCP, HTTP, HTTPS | TCP, HTTP | | **[Health probe down behavior](./load-balancer-custom-probe-overview.md#probe-down-behavior)** | TCP connections stay alive on an instance probe down __and__ on all probes down. | TCP connections stay alive on an instance probe down. All TCP connections end when all probes are down. | | **Availability Zones** | Zone-redundant, zonal, or non-zonal frontend IP configurations can be used for inbound and outbound traffic | Not available |
logic-apps Create Single Tenant Workflows Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-single-tenant-workflows-visual-studio-code.md
Before you can create your logic app, create a local project so that you can man
![Screenshot shows Explorer pane with project folder, workflow folder, and workflow.json file.](./media/create-single-tenant-workflows-visual-studio-code/local-project-created.png)
- [!INCLUDE [Visual Studio Code - logic app project structure](../../includes/logic-apps-single-tenant-project-structure-visual-studio-code.md)]
+ [!INCLUDE [Visual Studio Code - logic app project structure](includes/logic-apps-single-tenant-project-structure-visual-studio-code.md)]
> [!NOTE] >
logic-apps Devops Deployment Single Tenant Azure Logic Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/devops-deployment-single-tenant-azure-logic-apps.md
Title: DevOps deployment for single-tenant Azure Logic Apps
-description: Learn about DevOps deployment for single-tenant Azure Logic Apps.
+ Title: DevOps deployment for Standard logic apps
+description: Learn about DevOps deployment for Standard logic apps in single-tenant Azure Logic Apps.
ms.suite: integration Previously updated : 01/04/2024
-# Customer intent: As a developer, I want to learn about DevOps deployment support for single-tenant Azure Logic Apps.
Last updated : 10/23/2024
+# Customer intent: As a developer, I want to learn about DevOps deployment support for Standard logic apps in single-tenant Azure Logic Apps.
-# DevOps deployment for single-tenant Azure Logic Apps
+# DevOps deployment for Standard logic apps in single-tenant Azure Logic Apps
[!INCLUDE [logic-apps-sku-standard](../../includes/logic-apps-sku-standard.md)] With the trend towards distributed and native cloud apps, organizations are dealing with more distributed components across more environments. To maintain control and consistency, you can automate your environments and deploy more components faster and more confidently by using DevOps tools and processes.
-This article provides an introduction and overview about the current continuous integration and continuous deployment (CI/CD) experience for single-tenant Azure Logic Apps.
+This article provides an introduction and overview about the current continuous integration and continuous deployment (CI/CD) experience for Standard logic app workflows in single-tenant Azure Logic Apps.
<a name="single-tenant-versus-multi-tenant"></a> ## Single-tenant versus multi-tenant
-In the *multi-tenant* Azure Logic Apps, resource deployment is based on Azure Resource Manager templates (ARM templates), which combine and handle resource provisioning for both logic apps and infrastructure. In *single-tenant* Azure Logic Apps, deployment becomes easier because you can separate resource provisioning between apps and infrastructure.
+In the *multi-tenant* Azure Logic Apps, resource deployment is based on Azure Resource Manager templates (ARM templates), which combine and handle resource provisioning for both your Consumption logic app resources and infrastructure. In *single-tenant* Azure Logic Apps, deployment becomes easier because you can separate resource provisioning between Standard logic app resources and infrastructure.
-When you create logic apps using the **Logic App (Standard)** resource type, your workflows are powered by the redesigned single-tenant Azure Logic Apps runtime. This runtime uses the [Azure Functions extensibility model](../azure-functions/functions-bindings-register.md) extensibility and is [hosted as an extension on the Azure Functions runtime](https://techcommunity.microsoft.com/t5/integrations-on-azure/azure-logic-apps-running-anywhere-runtime-deep-dive/ba-p/1835564). This design provides portability, flexibility, and more performance for your logic apps plus other capabilities and benefits inherited from the Azure Functions platform and Azure App Service ecosystem.
+When you create a Standard logic app resource, workflows are powered by the redesigned single-tenant Azure Logic Apps runtime. This runtime uses the [Azure Functions extensibility model](../azure-functions/functions-bindings-register.md) and is [hosted as an extension on the Azure Functions runtime](https://techcommunity.microsoft.com/t5/integrations-on-azure/azure-logic-apps-running-anywhere-runtime-deep-dive/ba-p/1835564). This design provides portability, flexibility, and more performance for Standard logic apps plus other capabilities and benefits inherited from the Azure Functions platform and Azure App Service ecosystem.
-For example, you can package the redesigned containerized runtime and workflows together as part of your logic app. You can use generic steps or tasks that build, assemble, and zip your logic app resources into ready-to-deploy artifacts. To deploy your apps, copy the artifacts to the host environment and then start your apps to run your workflows. Or, integrate your artifacts into deployment pipelines using the tools and processes that you already know and use. For example, if your scenario requires containers, you can containerize your logic apps and integrate them into your existing pipelines.
+For example, you can package the redesigned containerized runtime and workflows together as part of your Standard logic app. You can use generic steps or tasks that build, assemble, and zip your logic app resources into ready-to-deploy artifacts. To deploy Standard logic apps, copy the artifacts to the host environment, and then start your apps to run your workflows. Or, integrate your artifacts into deployment pipelines using the tools and processes that you already know and use. For example, if your scenario requires containers, you can containerize Standard logic apps and integrate them into your existing pipelines.
To set up and deploy your infrastructure resources, such as virtual networks and connectivity, you can continue using ARM templates and separately provision those resources along with other processes and pipelines that you use for those purposes.
Single-tenant Azure Logic Apps inherits many capabilities and benefits from the
### Local development and testing
-When you use Visual Studio Code with the Azure Logic Apps (Standard) extension, you can locally develop, build, and run single-tenant based logic app workflows in your development environment without having to deploy to Azure. If your scenario requires containers, you can create and deploy through [Azure Arc enabled Logic Apps](azure-arc-enabled-logic-apps-overview.md).
+When you use Visual Studio Code with the **Azure Logic Apps (Standard)** extension, you can locally develop, build, and run Standard logic app workflows in your development environment without having to deploy to Azure. If your scenario requires containers, you can create and deploy through [Azure Arc enabled Logic Apps](azure-arc-enabled-logic-apps-overview.md).
This capability is a major improvement and provides a substantial benefit compared to the multi-tenant model, which requires you to develop against an existing and running resource in Azure.
This capability is a major improvement and provides a substantial benefit compar
### Separate concerns
-The single-tenant model gives you the capability to separate the concerns between app and the underlying infrastructure. For example, you can develop, build, zip, and deploy your app separately as an immutable artifact to different environments. Logic app workflows typically have "application code" that you update more often than the underlying infrastructure. By separating these layers, you can focus more on building out your logic app's workflow and spend less on your effort to deploy the required resources across multiple environments.
+The single-tenant model gives you the capability to separate the concerns between your logic app and the underlying infrastructure. For example, you can develop, build, zip, and deploy your app separately as an immutable artifact to different environments. Logic app workflows typically have "application code" that you update more often than the underlying infrastructure. By separating these layers, you can focus more on building out your logic app's workflow and spend less effort deploying the required resources across multiple environments.
![Conceptual diagram showing separate deployment pipelines for apps and infrastructure.](./media/devops-deployment-single-tenant/deployment-pipelines-logic-apps.png)
The single-tenant model gives you the capability to separate the concerns betwee
### Logic app resource structure ### Logic app project structure <a name="deployment-containers"></a>
In single-tenant Azure Logic Apps, you can call and reference your environment v
## Managed connectors and built-in operations
-The Azure Logic Apps ecosystem provides [hundreds of Microsoft-managed connectors](/connectors/connector-reference/connector-reference-logicapps-connectors) and built-in operations as part of a constantly growing collection that you can use in single-tenant Azure Logic Apps. The way that Microsoft maintains these connectors and built-in operations stays mostly the same in single-tenant Azure Logic Apps.
+The Azure Logic Apps ecosystem provides [over 1,000 Microsoft-managed and Azure-hosted connectors](/connectors/connector-reference/connector-reference-logicapps-connectors) and [built-in operations](/azure/logic-apps/connectors/built-in/reference) as part of a constantly growing collection that you can use in single-tenant Azure Logic Apps. The way that Microsoft maintains managed connectors stays mostly the same in single-tenant Azure Logic Apps as in multi-tenant Azure Logic Apps.
-The most significant improvement is that the single-tenant service makes more popular managed connectors also available as built-in operations. For example, you can use built-in operations for Azure Service Bus, Azure Event Hubs, SQL, and others. Meanwhile, the managed connector versions are still available and continue to work.
+The most significant improvement is that the single-tenant service makes more popular managed connectors available as built-in operations. For example, you can use built-in operations for Azure Service Bus, Azure Event Hubs, SQL, and many others. Meanwhile, the managed connector versions are still available and continue to work.
-The connections that you create using built-in operations are called built-in connections, or *service provider connections*. Built-in operations and their connections run locally in the same process that runs your workflows. Both are hosted on the redesigned Logic Apps runtime. In contrast, managed connections, or API connections, are created and run separately as Azure resources, which you deploy using ARM templates. As a result, built-in operations and their connections provide better performance due to their proximity to your workflows. This design also works well with deployment pipelines because the service provider connections are packaged into the same build artifact.
+The connections that you create using Azure service-based built-in operations are called built-in connections, or *service provider-based connections*. Built-in operations and their connections run locally in the same process that runs your workflows. Both are hosted on the redesigned Azure Logic Apps runtime. In contrast, managed connections, or API connections, are created and run separately as Azure resources, which you deploy using ARM templates. As a result, built-in operations and their connections provide better performance due to their proximity to your workflows. This design also works well with deployment pipelines because the service provider connections are packaged into the same build artifact.
-In Visual Studio Code, when you use the designer to develop or make changes to your workflows, the Logic Apps engine automatically generates any necessary connection metadata in your project's **connections.json** file. The following sections describe the three kinds of connections that you can create in your workflows. Each connection type has a different JSON structure, which is important to understand because endpoints change when you move between environments.
+In Visual Studio Code, when you use the designer to develop or make changes to your workflows, the single-tenant Azure Logic Apps engine automatically generates any necessary connection metadata in your project's **connections.json** file. The following sections describe the three kinds of connections that you can create in your workflows. Each connection type has a different JSON structure, which is important to understand because endpoints change when you move between environments.
<a name="service-provider-connections"></a>
When you use a built-in operation for a service such as Azure Service Bus or Azu
> You can then capture dynamically generated infrastructure values, such as connection endpoints, storage strings, and more. > For more information, see [Application types for the Microsoft identity platform](/entra/identity-platform/v2-app-types).
-In your logic app project, each workflow has a workflow.json file that contains the workflow's underlying JSON definition. This workflow definition then references the necessary connection strings in your project's connections.json file.
+In your Standard logic app project, each workflow has a **workflow.json** file that contains the workflow's underlying JSON definition. This workflow definition then references the necessary connection strings in your project's **connections.json** file.
-The following example shows how the service provider connection for a built-in Service Bus operation appears in your project's connections.json file:
+The following example shows how the service provider connection for an Azure Service Bus built-in operation appears in your project's **connections.json** file:
```json "serviceProviderConnections": {
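For orientation, a minimal sketch of a complete **serviceProviderConnections** entry might look like the following example. The connection name, the app setting that holds the connection string, and the display name are placeholders, not values taken from a specific project:

```json
"serviceProviderConnections": {
  "serviceBus": {
    "parameterValues": {
      "connectionString": "@appsetting('serviceBus_connectionString')"
    },
    "serviceProvider": {
      "id": "/serviceProviders/serviceBus"
    },
    "displayName": "<your-connection-name>"
  }
}
```

Because the connection string resolves from an app setting, you can supply a different value in each environment without changing the file itself.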
The following example shows how the service provider connection for a built-in S
When you use a managed connector for the first time in your workflow, you're prompted to create a managed API connection for the target service or system and authenticate your identity. These connectors are managed by the shared connectors ecosystem in Azure. The API connections exist and run as separate resources in Azure.
-In Visual Studio Code, while you continue to create and develop your workflow using the designer, the Logic Apps engine automatically creates the necessary resources in Azure for the managed connectors in your workflow. The engine automatically adds these connection resources to the Azure resource group that you designed to contain your logic app.
+In Visual Studio Code, while you continue to create and develop your workflow using the designer, the single-tenant Azure Logic Apps engine automatically creates the necessary resources in Azure for the managed connectors in your workflow. The engine automatically adds these connection resources to the Azure resource group that you designate to contain your logic app.
-The following example shows how an API connection for the managed Service Bus connector appears in your project's connections.json file:
+The following example shows how an API connection for the Azure Service Bus managed connector appears in your project's **connections.json** file:
```json "managedApiConnections": {
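As a rough guide, the entry's shape mirrors the SQL example shown later in this article. A Service Bus entry might look like the following sketch, where the subscription ID, resource group, connection runtime URL, and app setting name are placeholders:

```json
"managedApiConnections": {
  "servicebus": {
    "api": {
      "id": "/subscriptions/<subscription-ID>/providers/Microsoft.Web/locations/<region>/managedApis/servicebus"
    },
    "connection": {
      "id": "/subscriptions/<subscription-ID>/resourceGroups/<resource-group>/providers/Microsoft.Web/connections/servicebus-1"
    },
    "connectionRuntimeUrl": "<connection-runtime-URL>",
    "authentication": {
      "type": "Raw",
      "scheme": "Key",
      "parameter": "@appsetting('servicebus-connectionKey')"
    }
  }
}
```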
The following example shows how an API connection for the managed Service Bus co
### Azure Functions connections
-To call functions created and hosted in Azure Functions, you use the built-in Azure Functions operation. Connection metadata for Azure Functions calls is different from other built-in-connections. This metadata is stored in your logic app project's connections.json file, but looks different:
+To call functions created and hosted in Azure Functions, you use the Azure Functions built-in operation. Connection metadata for Azure Functions calls is different from other built-in-connections. This metadata is stored in your logic app project's **connections.json** file, but looks different:
```json "functionConnections": {
To call functions created and hosted in Azure Functions, you use the built-in Az
## Authentication
-In single-tenant Azure Logic Apps, the hosting model for logic app workflows is a single tenant where your workloads benefit from more isolation than in the multi-tenant model. Plus, the single-tenant Azure Logic Apps runtime is portable, which means you can run your workflows in other environments, for example, locally in Visual Studio Code. Still, this design requires a way for logic apps to authenticate their identity so they can access the managed connector ecosystem in Azure. Your apps also need the correct permissions to run operations when using managed connections.
+In single-tenant Azure Logic Apps, the hosting model for logic app workflows is a single Microsoft Entra tenant where your workloads benefit from more isolation than in the multi-tenant model. Plus, the single-tenant Azure Logic Apps runtime is portable, which means you can run your workflows in other environments, for example, locally in Visual Studio Code. Still, this design requires a way for logic apps to authenticate their identity so they can access the managed connector ecosystem in Azure. Your apps also need the correct permissions to run operations when using managed connections.
By default, each single-tenant based logic app has an automatically enabled system-assigned managed identity. This identity differs from the authentication credentials or connection string used for creating a connection. At runtime, your logic app uses this identity to authenticate its connections through Azure access policies. If you disable this identity, connections won't work at runtime.
-The following sections provide more information about the authentication types that you can use to authenticate managed connections, based on where your logic app runs. For each managed connection, your logic app project's connections.json file has an `authentication` object that specifies the authentication type that your logic app can use to authenticate that managed connection.
+The following sections provide more information about the authentication types that you can use to authenticate managed connections, based on where your logic app runs. For each managed connection, your logic app project's **connections.json** file has an **`authentication`** object that specifies the authentication type that your logic app can use to authenticate that managed connection.
### Managed identity
-For a logic app that is hosted and run in Azure, a [managed identity](create-managed-service-identity.md) is the default and recommended authentication type to use for authenticating managed connections that are hosted and run in Azure. In your logic app project's connections.json file, the managed connection has an `authentication` object that specifies `ManagedServiceIdentity` as the authentication type:
+For a logic app that is hosted and run in Azure, a [managed identity](create-managed-service-identity.md) is the default and recommended authentication type to use for authenticating managed connections that are hosted and run in Azure. In your logic app project's **connections.json** file, the managed connection has an **`authentication`** object that specifies **`ManagedServiceIdentity`** as the authentication type:
```json "authentication": {
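The complete object in this format is short. Based on the Azure portal format shown later in this article, it reduces to the following sketch:

```json
"authentication": {
  "type": "ManagedServiceIdentity"
}
```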
For a logic app that is hosted and run in Azure, a [managed identity](create-man
### Raw
-For logic apps that run in your local development environment using Visual Studio Code, raw authentication keys are used for authenticating managed connections that are hosted and run in Azure. These keys are designed for development use only, not production, and have a 7-day expiration. In your logic app project's connections.json file, the managed connection has an `authentication` object specifies the following the authentication information:
+For logic apps that run in your local development environment using Visual Studio Code, raw authentication keys are used for authenticating managed connections that are hosted and run in Azure. These keys are designed for development use only, not production, and have a 7-day expiration. In your logic app project's **connections.json** file, the managed connection has an **`authentication`** object that specifies the following authentication information:
```json "authentication": {
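Based on the Visual Studio Code format for the SQL connection shown later in this article, a filled-in **Raw** authentication object might look like the following sketch; the app setting name is a placeholder for the setting that stores your connection key:

```json
"authentication": {
  "type": "Raw",
  "scheme": "Key",
  "parameter": "@appsetting('<connection-name>-connectionKey')"
}
```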
logic-apps Edit App Settings Host Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/edit-app-settings-host-settings.md
For more information about setting up your logic apps for deployment, see the fo
## Visual Studio Code project structure <a name="reference-local-settings-json"></a>
logic-apps Set Up Devops Deployment Single Tenant Azure Logic Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/set-up-devops-deployment-single-tenant-azure-logic-apps.md
Title: Set up DevOps for Standard logic apps
-description: How to set up DevOps deployment for Standard logic app workflows in single-tenant Azure Logic Apps.
+description: How to set up DevOps deployment for Standard logic apps in single-tenant Azure Logic Apps.
ms.suite: integration Previously updated : 06/12/2024-
-# Customer intent: As a developer, I want to automate deployment for workflows hosted in single-tenant Azure Logic Apps by using DevOps tools and processes.
+ai-usage: ai-assisted
Last updated : 10/24/2024
+# Customer intent: As a developer, I want to automate deployment for Standard logic apps hosted in single-tenant Azure Logic Apps by using DevOps tools and processes.
-# Set up DevOps deployment for Standard logic app workflows in single-tenant Azure Logic Apps
+# Set up DevOps deployment for Standard logic apps in single-tenant Azure Logic Apps
[!INCLUDE [logic-apps-sku-standard](../../includes/logic-apps-sku-standard.md)]
-This article shows how to deploy a Standard logic app project to single-tenant Azure Logic Apps from Visual Studio Code to your infrastructure by using DevOps tools and processes. Based on whether you prefer GitHub or Azure DevOps for deployment, choose the path and tools that work best for your scenario. You can use the included samples that contain example logic app projects plus examples for Azure deployment using either GitHub or Azure DevOps. For more information about DevOps for single-tenant, review [DevOps deployment overview for single-tenant Azure Logic Apps](devops-deployment-single-tenant-azure-logic-apps.md).
+This guide primarily shows how to set up deployment from a Standard logic app project in Visual Studio Code to your infrastructure by using DevOps tools and processes. If your Standard logic app exists in the Azure portal instead, you can download your logic app's artifact files for use with DevOps deployment. Based on whether you want to use GitHub or Azure DevOps, you then choose the path and tools that work best for your deployment scenario.
+
+If you don't have a Standard logic app, you can still follow this guide using the linked sample Standard logic app projects plus examples for deployment to Azure through GitHub or Azure DevOps. For more information, review [DevOps deployment overview for single-tenant Azure Logic Apps](devops-deployment-single-tenant-azure-logic-apps.md).
## Prerequisites - An Azure account with an active subscription. If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). -- A Standard logic app project created with [Visual Studio Code and the Azure Logic Apps (Standard) extension](create-single-tenant-workflows-visual-studio-code.md#prerequisites).
+- [Visual Studio Code, which is free, the Azure Logic Apps (Standard) extension for Visual Studio Code, and other related prerequisites](create-single-tenant-workflows-visual-studio-code.md#prerequisites).
+
+- The Standard logic app to use with your DevOps tools and processes.
- If you haven't already set up your logic app project or infrastructure, you can use the included sample projects to deploy an example app and infrastructure, based on the source and deployment options you prefer to use. For more information about these sample projects and the resources included to run the example logic app, review [Deploy your infrastructure](#deploy-infrastructure).
+ You can either download the artifact files for your Standard logic app resource from the Azure portal, or you can use a Standard logic app project created with [Visual Studio Code and the Azure Logic Apps (Standard) extension for Visual Studio Code](create-single-tenant-workflows-visual-studio-code.md#prerequisites).
-- If you want to deploy to Azure, you need an existing **Logic App (Standard)** resource created in Azure. To quickly create an empty logic app resource, review [Create single-tenant based logic app workflows - Portal](create-single-tenant-workflows-azure-portal.md).
+ - **Portal**: The downloaded zip file contains Standard logic app artifact files, such as **workflow.json**, **connections.json**, **host.json**, and **local.settings.json**. See [Download Standard logic app artifact files from portal](#download-artifacts).
+
+ - **Visual Studio Code**: You need an empty Standard logic app resource in the Azure portal for your deployment destination. To quickly create an empty Standard logic app resource, review [Create single-tenant based logic app workflows - Portal](create-single-tenant-workflows-azure-portal.md).
+
+   If you don't have an existing logic app or infrastructure, you can use the linked sample Standard logic app projects to deploy an example logic app and infrastructure, based on whether you want to use GitHub or Azure DevOps. For more information about the included sample projects and resources to run the example logic app, review [Deploy infrastructure resources](#deploy-infrastructure).
<a name="deploy-infrastructure"></a> ## Deploy infrastructure resources
-If you haven't already set up a logic app project or infrastructure, you can use the following sample projects to deploy an example app and infrastructure, based on the source and deployment options you prefer to use:
+To try the DevOps deployment experience without prior Standard logic app or infrastructure setup, use the following sample projects so you can set up deployment for an example Standard logic app and infrastructure, based on whether you want to use GitHub or Azure DevOps:
- [GitHub sample for single-tenant Azure Logic Apps](https://github.com/Azure/logicapps/tree/master/github-sample)
- This sample includes an example logic app project for single-tenant Azure Logic Apps plus examples for Azure deployment and GitHub Actions.
+ This sample includes an example Standard logic app project plus examples for Azure deployment and GitHub Actions.
- [Azure DevOps sample for single-tenant Azure Logic Apps](https://github.com/Azure/logicapps/tree/master/azure-devops-sample)
- This sample includes an example logic app project for single-tenant Azure Logic Apps plus examples for Azure deployment and Azure Pipelines.
-
-Both samples include the following resources that a logic app uses to run.
+ This sample includes an example Standard logic app project plus examples for Azure deployment and Azure Pipelines.
+
+Both samples include the following resources that a Standard logic app uses to run:
| Resource name | Required | Description | ||-|-|
-| Logic App (Standard) | Yes | This Azure resource contains the workflows that run in single-tenant Azure Logic Apps. |
-| Functions Premium or App Service hosting plan | Yes | This Azure resource specifies the hosting resources to use for running your logic app, such as compute, processing, storage, networking, and so on. <p><p>**Important**: In the current experience, the **Logic App (Standard)** resource requires the [**Workflow Standard** hosting plan](logic-apps-pricing.md#standard-pricing), which is based on the Functions Premium hosting plan. |
+| Standard logic app | Yes | This Azure resource contains the workflows that run in single-tenant Azure Logic Apps. <br><br>**Important**: In your logic app project, each workflow has a **workflow.json** file that contains the workflow definition, which includes the trigger and action definitions. |
+| API connections | Yes, if API connections exist | These Azure resources define any managed API connections that your workflows use to run managed connector operations, such as Office 365, SharePoint, and so on. <br><br>**Important**: In your logic app project, the **connections.json** file contains metadata, endpoints, and keys for any managed API connections and Azure functions that your workflows use. To use different connections and functions in each environment, make sure that you parameterize the **connections.json** file and update the endpoints. <br><br>For more information, review [API connection resources and access policies](#api-connection-resources). |
+| Functions Premium or App Service hosting plan | Yes | This Azure resource specifies the hosting resources to use for running your logic app, such as compute, processing, storage, networking, and so on. <br><br>**Important**: In the current experience, the Standard logic app resource requires the [**Workflow Standard** hosting plan](logic-apps-pricing.md#standard-pricing), which is based on the Azure Functions Premium hosting plan. |
| Azure storage account | Yes, for both stateful and stateless workflows | This Azure resource stores the metadata, keys for access control, state, inputs, outputs, run history, and other information about your workflows. | | Application Insights | Optional | This Azure resource provides monitoring capabilities for your workflows. |
-| API connections | Optional, if none exist | These Azure resources define any managed API connections that your workflows use to run managed connector operations, such as Office 365, SharePoint, and so on. <p><p>**Important**: In your logic app project, the **connections.json** file contains metadata, endpoints, and keys for any managed API connections and Azure functions that your workflows use. To use different connections and functions in each environment, make sure that you parameterize the **connections.json** file and update the endpoints. <p><p>For more information, review [API connection resources and access policies](#api-connection-resources). |
| Azure Resource Manager (ARM) template | Optional | This Azure resource defines a baseline infrastructure deployment that you can reuse or [export](../azure-resource-manager/templates/template-tutorial-export-template.md). |
-||||
<a name="api-connection-resources"></a> ## API connection resources and access policies
-In single-tenant Azure Logic Apps, every managed or API connection resource in your workflows requires an associated access policy. This policy needs your logic app's identity to provide the correct permissions for accessing the managed connector infrastructure. The included sample projects include an ARM template that includes all the necessary infrastructure resources, including these access policies.
+In single-tenant Azure Logic Apps, every managed API connection resource in your workflow requires an associated access policy. This policy requires your logic app's identity so that the identity has the correct permissions to access the managed connector infrastructure. The sample projects include an ARM template that defines all the necessary infrastructure resources, including these access policies.
+
+For example, the following diagram shows the dependencies between a Standard logic app project and infrastructure resources:
+
+![Conceptual diagram shows infrastructure dependencies for Standard logic app project in the single-tenant Azure Logic Apps model.](./media/set-up-devops-deployment-single-tenant-azure-logic-apps/infrastructure-dependencies.png)
+
+<a name="download-artifacts"></a>
+
+## Download Standard logic app artifacts from portal
+
+If your Standard logic app is in the Azure portal, you can download a zip file that contains your logic app's artifact files, including **workflow.json**, **connections.json**, **host.json**, and **local.settings.json**.
-The following diagram shows the dependencies between your logic app project and infrastructure resources:
+1. In the [Azure portal](https://portal.azure.com), find and open your Standard logic app resource.
-![Conceptual diagram showing infrastructure dependencies for a logic app project in the single-tenant Azure Logic Apps model.](./media/set-up-devops-deployment-single-tenant-azure-logic-apps/infrastructure-dependencies.png)
+1. On the logic app menu, select **Overview**.
+
+1. On the **Overview** toolbar, select **Download app content**. In the confirmation box that appears, select **Download**.
+
+1. When the prompt appears, select **Save as**, browse to the local folder that you want, and select **Save** to save the zip file.
+
+1. Extract the zip file.
+
+1. In Visual Studio Code, open the folder that contains the unzipped files.
+
+ When you open the folder, Visual Studio Code automatically creates a [workspace](https://code.visualstudio.com/docs/editor/workspaces).
+
+1. Edit the folder's contents to include only the folders and files required for deployment using DevOps.
+
+1. When you finish, save your changes.
<a name="deploy-logic-app-resources"></a>
-## Deploy logic app resources (zip deploy)
+## Build and deploy logic app (zip deploy)
-After you push your logic app project to your source repository, you can set up build and release pipelines either inside or outside Azure that deploy logic apps to infrastructure.
+You can set up build and release pipelines either inside or outside Azure that deploy Standard logic apps to your infrastructure.
### Build your project
-To set up a build pipeline based on your logic app project type, complete the corresponding actions in the following table:
+1. Push your Standard logic app project and artifact files to your source repository, for example, either GitHub or Azure DevOps.
+
+1. Set up a build pipeline based on your logic app project type by completing the following corresponding actions:
+
+ | Project type | Description and steps |
+ |--|--|
+   | NuGet-based | The NuGet-based project structure is based on the .NET Framework. To build these projects, make sure to follow the build steps for .NET Standard. For more information, review the documentation for [Create a NuGet package using MSBuild](/nuget/create-packages/creating-a-package-msbuild). |
+ | Bundle-based | The extension bundle-based project isn't language-specific and doesn't require any language-specific build steps. |
-| Project type | Description and steps |
-|--|--|
-| Nuget-based | The NuGet-based project structure is based on the .NET Framework. To build these projects, make sure to follow the build steps for .NET Standard. For more information, review the documentation for [Create a NuGet package using MSBuild](/nuget/create-packages/creating-a-package-msbuild). |
-| Bundle-based | The extension bundle-based project isn't language-specific and doesn't require any language-specific build steps. You can use any method to zip your project files. <br><br>**Important**: Make sure that your .zip file contains the actual build artifacts, including all workflow folders, configuration files such as host.json, connections.json, and any other related files. |
+1. Zip your project files using any method that you want.
-### Before release to Azure
+ > [!IMPORTANT]
+ >
+ > Make sure that your zip file contains your project's actual build artifacts at the root level,
+ > including all workflow folders, configuration files such as **host.json**, **connections.json**,
+   > **local.settings.json**, and any other related files. Don't add any extra folders, and don't put any
+   > artifacts into folders that don't already exist in your project structure.
+ >
+ > For example, the following list shows an example **MyBuildArtifacts.zip** file structure:
+ >
+ > ```
+ > MyStatefulWorkflow1-Folder
+ > MyStatefulWorkflow2-Folder
+ > connections.json
+ > host.json
+ > local.settings.json
+ > ```
+
+### Before you release to Azure
The managed API connections inside your logic app project's **connections.json** file are created specifically for local use in Visual Studio Code. Before you can release your project artifacts from Visual Studio Code to Azure, you have to update these artifacts. To use the managed API connections in Azure, you have to update their authentication methods so that they're in the correct format to use in Azure. #### Update authentication type
-For each managed API connection that uses authentication, you have to update the **authentication** object from the local format in Visual Studio Code to the Azure portal format, as shown by the first and second code examples, respectively:
+For each managed API connection that uses authentication, you have to update the **`authentication`** object from the local format in Visual Studio Code to the Azure portal format, as shown by the first and second code examples, respectively:
**Visual Studio Code format**
For each managed API connection that uses authentication, you have to update the
"managedApiConnections": { "sql": { "api": {
- "id": "/subscriptions/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/providers/Microsoft.Web/locations/westus/managedApis/sql"
- },
- "connection": {
- "id": "/subscriptions/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/resourceGroups/ase/providers/Microsoft.Web/connections/sql-8"
- },
- "connectionRuntimeUrl": "https://xxxxxxxxxxxxxx.01.common.logic-westus.azure-apihub.net/apim/sql/xxxxxxxxxxxxxxxxxxxxxxxxx/",
- "authentication": {
- "type": "Raw",
- "scheme": "Key",
- "parameter": "@appsetting('sql-connectionKey')"
+ "id": "/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/providers/Microsoft.Web/locations/westus/managedApis/sql"
+ },
+ "connection": {
+ "id": "/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/ase/providers/Microsoft.Web/connections/sql-2"
+ },
+ "connectionRuntimeUrl": "https://xxxxxxxxxxxxxx.01.common.logic-westus.azure-apihub.net/apim/sql/xxxxxxxxxxxxxxxxxxxxxxxxx/",
+ "authentication": {
+ "type": "Raw",
+ "scheme": "Key",
+ "parameter": "@appsetting('sql-connectionKey')"
+ }
} } }
For each managed API connection that uses authentication, you have to update the
"managedApiConnections": { "sql": { "api": {
- "id": "/subscriptions/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/providers/Microsoft.Web/locations/westus/managedApis/sql"
- },
- "connection": {
- "id": "/subscriptions/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/resourceGroups/ase/providers/Microsoft.Web/connections/sql-8"
- },
- "connectionRuntimeUrl": "https://xxxxxxxxxxxxxx.01.common.logic-westus.azure-apihub.net/apim/sql/xxxxxxxxxxxxxxxxxxxxxxxxx/",
- "authentication": {
- "type": "ManagedServiceIdentity",
+ "id": "/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/providers/Microsoft.Web/locations/westus/managedApis/sql"
+ },
+ "connection": {
+ "id": "/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/ase/providers/Microsoft.Web/connections/sql-2"
+ },
+ "connectionRuntimeUrl": "https://xxxxxxxxxxxxxx.01.common.logic-westus.azure-apihub.net/apim/sql/xxxxxxxxxxxxxxxxxxxxxxxxx/",
+ "authentication": {
+        "type": "ManagedServiceIdentity"
+ }
} } }
For each managed API connection that uses authentication, you have to update the
#### Create API connections as needed
-If you're deploying your logic app workflow to an Azure region or subscription different from your local development environment, you must also make sure to create these managed API connections before deployment. Azure Resource Manager template (ARM template) deployment is the easiest way to create managed API connections.
+If you're deploying your Standard logic app to an Azure region or subscription different from your local development environment, you must also make sure to create these managed API connections before deployment. Azure Resource Manager template (ARM template) deployment is the easiest way to create managed API connections.
The following example shows a SQL managed API connection resource definition in an ARM template:
The following example shows a SQL managed API connection resource definition in
"properties": { "displayName": "sqltestconnector", "api": {
- "id": "/subscriptions/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/providers/Microsoft.Web/locations/{Azure-region-location}/managedApis/sql"
+ "id": "/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/providers/Microsoft.Web/locations/{Azure-region-location}/managedApis/sql"
}, "parameterValues": { "authType": "windows",
The following example shows a SQL managed API connection resource definition in
} ```
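If it helps to see the wider context, the following sketch shows one way that this kind of connection resource definition can sit inside a minimal, deployable ARM template. The parameter name is illustrative, and the connector-specific values under **`properties`**, such as **`parameterValues`**, still come from the lookup described next:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "location": {
      "type": "string",
      "defaultValue": "[resourceGroup().location]"
    }
  },
  "resources": [
    {
      "type": "Microsoft.Web/connections",
      "apiVersion": "2016-06-01",
      "name": "sql",
      "location": "[parameters('location')]",
      "properties": {
        "displayName": "sqltestconnector",
        "api": {
          "id": "[concat('/subscriptions/', subscription().subscriptionId, '/providers/Microsoft.Web/locations/', parameters('location'), '/managedApis/sql')]"
        }
      }
    }
  ]
}
```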
-To find the values that you need to use in the **properties** object for completing the connection resource definition, you can use the following API for a specific connector:
+To find the required values for the **`properties`** object so that you can complete the connection resource definition, use the following API for a specific connector:
`GET https://management.azure.com/subscriptions/{Azure-subscription-ID}/providers/Microsoft.Web/locations/{Azure-region-location}/managedApis/{connector-name}?api-version=2016-06-01`
-In the response, find the **connectionParameters** object, which contains the necessary information to complete the resource definition for that specific connector. The following example shows an example resource definition for a SQL managed connection:
+In the response, find the **`connectionParameters`** object, which contains the necessary information to complete the resource definition for that specific connector. The following example shows an example resource definition for a SQL managed connection:
```json {
In the response, find the **connectionParameters** object, which contains the ne
} ```
-As an alternative, you can capture and review the network trace for when you create a connection using the workflow designer in Azure Logic Apps. Find the `PUT` call that's sent to the connector's managed API as previously described, and review the request body for all the necessary information.
+As an alternative, you can capture and review the network trace for when you create a connection using the workflow designer in Azure Logic Apps. Find the **`PUT`** call that is sent to the managed connector's API as previously described, and review the request body for all the necessary information.
#### On-premises data gateway resource definition
If your connection uses an on-premises data gateway resource, this resource defi
To set up a release pipeline that deploys to Azure, follow the associated steps for GitHub, Azure DevOps, or Azure CLI.
-> [!NOTE]
-> Azure Logic Apps currently doesn't support Azure deployment slots.
- #### [GitHub](#tab/github) For GitHub deployments, you can deploy your logic app by using [GitHub Actions](https://docs.github.com/actions), for example, the GitHub Actions in Azure Functions. This action requires that you pass through the following information: - The logic app name to use for deployment-- The zip file that contains your actual build artifacts, including all workflow folders, configuration files such as host.json, connections.json, and any other related files.
+- The zip file that contains your actual build artifacts, including all workflow folders, configuration files such as **host.json**, **connections.json**, **local.settings.json**, and any other related files.
- Your [publish profile](../azure-functions/functions-how-to-github-actions.md#generate-deployment-credentials), which is used for authentication ```yaml
For GitHub deployments, you can deploy your logic app by using [GitHub Actions](
publish-profile: 'MyLogicAppPublishProfile' ```
-For more information, review the [Continuous delivery by using GitHub Action](../azure-functions/functions-how-to-github-actions.md) documentation.
+For more information, review [Continuous delivery by using GitHub Action](../azure-functions/functions-how-to-github-actions.md).
-#### [Azure DevOps](#tab/azure-devops)
+#### [Azure DevOps](#tab/azure-devops)
For Azure DevOps deployments, you can deploy your logic app by using the [Azure Function App Deploy task](/azure/devops/pipelines/tasks/deploy/azure-function-app?view=azure-devops&preserve-view=true) in Azure Pipelines. This action requires that you pass through the following information: - The logic app name to use for deployment-- The zip file that contains your actual build artifacts, including all workflow folders, configuration files such as host.json, connections.json, and any other related files.
+- The zip file that contains your actual build artifacts, including all workflow folders, configuration files such as **host.json**, **connections.json**, **local.settings.json**, and any other related files.
- Your [publish profile](../azure-functions/functions-how-to-github-actions.md#generate-deployment-credentials), which is used for authentication ```yaml
For Azure DevOps deployments, you can deploy your logic app by using the [Azure
deploymentMethod: 'zipDeploy' ```
-For more information, review the [Deploy an Azure Function using Azure Pipelines](/azure/devops/pipelines/targets/azure-functions-windows) documentation.
+For more information, review [Deploy an Azure Function using Azure Pipelines](/azure/devops/pipelines/targets/azure-functions-windows).
#### [Azure CLI](#tab/azure-cli)
-If you use other deployment tools, you can deploy your single-tenant based logic app by using the Azure CLI. Before you start, you need to have the following items:
+If you use other deployment tools, you can deploy your Standard logic app by using the Azure CLI. Before you start, you need the following items:
- The latest Azure CLI version installed on your local computer.
- - If you don't have this extension, review the [installation guide for your operating system or platform](/cli/azure/install-azure-cli).
+ - If you're not sure that you have the latest version, [check your environment and CLI version](#check-environment-cli-version).
+
+  - If you don't have the Azure CLI, [install it by following the installation guide for your operating system or platform](/cli/azure/install-azure-cli).
- - If you're not sure that you have the latest version, follow the [steps to check your environment and CLI version](#check-environment-cli-version).
+ > [!NOTE]
+ >
+ > If you get a **pip** error when you try to install the Azure CLI, make sure that you
+ > have the standard package installer for Python (PIP). This package manager is written
+ > in Python and is used to install software packages. For more information, see
+ > [Check "pip" installation and version](#check-pip-version).
-- The *preview* single-tenant Azure Logic Apps (Standard) extension for Azure CLI.
+- The *preview* single-tenant **Azure Logic Apps (Standard)** extension for Azure CLI.
- If you don't have this extension, follow the [steps to install the extension](#install-logic-apps-cli-extension). Although single-tenant Azure Logic Apps is generally available, the single-tenant Azure Logic Apps extension for Azure CLI is still in preview.
+ If you don't have this extension, [install the extension](#install-logic-apps-cli-extension). Although the single-tenant Azure Logic Apps service is already generally available, the single-tenant Azure Logic Apps extension for Azure CLI is still in preview.
-- An Azure resource group to use for deploying your logic app.
+- An Azure resource group to use for deploying your logic app project to Azure.
- If you don't have this resource group, follow the [steps to create the resource group](#create-resource-group).
+ If you don't have this resource group, [create the resource group](#create-resource-group).
- An Azure storage account to use with your logic app for data and run history retention.
- If you don't have this storage account, follow the [steps to create a storage account](/cli/azure/storage/account#az-storage-account-create).
+ If you don't have this storage account, [create a storage account](/cli/azure/storage/account#az-storage-account-create).
+
+<a name="check-pip-version"></a>
+
+##### Check pip installation
+
+1. On a Windows or Mac operating system, open a command prompt, and enter the following command:
+
+ `pip --version`
+
+ - If you get a **pip** version, then **pip** is installed. Make sure that you have the most recent version by using the following command:
+
+     `python -m pip install --upgrade pip`
+
+ - If you get errors instead, then **pip** isn't installed or added to your **PATH** environment.
+
+1. To install **pip**, [follow the **pip** installation steps for your operating system](https://pip.pypa.io/en/latest/installation/).
<a name="check-environment-cli-version"></a> ##### Check environment and CLI version
-1. Sign in to the [Azure portal](https://portal.azure.com). In a terminal or command window, confirm that your subscription is active by running the command, [`az login`](/cli/azure/authenticate-azure-cli):
+1. Sign in to the [Azure portal](https://portal.azure.com). In a terminal or command window, confirm that your subscription is active by running the command, [**`az login`**](/cli/azure/authenticate-azure-cli):
- ```azurecli-interactive
+ ```azurecli
az login ```
-1. In a terminal or command window, check your version of the Azure CLI version by running the command, `az`, with the following required parameter:
+1. In the terminal or command window, check your Azure CLI version by running the command, **`az`**, with the following required parameter:
- ```azurecli-interactive
+ ```azurecli
az --version ``` 1. If you don't have the latest Azure CLI version, update your installation by following the [installation guide for your operating system or platform](/cli/azure/install-azure-cli).
- For more information about the latest version, review the [most recent release notes](/cli/azure/release-notes-azure-cli?tabs=azure-cli).
+ For more information about the latest version, review the [most recent release notes](/cli/azure/release-notes-azure-cli?tabs=azure-cli). For troubleshooting guidance, see the following resources:
+
+ - [Azure CLI GitHub issues](https://github.com/Azure/azure-cli/issues)
+ - [Azure CLI documentation](/cli/azure/)
<a name="install-logic-apps-cli-extension"></a> ##### Install Azure Logic Apps (Standard) extension for Azure CLI
-Currently, only the *preview* version for this extension is available. If you haven't previously installed this extension, run the command, `az extension add`, with the following required parameters:
+Currently, only the *preview* version for this extension is available. If you haven't installed this extension yet, run the command, **`az extension add`**, with the following required parameters:
-```azurecli-interactive
+```azurecli
az extension add --yes --source "https://aka.ms/logicapp-latest-py2.py3-none-any.whl" ``` To get the latest extension, which is version 0.1.2, run these commands to remove the existing extension and then install the latest version from the source:
-```azurecli-interactive
+```azurecli
az extension remove --name logicapp az extension add --yes --source "https://aka.ms/logicapp-latest-py2.py3-none-any.whl" ``` > [!NOTE]
+>
> If a new extension version is available, the current and later versions show a message. > While this extension is in preview, you can use the following command to upgrade to the > latest version without manually removing and installing again:
az extension add --yes --source "https://aka.ms/logicapp-latest-py2.py3-none-any
<a name="create-resource-group"></a>
-#### Create resource group
+##### Create resource group
-If you haven't already set up a resource group for your logic app, create the group by running the command, `az group create`. Unless you already set a default subscription for your Azure account, make sure to use the `--subscription` parameter with your subscription name or identifier. Otherwise, you don't have to use the `--subscription` parameter.
+If you don't have an existing Azure resource group to use for deployment, create the group by running the command, **`az group create`**. Unless you already set a default subscription for your Azure account, make sure to use the **`--subscription`** parameter with your subscription name or identifier. Otherwise, you don't have to use the **`--subscription`** parameter.
> [!TIP]
-> To set a default subscription, run the following command, and replace `MySubscription` with your subscription name or identifier.
+>
+> To set a default subscription, run the following command, and replace
+> **`MySubscription`** with your subscription name or identifier.
> > `az account set --subscription MySubscription`
-For example, the following command creates a resource group named `MyResourceGroupName` using the Azure subscription named `MySubscription` in the location `eastus`:
+For example, the following command creates a resource group named **`MyResourceGroupName`** using the Azure subscription named **`MySubscription`** in the location **`eastus`**:
```azurecli az group create --name MyResourceGroupName
az group create --name MyResourceGroupName
--location eastus ```
-If your resource group is successfully created, the output shows the `provisioningState` as `Succeeded`:
+If your resource group is successfully created, the output shows the **`provisioningState`** as **`Succeeded`**:
```output <...>
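For reference, the JSON that **`az group create`** returns looks roughly like the following trimmed sketch; the subscription ID is a placeholder:

```json
{
  "id": "/subscriptions/<subscription-ID>/resourceGroups/MyResourceGroupName",
  "location": "eastus",
  "managedBy": null,
  "name": "MyResourceGroupName",
  "properties": {
    "provisioningState": "Succeeded"
  },
  "tags": null,
  "type": "Microsoft.Resources/resourceGroups"
}
```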
If your resource group is successfully created, the output shows the `provisioni
##### Deploy logic app
-To deploy your zipped artifact to an Azure resource group, run the command, `az logicapp deployment`, with the following required parameters:
+Now, you can deploy your zipped artifacts to the Azure resource group that you created.
-> [!IMPORTANT]
-> Make sure that your zip file contains your project's artifacts at the root level. These artifacts include all workflow folders,
-> configuration files such as host.json, connections.json, and any other related files. Don't add any extra folders nor put any artifacts
-> into folders that don't already exist in your project structure. For example, this list shows an example MyBuildArtifacts.zip file structure:
->
-> ```output
-> MyStatefulWorkflow1-Folder
-> MyStatefulWorkflow2-Folder
-> connections.json
-> host.json
-> ```
-
-```azurecli-interactive
+Run the command, **`az logicapp deployment`**, with the following required parameters:
+
+```azurecli
az logicapp deployment source config-zip --name MyLogicAppName --resource-group MyResourceGroupName --subscription MySubscription --src MyBuildArtifact.zip
az logicapp deployment source config-zip --name MyLogicAppName
-### After release to Azure
+## After deployment to Azure
-Each API connection has access policies. After the zip deployment completes, you must open your logic app resource in the Azure portal, and create access policies for each API connection to set up permissions for the deployed logic app. The zip deployment doesn't create app settings for you. So, after deployment, you must create these app settings based on the **local.settings.json** file in your local Visual Studio Code project.
+Each API connection has access policies. After the zip deployment completes, you must open your Standard logic app resource in the Azure portal, and create access policies for each API connection to set up permissions for the deployed logic app. The zip deployment doesn't create app settings for you. After deployment, you must create these app settings based on the **local.settings.json** file in your logic app project.
-## Next steps
+## Related content
- [DevOps deployment for single-tenant Azure Logic Apps](devops-deployment-single-tenant-azure-logic-apps.md)
migrate Discovered Metadata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/discovered-metadata.md
ms. Previously updated : 02/24/2023- Last updated : 10/25/2024+ # Metadata discovered by Azure Migrate appliance
This article provides details of the metadata discovered by Azure Migrate applia
The [Azure Migrate appliance](migrate-appliance.md) is a lightweight appliance that the Azure Migrate: Discovery and assessment tool uses to discover servers running in your environment and send server configuration and performance metadata to Azure.
-Metadata discovered by the Azure Migrate appliance helps you to assess server readiness for migration to Azure, right-size servers and plans costs. Microsoft doesn't use this data in any license compliance audit.
+Metadata discovered by the Azure Migrate appliance helps you to assess server readiness for migration to Azure, right-size servers, and plan costs. Microsoft doesn't use this data in any license compliance audit.
## Collected metadata for VMware servers
-The appliance collects configuration, performance metadata, data about installed applications, roles and features (software inventory) and dependency data (if agentless dependency analysis is enabled) from servers running in your VMware environment.
+The appliance collects configuration, performance metadata, data about installed applications, roles, and features (software inventory) and dependency data (if agentless dependency analysis is enabled) from servers running in your VMware environment.
Here's the full list of server metadata that the appliance collects and sends to Azure:
NIC writes throughput (MB per second) | net.transmitted.average |Calculation fo
## Collected metadata for Hyper-V servers
-The appliance collects configuration, performance metadata, data about installed applications, roles and features (software inventory) and dependency data (if agentless dependency analysis is enabled) from servers running in your Hyper-V environment.
+The appliance collects configuration, performance metadata, data about installed applications, roles, and features (software inventory) and dependency data (if agentless dependency analysis is enabled) from servers running in your Hyper-V environment.
Here's the full list of server metadata that the appliance collects and sends to Azure.
Server type (Gen 1 or 2) | Msvm_VirtualSystemSettingData | VirtualSystemSubType
Server display name | Msvm_VirtualSystemSettingData | ElementName Server version | Msvm_ProcessorSettingData | VirtualQuantity Memory (bytes) | Msvm_MemorySettingData | VirtualQuantity
-Maximum memory that can be consumed by server | Msvm_MemorySettingData | Limit
+Maximum memory that the server can consume | Msvm_MemorySettingData | Limit
Dynamic memory enabled | Msvm_MemorySettingData | DynamicMemoryEnabled Operating system name/version/FQDN | Msvm_KvpExchangeComponent | GuestIntrinsicExchangeItems Name Data Server power status | Msvm_ComputerSystem | EnabledState
Hyper-V Virtual Network Adapter | Bytes Sent/Second | Calculation for server siz
## Collected data for Physical servers
-The appliance collects configuration, performance metadata, data about installed applications, roles and features (software inventory) and dependency data (if agentless [dependency analysis](concepts-dependency-visualization.md) is enabled) from physical servers or server running on other clouds like AWS, GCP, etc.
+The appliance collects configuration, performance metadata, data about installed applications, roles, and features (software inventory) and dependency data (if agentless [dependency analysis](concepts-dependency-visualization.md) is enabled) from physical servers or servers running on other clouds like AWS, GCP, etc.
### Windows server metadata
Here's the full list of Linux server metadata that the appliance collects and se
**Data** | **Commands** | FQDN | cat /proc/sys/kernel/hostname, hostname -f
-Processor core count | cat/proc/cpuinfo \| awk '/^processor/{print $3}' \| wc -l
+Processor core count | cat /proc/cpuinfo \| awk '/^processor/{print $3}' \| wc -l
Memory allocated | cat /proc/meminfo \| grep MemTotal \| awk '{printf "%.0f", $2/1024}' BIOS serial number | lshw \| grep "serial:" \| head -n1 \| awk '{print $2}' <br/> /usr/sbin/dmidecode -t 1 \| grep 'Serial' \| awk '{ $1="" ; $2=""; print}' BIOS GUID | cat /sys/class/dmi/id/product_uuid
Here's the Linux server performance data that the appliance collects and sends t
## Software inventory data
-The appliance collects data about installed applications, roles and features (software inventory) from servers running in VMware environment/Hyper-V environment/physical servers or servers running on other clouds like AWS, GCP etc.
+The appliance collects data about installed applications, roles, and features (software inventory) from servers running in a VMware or Hyper-V environment, from physical servers, or from servers running on other clouds like AWS, GCP, etc.
### Windows server applications data
nat-gateway Manage Nat Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/nat-gateway/manage-nat-gateway.md
To remove a NAT gateway from an existing subnet, complete the following steps.
1. Under **Settings**, select **Subnets**.
-1. Select **Disassociate** to remove the NAT gateway from the configured subnet.
+1. To remove the NAT gateway from **all** subnets, select **Disassociate**.
+2. To remove the NAT gateway from only one of multiple subnets, clear the checkbox next to that subnet and select **Save** (see the CLI sketch after these steps).
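If you manage subnets with the Azure CLI instead of the portal, here's a minimal sketch of the same single-subnet removal. The resource names are placeholders, and the sketch assumes the generic `--remove` argument can clear the subnet's `natGateway` association.

```azurecli
# Remove the NAT gateway association from a single subnet (names are placeholders).
az network vnet subnet update \
  --resource-group myResourceGroup \
  --vnet-name myVNet \
  --name mySubnet \
  --remove natGateway
```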
You can now associate the NAT gateway with a different subnet or virtual network in your subscription. To delete the NAT gateway resource, complete the following steps.
network-watcher Vnet Flow Logs Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/vnet-flow-logs-overview.md
Previously updated : 10/20/2024 Last updated : 10/25/2024 #CustomerIntent: As an Azure administrator, I want to learn about virtual network flow logs so that I can log my network traffic to analyze and optimize network performance.
The following table outlines the support scope of flow logs.
## Availability
-Virtual network flow logs are generally available in all Azure public regions.
+Virtual network flow logs are generally available in all Azure public regions and are currently in preview in Azure Government.
## Related content
operator-nexus Howto Configure Network Fabric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-configure-network-fabric.md
The following table specifies parameters used to create Network Fabric,
| resource-group | Name of the resource group | "NFResourceGroup" |True | | location | Operator-Nexus Azure region | "eastus" |True | | resource-name | Name of the FabricResource | NF-ResourceName |True |
-| nf-sku |Fabric SKU ID is the SKU of the ordered BoM version. | M4-A400-A100-C16-ab |True | String|
+| nf-sku |Fabric SKU ID is the SKU of the ordered BoM version. See [Network Fabric SKUs](./reference-operator-nexus-fabric-skus.md). | M4-A400-A100-C16-ab |True | String|
|nfc-id|Network Fabric Controller "ARM resource ID"|**$prefix**/NFCName|True | | |rackcount|Number of compute racks per fabric. Possible values are 2-8|8|True | |serverCountPerRack|Number of compute servers per rack. Possible values are 4, 8, 12 or 16|16|True |
operator-nexus Troubleshoot Hardware Validation Failure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/troubleshoot-hardware-validation-failure.md
Expanding `result_detail` for a given category shows detailed results.
} ```
+ * Firmware versions are logged as informational. The following component firmware versions are typically logged (depending on hardware model):
+ * BIOS
+ * iDRAC
+ * Complex Programmable Logic Device (CPLD)
+ * RAID Controller
+ * Backplane
+
+ * The HWV framework identifies problematic firmware versions and attempts to fix them automatically. The following example shows a successful iDRAC firmware fix (versions and task ID are illustrative only).
+
+ ```yaml
+ {
+ "system_info": {
+ "system_info_result": "Pass",
+ "result_detail": [
+ {
+ "field_name": "Integrated Dell Remote Access Controller - unsupported_firmware_check",
+ "comparison_result": "Pass",
+ "expected": "6.00.30.00 - unsupported_firmware",
+ "fetched": "7.10.30.00"
+ }
+ ],
+ "result_log": [
+ "Firmware autofix task /redfish/v1/TaskService/Tasks/JID_274085357727 completed"
+ ]
+ },
+ }
+ ```
+ ### Drive info category * Disk Checks Failure
Expanding `result_detail` for a given category shows detailed results.
} ```
+ * Allow-listed critical alarms and warning alarms are logged as informational starting with Nexus release 3.14.
+
+ ```yaml
+ {
+ "field_name": "LCLog_Warning_Alarms - Non-Failing",
+ "comparison_result": "Info",
+ "expected": "Warning Alarm",
+ "fetched": "104473 2024-07-26T16:05:19-05:00 The Embedded NIC 1 Port 1 network link is down."
+ }
+ ```
+ * To check LC logs in the BMC web UI: `BMC` -> `Maintenance` -> `Lifecycle Log`
Expanding `result_detail` for a given category shows detailed results.
* To troubleshoot a server power-on failure, attempt a flea drain. If the problem persists, engage the vendor.
+* Virtual Flea Drain
+ * HWV attempts a virtual flea drain for most failing checks. Flea drain attempts are logged under `health_info` -> `result_log`.
+
+ ```yaml
+ "result_log": [
+ "flea drain completed successfully",
+ ]
+ ```
+
+ * If the virtual flea drain fails, perform a physical flea drain as a first troubleshooting step.
+ * RAID cleanup failures * As part of RAID cleanup, the RAID controller configuration is reset. Dell server health check fails for RAID controller reset failure. A failed RAID cleanup action indicates an underlying hardware issue. The following example shows a failed RAID controller reset.
Expanding `result_detail` for a given category shows detailed results.
### Device login check * Device Login Check Considerations
- * The `device_login` check fails if the iDRAC isn't accessible or if the hardware validation plugin isn't able to sign-in.
+ * The `device_login` check fails if the iDRAC isn't reachable or if the hardware validation plugin isn't able to sign in.
+
+ ```yaml
+ {
+ "device_login": "Fail - Unreachable"
+ }
+ ```
```yaml {
- "device_login": "Fail"
+ "device_login": "Fail - Unauthorized"
} ```
operator-service-manager Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-service-manager/release-notes.md
Azure Operator Service Manager is a cloud orchestration service that enables aut
* Dependency Versions: Go/1.22.4 - Helm/3.15.2 ### Release Installation
-This release can be installed with as an update on top of release 2.0.2804-137. See [learn documentation](manage-network-function-operator.md) for more installation guidance.
+This release can be installed as an update on top of release 2.0.2804-144. See [learn documentation](manage-network-function-operator.md) for more installation guidance.
#### Bugfix Related Updates The following bug fixes, or other defect resolutions, are delivered with this release, for either Network Function Operator (NFO) or resource provider (RP) components.
reliability Reliability Event Grid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-event-grid.md
You can choose between two failover options, Microsoft-initiated failover and cu
Enable this functionality by updating the configuration for your topic or domain. Select **Regional**. :::image type="content" source="../event-grid/media/availability-zones-disaster-recovery/configuration-page.png" alt-text="Screenshot showing the Configuration page for an Event Grid custom topic.":::
-
+
+If you use a [non-paired region](cross-region-replication-azure.md#regions-with-availability-zones-and-no-region-pair), then regardless of the data residency configuration you select, your metadata will only be replicated within the region.
### Disaster recovery failover experience
security Encryption Atrest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/encryption-atrest.md
Software as a Service (SaaS) customers typically have encryption at rest enabled
### Encryption at rest for PaaS customers
-Platform as a Service (PaaS) customer's data typically resides in a storage service such as Blob Storage but may also be cached or stored in the application execution environment, such as a virtual machine. To see the encryption at rest options available to you, examine the [Data encryption models: supporting services table](encryption-models.md#supporting-services) for the storage and application platforms that you use.
+Platform as a Service (PaaS) customers' data typically resides in a storage service such as Blob Storage, but it may also be cached or stored in the application execution environment, such as a virtual machine. To see the encryption at rest options available to you, examine the [Data encryption models](encryption-models.md) for the storage and application platforms that you use.
### Encryption at rest for IaaS customers
Infrastructure as a Service (IaaS) customers can have a variety of services and
#### Encrypted storage
-Like PaaS, IaaS solutions can leverage other Azure services that store data encrypted at rest. In these cases, you can enable the Encryption at Rest support as provided by each consumed Azure service. The [Data encryption models: supporting services table](encryption-models.md#supporting-services) enumerates the major storage, services, and application platforms and the model of Encryption at Rest supported.
+Like PaaS, IaaS solutions can leverage other Azure services that store data encrypted at rest. In these cases, you can enable the Encryption at Rest support as provided by each consumed Azure service. The [Data encryption models](encryption-models.md) article enumerates the major storage, services, and application platforms and the model of Encryption at Rest supported.
#### Encrypted compute
security Encryption Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/encryption-models.md
When server-side encryption using customer-managed keys in customer-controlled h
- Significant setup, configuration, and ongoing maintenance costs - Increased dependency on network availability between the customer datacenter and Azure datacenters.
-## Supporting services
-
-The Azure services that support each encryption model:
-
-| Product, Feature, or Service | Server-Side Using Customer-Managed Key | Documentation |
-| | | |
-| **AI and Machine Learning** | | |
-| [Azure AI Search](/azure/search/) | Yes | |
-| [Azure AI services](/azure/cognitive-services/) | Yes, including Managed HSM | |
-| [Azure Machine Learning](/azure/machine-learning/) | Yes | |
-| [Content Moderator](/azure/cognitive-services/content-moderator/) | Yes, including Managed HSM | |
-| [Face](/azure/cognitive-services/face/) | Yes, including Managed HSM | |
-| [Language Understanding](/azure/cognitive-services/luis/) | Yes, including Managed HSM | |
-| [Azure OpenAI](/azure/ai-services/openai/) | Yes, including Managed HSM | |
-| [Personalizer](/azure/cognitive-services/personalizer/) | Yes, including Managed HSM | |
-| [QnA Maker](/azure/cognitive-services/qnamaker/) | Yes, including Managed HSM | |
-| [Speech Services](/azure/cognitive-services/speech-service/) | Yes, including Managed HSM | |
-| [Translator Text](/azure/cognitive-services/translator/) | Yes, including Managed HSM | |
-| [Power Platform](/power-platform/) | Yes, including Managed HSM | |
-| [Dataverse](/powerapps/maker/data-platform/) | Yes, including Managed HSM | |
-| [Dynamics 365](/dynamics365/) | Yes, including Managed HSM | |
-| **Analytics** | | |
-| [Azure Stream Analytics](/azure/stream-analytics/) | Yes\*\*, including Managed HSM | |
-| [Event Hubs](/azure/event-hubs/) | Yes | |
-| [Functions](/azure/azure-functions/) | Yes | |
-| [Azure Analysis Services](/azure/analysis-services/) | - | |
-| [Azure Data Catalog](/azure/data-catalog/) | - | |
-| [Azure HDInsight](/azure/hdinsight/) | Yes | |
-| [Azure Monitor Application Insights](/azure/azure-monitor/app/app-insights-overview) | Yes | |
-| [Azure Monitor Log Analytics](/azure/azure-monitor/logs/log-analytics-overview) | Yes, including Managed HSM | |
-| [Azure Data Explorer](/azure/data-explorer/) | Yes | |
-| [Azure Data Factory](/azure/data-factory/) | Yes, including Managed HSM | |
-| [Azure Data Lake Store](/azure/data-lake-store/) | Yes, RSA 2048-bit | |
-| **Containers** | | |
-| [Azure Kubernetes Service](/azure/aks/) | Yes, including Managed HSM | |
-| [Container Instances](/azure/container-instances/) | Yes | |
-| [Container Registry](/azure/container-registry/) | Yes | |
-| **Compute** | | |
-| [Virtual Machines](/azure/virtual-machines/) | Yes, including Managed HSM | |
-| [Virtual Machine Scale Set](/azure/virtual-machine-scale-sets/) | Yes, including Managed HSM | |
-| [SAP HANA](/azure/sap/large-instances/hana-overview-architecture) | Yes | |
-| [App Service](/azure/app-service/) | Yes\*\*, including Managed HSM | |
-| [Automation](/azure/automation/) | Yes | |
-| [Azure Functions](/azure/azure-functions/) | Yes\*\*, including Managed HSM | |
-| [Azure portal](/azure/azure-portal/) | Yes\*\*, including Managed HSM | |
-| [Azure VMware Solution](/azure/azure-vmware/) | Yes, including Managed HSM | |
-| [Logic Apps](/azure/logic-apps/) | Yes | |
-| [Azure-managed applications](/azure/azure-resource-manager/managed-applications/overview) | Yes\*\*, including Managed HSM | |
-| [Service Bus](/azure/service-bus-messaging/) | Yes | |
-| [Site Recovery](/azure/site-recovery/) | Yes | |
-| **Databases** | | |
-| [SQL Server on Virtual Machines](/azure/virtual-machines/windows/sql/) | Yes | |
-| [Azure SQL Database](/azure/azure-sql/database/) | Yes, RSA 3072-bit, including Managed HSM | |
-| [Azure SQL Managed Instance](/azure/azure-sql/managed-instance/) | Yes, RSA 3072-bit, including Managed HSM | |
-| [Azure Database for MariaDB](/azure/mariadb/) | - | |
-| [Azure Database for MySQL](/azure/mysql/) | Yes, including Managed HSM | |
-| [Azure Database for PostgreSQL](/azure/postgresql/) | Yes, including Managed HSM | |
-| [Azure Synapse Analytics (dedicated SQL pool (formerly SQL DW) only)](/azure/synapse-analytics/) | Yes, RSA 3072-bit, including Managed HSM | |
-| [SQL Server Stretch Database](/sql/sql-server/stretch-database/) | Yes, RSA 3072-bit | |
-| [Table Storage](/azure/storage/tables/) | Yes | |
-| [Azure Cosmos DB](/azure/cosmos-db/) | Yes, including Managed HSM | [Configure CMKs (Key Vault)](/azure/cosmos-db/how-to-setup-cmk) and [Configure CMKs (Managed HSM)](/azure/cosmos-db/how-to-setup-customer-managed-keys-mhsm) |
-| [Azure Databricks](/azure/databricks/) | Yes, including Managed HSM | |
-| [Azure Database Migration Service](/azure/dms/) | N/A\* | |
-| **Identity** | | |
-| [Microsoft Entra ID](/azure/active-directory/) | - | |
-| [Microsoft Entra Domain Services](/azure/active-directory-domain-services/) | Yes | |
-| **Integration** | | |
-| [Service Bus](/azure/service-bus-messaging/) | Yes | |
-| [Event Grid](/azure/event-grid/) | - | |
-| [API Management](/azure/api-management/) | - | |
-| **IoT Services** | | |
-| [IoT Hub](/azure/iot-hub/) | Yes | |
-| [IoT Hub Device Provisioning](/azure/iot-dps/) | Yes | |
-| **Management and Governance** | | |
-| [Azure Managed Grafana](/azure/managed-grafana/) | - | |
-| [Azure Site Recovery](/azure/site-recovery/) | - | |
-| [Azure Migrate](/azure/migrate/) | Yes | |
-| **Media** | | |
-| [Media Services](/azure/media-services/) | Yes | |
-| **Security** | | |
-| [Microsoft Defender for IoT](/azure/defender-for-iot/) | Yes | |
-| [Microsoft Sentinel](/azure/sentinel/) | Yes, including Managed HSM | |
-| **Storage** | | |
-| [Blob Storage](/azure/storage/blobs/) | Yes, including Managed HSM | |
-| [Premium Blob Storage](/azure/storage/blobs/) | Yes, including Managed HSM | |
-| [Disk Storage](/azure/virtual-machines/disks-types/) | Yes, including Managed HSM | |
-| [Ultra Disk Storage](/azure/virtual-machines/disks-types/) | Yes, including Managed HSM | |
-| [Managed Disk Storage](/azure/virtual-machines/disks-types/) | Yes, including Managed HSM | |
-| [File Storage](/azure/storage/files/) | Yes, including Managed HSM | |
-| [File Premium Storage](/azure/storage/files/) | Yes, including Managed HSM | |
-| [File Sync](/azure/storage/file-sync/file-sync-introduction) | Yes, including Managed HSM | |
-| [Queue Storage](/azure/storage/queues/) | Yes, including Managed HSM | |
-| [Data Lake Storage Gen2](/azure/storage/blobs/data-lake-storage-introduction/) | Yes, including Managed HSM | |
-| [Avere vFXT](/azure/avere-vfxt/) | - | |
-| [Azure Cache for Redis](/azure/azure-cache-for-redis/) | Yes\*\*\*, including Managed HSM | |
-| [Azure NetApp Files](/azure/azure-netapp-files/) | Yes, including Managed HSM | |
-| [Archive Storage](/azure/storage/blobs/archive-blob) | Yes | |
-| [StorSimple](/azure/storsimple/) | Yes | |
-| [Azure Backup](/azure/backup/) | Yes, including Managed HSM | |
-| [Data Box](/azure/databox/) | - | |
-| [Azure Stack Edge](/azure/databox-online/azure-stack-edge-overview/) | Yes | |
-| **Other** | | |
-| [Azure Data Manager for Energy](/azure/energy-data-services/overview-microsoft-energy-data-services) | Yes | |
+## Services supporting customer-managed keys (CMKs)
+
+Here are the services that support server-side encryption using customer-managed keys:
+
+| Product, Feature, or Service | Key Vault | Managed HSM | Documentation |
+| | | | |
+| **AI and Machine Learning** | | | |
+| [Azure AI Search](/azure/search/) | Yes | | |
+| [Azure AI services](/azure/cognitive-services/) | Yes | Yes | |
+| [Azure AI Studio](/azure/ai-studio) | Yes | | [CMKs for encryption](/azure/ai-studio/concepts/encryption-keys-portal) |
+| [Azure Machine Learning](/azure/machine-learning/) | Yes | | |
+| [Azure OpenAI](/azure/ai-services/openai/) | Yes | Yes | |
+| [Content Moderator](/azure/cognitive-services/content-moderator/) | Yes | Yes | |
+| [Dataverse](/powerapps/maker/data-platform/) | Yes | Yes | |
+| [Dynamics 365](/dynamics365/) | Yes | Yes | |
+| [Face](/azure/cognitive-services/face/) | Yes | Yes | |
+| [Language Understanding](/azure/cognitive-services/luis/) | Yes | Yes | |
+| [Personalizer](/azure/cognitive-services/personalizer/) | Yes | Yes | |
+| [Power Platform](/power-platform/) | Yes | Yes | |
+| [QnA Maker](/azure/cognitive-services/qnamaker/) | Yes | Yes | |
+| [Speech Services](/azure/cognitive-services/speech-service/) | Yes | Yes | |
+| [Translator Text](/azure/cognitive-services/translator/) | Yes | Yes | |
+| **Analytics** | | | |
+| [Azure Data Explorer](/azure/data-explorer/) | Yes | | |
+| [Azure Data Factory](/azure/data-factory/) | Yes | Yes | |
+| [Azure Data Lake Store](/azure/data-lake-store/) | Yes, RSA 2048-bit | | |
+| [Azure HDInsight](/azure/hdinsight/) | Yes | | |
+| [Azure Monitor Application Insights](/azure/azure-monitor/app/app-insights-overview) | Yes | | |
+| [Azure Monitor Log Analytics](/azure/azure-monitor/logs/log-analytics-overview) | Yes | Yes | |
+| [Azure Stream Analytics](/azure/stream-analytics/) | Yes\*\* | Yes | |
+| [Event Hubs](/azure/event-hubs/) | Yes | | |
+| [Functions](/azure/azure-functions/) | Yes | | |
+| [Microsoft Fabric](/fabric) | Yes | | [CMK encryption](/fabric/security/security-scenario#customer-managed-key-cmk-encryption-and-microsoft-fabric) |
+| [Power BI Embedded](/power-bi) | Yes | | [BYOK for Power BI](/power-bi/enterprise/service-encryption-byok) |
+| **Containers** | | | |
+| [App Configuration](/azure/azure-app-configuration/) | Yes | | [Use CMKs to encrypt App Configuration data](/azure/azure-app-configuration/concept-customer-managed-keys) |
+| [Azure Kubernetes Service](/azure/aks/) | Yes | Yes | |
+| [Azure Red Hat OpenShift](/azure/openshift/) | Yes | | [CMK encryption](/azure/openshift/howto-byok) |
+| [Container Instances](/azure/container-instances/) | Yes | | |
+| [Container Registry](/azure/container-registry/) | Yes | | |
+| **Compute** | | | |
+| [App Service](/azure/app-service/) | Yes\*\* | Yes | |
+| [Automation](/azure/automation/) | Yes | | |
+| [Azure Functions](/azure/azure-functions/) | Yes\*\* | Yes | |
+| [Azure portal](/azure/azure-portal/) | Yes\*\* | Yes | |
+| [Azure VMware Solution](/azure/azure-vmware/) | Yes | Yes | |
+| [Azure-managed applications](/azure/azure-resource-manager/managed-applications/overview) | Yes\*\* | Yes | |
+| [Batch](/azure/batch/) | Yes | | [Configure CMKs](/azure/batch/batch-customer-managed-key) |
+| [Logic Apps](/azure/logic-apps/) | Yes | | |
+| [SAP HANA](/azure/sap/large-instances/hana-overview-architecture) | Yes | | |
+| [Service Bus](/azure/service-bus-messaging/) | Yes | | |
+| [Site Recovery](/azure/site-recovery/) | Yes | | |
+| [Virtual Machine Scale Set](/azure/virtual-machine-scale-sets/) | Yes | Yes | |
+| [Virtual Machines](/azure/virtual-machines/) | Yes | Yes | |
+| **Databases** | | | |
+| [Azure Cosmos DB](/azure/cosmos-db/) | Yes | Yes | [Configure CMKs (Key Vault)](/azure/cosmos-db/how-to-setup-cmk) and [Configure CMKs (Managed HSM)](/azure/cosmos-db/how-to-setup-customer-managed-keys-mhsm) |
+| [Azure Database for MySQL](/azure/mysql/) | Yes | Yes | |
+| [Azure Database for PostgreSQL](/azure/postgresql/) | Yes | Yes | |
+| [Azure Database Migration Service](/azure/dms/) | N/A\* | | |
+| [Azure Databricks](/azure/databricks/) | Yes | Yes | |
+| [Azure Managed Instance for Apache Cassandra](/azure/managed-instance-apache-cassandra/) | Yes | | [CMKs](/azure/managed-instance-apache-cassandra/customer-managed-keys) |
+| [Azure SQL Database](/azure/azure-sql/database/) | Yes, RSA 3072-bit | Yes | |
+| [Azure SQL Managed Instance](/azure/azure-sql/managed-instance/) | Yes, RSA 3072-bit | Yes | |
+| [Azure Synapse Analytics (dedicated SQL pool (formerly SQL DW) only)](/azure/synapse-analytics/) | Yes, RSA 3072-bit | Yes | |
+| [SQL Server on Virtual Machines](/azure/virtual-machines/windows/sql/) | Yes | | |
+| [SQL Server Stretch Database](/sql/sql-server/stretch-database/) | Yes, RSA 3072-bit | | |
+| [Table Storage](/azure/storage/tables/) | Yes | | |
+| **Hybrid + multicloud** | | | |
+| [Azure Stack Edge](/azure/databox-online/) | Yes | | [Azure Stack Edge: Security baseline](/security/benchmark/azure/baselines/azure-stack-edge-security-baseline#dp-5-use-customer-managed-key-option-in-data-at-rest-encryption-when-required) |
+| **Identity** | | | |
+| [Microsoft Entra Domain Services](/azure/active-directory-domain-services/) | Yes | | |
+| **Integration** | | | |
+| [Azure Health Data Services](/azure/healthcare-apis/) | Yes | | [Configure CMKs for DICOM](/azure/healthcare-apis/dicom/configure-customer-managed-keys), [Configure CMKs for FHIR](/azure/healthcare-apis/fhir/configure-customer-managed-keys) |
+| [Service Bus](/azure/service-bus-messaging/) | Yes | | |
+| **IoT Services** | | | |
+| [IoT Hub](/azure/iot-hub/) | Yes | | |
+| [IoT Hub Device Provisioning](/azure/iot-dps/) | Yes | | |
+| **Management and Governance** | | | |
+| [Azure Migrate](/azure/migrate/) | Yes | | |
+| [Azure Monitor](/azure/azure-monitor) | Yes | | [CMKs](/azure/azure-monitor/logs/customer-managed-keys?tabs=portal) |
+| **Media** | | | |
+| [Media Services](/azure/media-services/) | Yes | | |
+| **Security** | | | |
+| [Microsoft Defender for Cloud](/azure/defender-for-cloud/) | Yes | | [Security baseline: CMKs](/security/benchmark/azure/baselines/microsoft-defender-for-cloud-security-baseline#dp-5-use-customer-managed-key-option-in-data-at-rest-encryption-when-required) |
+| [Microsoft Defender for IoT](/azure/defender-for-iot/) | Yes | | |
+| [Microsoft Sentinel](/azure/sentinel/) | Yes | Yes | |
+| **Storage** | | | |
+| [Archive Storage](/azure/storage/blobs/archive-blob) | Yes | | |
+| [Azure Backup](/azure/backup/) | Yes | Yes | |
+| [Azure Cache for Redis](/azure/azure-cache-for-redis/) | Yes\*\* | Yes | |
+| [Azure Managed Lustre](/azure/azure-managed-lustre/) | Yes | | [CMKs](/azure/azure-managed-lustre/customer-managed-encryption-keys) |
+| [Azure NetApp Files](/azure/azure-netapp-files/) | Yes | Yes | |
+| [Azure Stack Edge](/azure/databox-online/azure-stack-edge-overview/) | Yes | | |
+| [Blob Storage](/azure/storage/blobs/) | Yes | Yes | |
+| [Data Lake Storage Gen2](/azure/storage/blobs/data-lake-storage-introduction/) | Yes | Yes | |
+| [Disk Storage](/azure/virtual-machines/disks-types/) | Yes | Yes | |
+| [File Premium Storage](/azure/storage/files/) | Yes | Yes | |
+| [File Storage](/azure/storage/files/) | Yes | Yes | |
+| [File Sync](/azure/storage/file-sync/file-sync-introduction) | Yes | Yes | |
+| [Managed Disk Storage](/azure/virtual-machines/disks-types/) | Yes | Yes | |
+| [Premium Blob Storage](/azure/storage/blobs/) | Yes | Yes | |
+| [Queue Storage](/azure/storage/queues/) | Yes | Yes | |
+| [StorSimple](/azure/storsimple/) | Yes | | |
+| [Ultra Disk Storage](/azure/virtual-machines/disks-types/) | Yes | Yes | |
+| **Other** | | | |
+| [Azure Data Manager for Energy](/azure/energy-data-services/overview-microsoft-energy-data-services) | Yes | | |
\* This service doesn't persist data. Transient caches, if any, are encrypted with a Microsoft key.
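As a hedged illustration of how one of the services in this table is typically configured, the following Azure CLI sketch points an existing storage account (see the Blob Storage row) at a customer-managed key in Azure Key Vault. The account, resource group, vault, and key names are placeholders, and the account's managed identity is assumed to already have access to the key.

```azurecli
# Configure an existing storage account to use a customer-managed key in Key Vault.
# Assumes the account's managed identity already has wrap/unwrap access to the key.
az storage account update \
  --name mystorageaccount \
  --resource-group myResourceGroup \
  --encryption-key-source Microsoft.Keyvault \
  --encryption-key-vault https://mykeyvault.vault.azure.net \
  --encryption-key-name mykey
```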
site-recovery Azure To Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-support-matrix.md
SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.57](https://suppo
**Release** | **Mobility service version** | **Kernel version** | | | |
-SUSE Linux Enterprise Server 15 (SP1, SP2, SP3, SP4, SP5, SP6) | 9.63 | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br></br> 5.14.21-150500.33.63-azure:5 <br> 5.14.21-150500.33.66-azure:5 <br> 6.4.0-150600.6-azure:6 <br>6.4.0-150600.8.11-azure:6 <br> 6.4.0-150600.8.5-azure:6 <br> 6.4.0-150600.8.8-azure:6 |
-SUSE Linux Enterprise Server 15 (SP1, SP2, SP3, SP4, SP5) | 9.62 | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br></br> 5.14.21-150500.33.54-azure:5 <br> 5.14.21-150500.33.57-azure:5 <br> 5.14.21-150500.33.60-azure:5 |
-SUSE Linux Enterprise Server 15 (SP1, SP2, SP3, SP4, SP5) | [9.61](https://support.microsoft.com/topic/update-rollup-73-for-azure-site-recovery-d3845f1e-2454-4ae8-b058-c1fec6206698) | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br></br> 5.14.21-150500.33.37-azure <br> 5.14.21-150500.33.42-azure <br> 5.14.21-150500.33.48-azure:5 <br> 5.14.21-150500.33.51-azure:5 |
+SUSE Linux Enterprise Server 15 (SP1, SP2, SP3, SP4, SP5, SP6) | 9.63 | All [stock SUSE 15 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br></br> 5.14.21-150500.33.63-azure:5 <br> 5.14.21-150500.33.66-azure:5 <br> 6.4.0-150600.6-azure:6 <br>6.4.0-150600.8.11-azure:6 <br> 6.4.0-150600.8.5-azure:6 <br> 6.4.0-150600.8.8-azure:6 |
+SUSE Linux Enterprise Server 15 (SP1, SP2, SP3, SP4, SP5) | 9.62 | All [stock SUSE 15 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br></br> 5.14.21-150500.33.54-azure:5 <br> 5.14.21-150500.33.57-azure:5 <br> 5.14.21-150500.33.60-azure:5 |
+SUSE Linux Enterprise Server 15 (SP1, SP2, SP3, SP4, SP5) | [9.61](https://support.microsoft.com/topic/update-rollup-73-for-azure-site-recovery-d3845f1e-2454-4ae8-b058-c1fec6206698) | All [stock SUSE 15 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br></br> 5.14.21-150500.33.37-azure <br> 5.14.21-150500.33.42-azure <br> 5.14.21-150500.33.48-azure:5 <br> 5.14.21-150500.33.51-azure:5 |
SUSE Linux Enterprise Server 15 (SP1, SP2, SP3, SP4, SP5) | [9.60]() | By default, all [stock SUSE 15, SP1, SP2, SP3, SP4, SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br></br> 5.14.21-150500.33.29-azure <br> 5.14.21-150500.33.34-azure | SUSE Linux Enterprise Server 15 (SP1, SP2, SP3, SP4, SP5) | [9.57](https://support.microsoft.com/topic/e94901f6-7624-4bb4-8d43-12483d2e1d50) | By default, all [stock SUSE 15, SP1, SP2, SP3, SP4, SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br></br> 5.14.21-150400.14.72-azure:4 <br> 5.14.21-150500.33.23-azure:5 <br> 5.14.21-150500.33.26-azure:5 |
site-recovery Site Recovery Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-whats-new.md
For Site Recovery components, we support N-4 versions, where N is the latest rel
**Update** | **Unified Setup** | **Replication appliance / Configuration server** | **Mobility service agent** | **Site Recovery Provider** | **Recovery Services agent** | | | | |
-[Rollup 76](https://support.microsoft.com/topic/update-rollup-75-for-azure-site-recovery-4884b937-8976-454a-9b80-57e0200eb2ec) | 9.63.7187.1 | 9.63.7187.1 | 9.63.7187.1 | 5.24.0902.11 | 2.0.9938.0
-[Rollup 75](https://support.microsoft.com/topic/update-rollup-75-for-azure-site-recovery-4884b937-8976-454a-9b80-57e0200eb2ec) | 9.62.7172.1 | 9.62.7172.1 | 9.62.7172.1 | 5.24.0814.2 | 2.0.9932.0
-[Rollup 74](https://support.microsoft.com/topic/update-rollup-74-for-azure-site-recovery-584e3586-4c55-4cc2-8b1c-63038b6b4464) | 9.62.7096.1 | 9.62.7096.1 | 9.62.7096.1 | 5.24.0614.1 | 2.0.9919.0
+[Rollup 76](https://support.microsoft.com/en-us/topic/update-rollup-76-for-azure-site-recovery-6ca6833a-5b0f-4bdf-9946-41cd0aa8d6e4) | NA | NA | 9.63.7187.1 | 5.24.0902.11 | 2.0.9938.0
+[Rollup 75](https://support.microsoft.com/topic/update-rollup-75-for-azure-site-recovery-4884b937-8976-454a-9b80-57e0200eb2ec) | NA | NA | 9.62.7172.1 | 5.24.0814.2 | 2.0.9932.0
+[Rollup 74](https://support.microsoft.com/topic/update-rollup-74-for-azure-site-recovery-584e3586-4c55-4cc2-8b1c-63038b6b4464) | NA | NA | 9.62.7096.1 | 5.24.0614.1 | 2.0.9919.0
[Rollup 73](https://support.microsoft.com/topic/update-rollup-73-for-azure-site-recovery-d3845f1e-2454-4ae8-b058-c1fec6206698) | 9.61.7016.1 | 9.61.7016.1 | 9.61.7016.1 | 5.24.0317.5 | 2.0.9917.0 [Rollup 72](https://support.microsoft.com/topic/update-rollup-72-for-azure-site-recovery-kb5036010-aba602a9-8590-4afe-ac8a-599141ec99a5) | 9.60.6956.1 | NA | 9.60.6956.1 | 5.24.0117.5 | 2.0.9917.0 [Rollup 71](https://support.microsoft.com/topic/update-rollup-71-for-azure-site-recovery-kb5035688-4df258c7-7143-43e7-9aa5-afeef9c26e1a) | 9.59.6930.1 | NA | 9.59.6930.1 | NA | NA
For Site Recovery components, we support N-4 versions, where N is the latest rel
### Update Rollup 76
-Update [rollup 76](https://support.microsoft.com/topic/update-rollup-75-for-azure-site-recovery-4884b937-8976-454a-9b80-57e0200eb2ec) provides the following updates:
+Update [rollup 76](https://support.microsoft.com/topic/update-rollup-76-for-azure-site-recovery-6ca6833a-5b0f-4bdf-9946-41cd0aa8d6e4) provides the following updates:
**Update** | **Details** |
spring-apps How To Deploy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-deploy-powershell.md
The requirements for completing the steps in this article depend on your Azure s
* If you have multiple Azure subscriptions, choose the appropriate subscription in which the resources should be billed. Select a specific subscription by using the [Set-AzContext](/powershell/module/az.accounts/set-azcontext) cmdlet: ```azurepowershell-interactive
- Set-AzContext -SubscriptionId 00000000-0000-0000-0000-000000000000
+ Set-AzContext -SubscriptionId <subscription-ID>
``` ## Create a resource group
storage-actions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage-actions/overview.md
See these articles to learn how to define a storage task:
- [Create a storage task](storage-tasks/storage-task-create.md) - [Define storage task conditions and operations](storage-tasks/storage-task-conditions-operations-edit.md)-- [Properties, operators, and operations in storage task conditions](storage-tasks/storage-task-properties-operators-operations.md)
+- [Storage task conditions](storage-tasks/storage-task-conditions.md)
+- [Storage task operations](storage-tasks/storage-task-operations.md)
### Assign a storage task
storage-actions Storage Task Conditions Operations Edit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage-actions/storage-tasks/storage-task-conditions-operations-edit.md
You can use a visual editor to define the conditions and operations of a storage
An _operation_ is an action taken on each object that meets the conditions defined in the task. A _condition_ contains one or more conditional _clauses_. Each clause defines the relationship between a property and a value. To execute an operation defined in the storage task, the terms of that relationship must be met by each object.
-To learn more, see [Properties, operators, and operations in storage task conditions](storage-task-properties-operators-operations.md).
- ## Open the conditions editor Navigate to the storage task in the Azure portal and then under **Storage task management**, select **Conditions**.
To define a clause, choose a property, specify a value for that property, and th
#### Choose a property
-In the **Blob property** drop-down list, choose a property. See [Supported blob properties](storage-task-properties-operators-operations.md#supported-properties-in-a-clause).
+In the **Blob property** drop-down list, choose a property. See [Supported blob properties](storage-task-conditions.md#supported-properties).
The following example selects the **Blob name** property.
The following example selects the **Blob name** property.
#### Choose a value and operator
-In the **Property value** box, enter a value and in the **Operator** drop-down list, choose an operator. See [Supported Operators](storage-task-properties-operators-operations.md#supported-operators-in-a-clause).
+In the **Property value** box, enter a value and in the **Operator** drop-down list, choose an operator. See [Supported Operators](storage-task-conditions.md#supported-operators).
The following example specifies a value of `.log` along with the **Ends with** operator. This condition allows the operation defined in this storage task to execute only on blobs that have a `.log` file extension.
To add an operation, select **Add new operation**, and to remove an operation, s
#### Choose an operation
-In the **Operation** drop-down list, choose an operation. See [Supported operations](storage-task-properties-operators-operations.md#supported-operations).
+In the **Operation** drop-down list, choose an operation. See [Supported operations](storage-task-operations.md#supported-operations).
The following example selects the **Set blob tags** property.
storage-actions Storage Task Conditions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage-actions/storage-tasks/storage-task-conditions.md
+
+ Title: Storage task conditions
+
+description: Learn about the elements of a storage task condition.
+++++ Last updated : 10/24/2024++++
+# Storage task conditions
+
+A storage task contains a set of conditions and operations. This article describes the JSON format of a condition. Understanding that format is important if you plan to create a storage task by using a tool other than the Azure portal (for example, Azure PowerShell or the Azure CLI). This article also lists the properties and operators that you can use to compose the clauses of a condition.
+
+This article focuses on **conditions**. To learn more about **operations**, see [Storage task operations](storage-task-operations.md).
+
+> [!IMPORTANT]
+> Azure Storage Actions is currently in PREVIEW and is available in these [regions](../overview.md#supported-regions).
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+## Condition format
+
+A condition is a collection of one or more _clauses_. Each clause contains a _property_, a _value_, and an _operator_. When the storage task runs, it uses the operator to compare a property with a value to determine whether a clause is met by the target object. In a clause, the **operator** always appears first, followed by the **property** and then the **value**. The following image shows how each element is positioned in the expression.
+
+> [!div class="mx-imgBorder"]
+> ![Diagram that shows the format of a simple condition with an operator, property, and value.](../media/storage-tasks/storage-task-conditions/storage-task-conditions-condition-format-basic.png)
+
+The following clause allows operations only on Microsoft Word documents. This clause targets all documents that end with the file extension `.docx`. Therefore, the operator is `endsWith`, the property is `Name`, and the value is `.docx`.
+
+```json
+{
+ "condition": "[[[endsWith(Name, '.docx')]]"
+}
+```
+
+For a complete list of operator and property names, see the [Supported operators](#supported-operators) and [Supported properties](#supported-properties) section of this article.
+
+### Multiple clauses in a condition
+
+A condition can contain multiple clauses, separated by commas and combined with either the string `and` or the string `or`. The string `and` targets objects that meet the criteria in all clauses in the condition, while `or` targets objects that meet the criterion in any of the clauses in the condition. The following image shows the position of the `and` and `or` strings along with two clauses.
+
+> [!div class="mx-imgBorder"]
+> ![Diagram that shows the format of a condition that contains two clauses.](../media/storage-tasks/storage-task-conditions/storage-task-conditions-condition-format-multiple.png)
+
+The following JSON shows a condition that contains two clauses. Because the `and` string is used in this expression, both clauses must evaluate to `true` before an operation is performed on the object.
+
+```json
+{
+"condition": "[[and(endsWith(Name, '.docx'), equals(Tags.Value[readyForLegalHold], 'Yes'))]]"
+}
+```
+
+### Groups of conditions
+
+Grouped clauses operate as a single unit separate from the rest of the clauses. Grouping clauses is similar to putting parentheses around a mathematical equation or logic expression. The `and` or `or` string for the first clause in the group applies to the whole group.
+
+ The following image shows two clauses grouped together.
+
+> [!div class="mx-imgBorder"]
+> ![Diagram that shows the format of a condition that contains two clauses grouped together.](../media/storage-tasks/storage-task-conditions/storage-task-conditions-condition-format-groups.png)
+
+The following condition allows operations only on Microsoft Word documents where the `readyForLegalHold` tag of the document is set to a value of `Yes`. Operations are also performed on objects that are greater than 100 bytes even if the other two conditions aren't true.
+
+```json
+{
+"condition": "[[[or(and(endsWith(Name, '.docx'), equals(Tags.Value[readyForLegalHold], 'Yes')), greater(Content-Length, '100'))]]"
+}
+```
+
+## Code view in the Azure portal
+
+The visual editor, available in the Azure portal, can generate the JSON of a condition for you. You can define your conditions by using the editor, and then obtain the JSON expression by opening the **Code** tab. This approach can be useful when you create complicated sets of conditions, because JSON expressions can become large, unwieldy, and difficult to create by hand. The following image shows the **Code** tab in the visual editor.
+
+> [!div class="mx-imgBorder"]
+> ![Screenshot of the condition JSON as it appears in the Code tab of the visual designer.](../media/storage-tasks/storage-task-conditions/storage-task-conditions-code-tab.png)
+
+To learn more about the visual editor, see [Define storage task conditions and operations](storage-task-conditions-operations-edit.md).
+
+## Supported properties
+
+The following table shows the properties that you can use to compose each clause of a condition. A clause can contain string, boolean, numeric, and date and time properties.
+
+| String | Date and time<sup>3</sup> | Numeric | Boolean |
+|--||-||
+| AccessTier<sup>1</sup> | AccessTierChangeTime | Content-Length | Deleted |
+| Metadata.Value | Creation-Time | TagCount | IsCurrentVersion |
+| Name | DeletedTime | | |
+| BlobType<sup>2</sup> | LastAccessTime | | |
+| Container.Metadata.Value[Name] | Last-Modified | | |
+| Container.Name | | | |
+| Tags.Value[Name] | | | |
+| VersionId | | | |
+
+<sup>1</sup> Allowed values are `Hot`, `Cool`, or `Archive`.
+
+<sup>2</sup> Allowed values are `BlockBlob`, `PageBlob`, or `AppendBlob`.
+
+<sup>3</sup> Can be set to a specific time or to a metadata value dynamically obtained from objects. See [Reference a value from object metadata](storage-task-conditions-operations-edit.md#reference-a-value-from-object-metadata).
+
+## Supported operators
+
+The following table shows the operators that you can use in a clause to evaluate the value of each type of property.
+
+| String | Date and time | Numeric | Boolean |
+|||||
+| contains | equals |equals | equals |
+| empty | greater | greater | not |
+| equals | greaterOrEquals |greaterOrEquals ||
+| endsWith | less | less ||
+| length | lessOrEquals | lessOrEquals ||
+| startsWith | addToTime | ||
+| Matches | | ||
+
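+As a hedged sketch that combines operators from this table with the condition format shown earlier, the following condition targets blobs whose name starts with `logs/` and whose size is at least 1,024 bytes. The prefix and size threshold are illustrative only.
+
+```json
+{
+  "condition": "[[and(startsWith(Name, 'logs/'), greaterOrEquals(Content-Length, '1024'))]]"
+}
+```
+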
+## See also
+
+- [Storage task operations](storage-task-operations.md)
+- [Define conditions and operations](storage-task-conditions-operations-edit.md)
storage-actions Storage Task Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage-actions/storage-tasks/storage-task-create.md
On the **Basics** tab, provide the essential information for your storage task.
|--|--|--|--| | Project details | Subscription | Required | Select the subscription for the new storage task. | | Project details | Resource group | Required | Create a new resource group for this storage task, or select an existing one. For more information, see [Resource groups](../../azure-resource-manager/management/overview.md#resource-groups). |
-| Instance details | Storage task name | Required | Choose a unique name for your storage task. storage task names must be between 3 and 18 characters in length and might contain only lowercase letters and numbers. |
+| Instance details | Storage task name | Required | Choose a unique name for your storage task. Storage task names must be between 3 and 18 characters in length and might contain only lowercase letters and numbers. |
| Instance details | Region | Required | Select the appropriate region for your storage task. For more information, see [Regions and Availability Zones in Azure](../../availability-zones/az-overview.md). | The following image shows an example of the **Basics** tab.
The following table describes the fields on the **Conditions** tab.
| Section | Field | Required or optional | Description | |--|--|--|--| | If | And/Or | Required | An operator that combines two or more predicates to form a logical-AND or logical-OR expression. |
-| If | Blob property | Required | The blob or container property that you like to use in the clause. See [Supported blob properties](storage-task-properties-operators-operations.md#supported-properties-in-a-clause)|
-| If | Operator | Required | The operator that defines how each property in the clause must relate to the corresponding value. See [Supported operators](storage-task-properties-operators-operations.md#supported-operators-in-a-clause)|
+| If | Blob property | Required | The blob or container property that you'd like to use in the clause. See [Supported blob properties](storage-task-conditions.md#supported-properties).|
+| If | Operator | Required | The operator that defines how each property in the clause must relate to the corresponding value. See [Supported operators](storage-task-conditions.md#supported-operators).|
| If| Property value | Required | The value that relates to the corresponding property. |
-| Then | Operations | Required | The action to perform when objects meet the conditions defined in this task. See [Supported operations](storage-task-properties-operators-operations.md#supported-operations)|
+| Then | Operations | Required | The action to perform when objects meet the conditions defined in this task. See [Supported operations](storage-task-operations.md#supported-operations).|
| Then | Parameter | Required | A value used by the operation. | The following image shows an example of the **Conditions** tab.
The following image shows an example of the **Conditions** tab.
> [!div class="mx-imgBorder"] > ![Screenshot of conditions tab of the storage task create experience.](../media/storage-tasks/storage-task-create/storage-task-conditions-tab.png)
-To learn more about supported properties and operators in conditions, see [Storage task conditions and operations](storage-task-properties-operators-operations.md).
- ## Assignments tab An _assignment_ identifies a storage account and a subset of objects in that account that the task will target. An assignment also defines when the task runs and where execution reports are stored.
storage-actions Storage Task Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage-actions/storage-tasks/storage-task-operations.md
+
+ Title: Storage task operations
+
+description: Learn about the elements of a storage task operation.
++++ Last updated : 10/24/2024++++
+# Storage task operations
+
+A storage task contains a set of conditions and operations. An operation is an action that a storage task performs on each object that meets the requirements of each condition. This article describes the JSON format of a storage task operation. Understanding that format is important if you plan to create a storage task by using a tool other than the Azure portal (for example, Azure PowerShell or the Azure CLI). This article also lists the operations, operation parameters, and the allowable values of each parameter.
+
+This article focuses on **operations**. To learn more about **conditions**, see [Storage task conditions](storage-task-conditions.md).
+
+> [!IMPORTANT]
+> Azure Storage Actions is currently in PREVIEW and is available in these [regions](../overview.md#supported-regions).
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+## Operation format
+
+An operation has a name along with zero, one, or multiple parameters. The following image shows how these elements appear for an operation in the JSON template of a storage task.
+
+> [!div class="mx-imgBorder"]
+> ![Diagram that shows the format of an operation.](../media/storage-tasks/storage-task-operations/storage-task-operations-basic-structure.png)
+
+The following table describes each element.
+
+| Element | Description |
+||--|
+| `name` | The name of the operation.<sup>1</sup> |
+| `parameters` | A collection of one or more parameters. Each parameter has a parameter name and a parameter value.<sup>1</sup> |
+| `onSuccess` | The action to take when the operation is successful for an object. `continue` is the only allowable value during the preview. |
+| `onFailure` | The action to take when the operation fails for an object. `break` is the only allowable value during the preview. |
+
+<sup>1</sup> For a complete list of operation names, operation parameters, and parameter values, see the [Supported operations](#supported-operations) section of this article.
+
+The following operation applies a time-based immutability policy to the object.
+
+```json
+{
+ "operations": [
+ {
+ "name": "SetBlobImmutabilityPolicy",
+ "parameters": {
+ "untilDate": "2024-11-15T21:54:22",
+ "mode": "locked"
+ },
+ "onSuccess": "continue",
+ "onFailure": "break"
+ }
+ ]
+}
+```
+
+### Multiple operations
+
+Separate multiple operations by using a comma. The following image shows the position of two operations in a list of operations.
+
+> [!div class="mx-imgBorder"]
+> ![Diagram that shows the format of two operations.](../media/storage-tasks/storage-task-operations/storage-task-operations-mulitple-operations.png)
+
+The following JSON shows two operations separated by a comma.
+
+```json
+"operations": [
+ {
+ "name": "SetBlobImmutabilityPolicy",
+ "parameters": {
+ "untilDate": "2024-11-15T21:54:22",
+ "mode": "locked"
+ },
+ "onSuccess": "continue",
+ "onFailure": "break"
+ },
+ {
+ "name": "SetBlobTags",
+ "parameters": {
+ "ImmutabilityUpdatedBy": "contosoStorageTask"
+ },
+ "onSuccess": "continue",
+ "onFailure": "break"
+ }
+]
+```
+
+## Supported operations
+
+The following table shows the supported operations, parameters, and parameter values:
+
+| Operation | Parameters | Values |
+||-||
+| SetBlobTier | tier | Hot \| Cold \| Archive |
+| SetBlobExpiry | expiryTime, expiryOption |(expiryTime): Number of milliseconds<br>(expiryOption): Absolute \| NeverExpire \| RelativeToCreation \| RelativeToNow |
+| DeleteBlob | None | None |
+| UndeleteBlob | None | None |
+| SetBlobTags | Tag name<sup>1</sup> | Tag value |
+| SetBlobImmutabilityPolicy | untilDate, mode | (untilDate): DateTime of when policy ends<br><br>(mode): locked \| unlocked |
+| SetBlobLegalHold | legalHold | true \| false |
+
+<sup>1</sup> The name of this parameter is the name of the tag.
+
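+As a hedged example that applies this table to the operation format shown earlier, the following operation moves matching blobs to the archive tier. The `onSuccess` and `onFailure` values are the only ones allowed during the preview.
+
+```json
+{
+  "operations": [
+    {
+      "name": "SetBlobTier",
+      "parameters": {
+        "tier": "Archive"
+      },
+      "onSuccess": "continue",
+      "onFailure": "break"
+    }
+  ]
+}
+```
+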
+## See also
+
+- [Storage task conditions](storage-task-conditions.md)
+- [Define conditions and operations](storage-task-conditions-operations-edit.md)
storage-actions Storage Task Properties Operators Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage-actions/storage-tasks/storage-task-properties-operators-operations.md
- Title: Properties, operators, and operations in storage task conditions-
-description: Learn about the elements of conditions and operations in a storage task.
------ Previously updated : 01/17/2024----
-# Properties, operators, and operations in storage task conditions
-
-This article describes the properties, operators, and operations that you can use to compose each storage task condition. To learn how to define conditions and operations, see [Define storage task conditions and operations](storage-task-conditions-operations-edit.md).
-
-> [!IMPORTANT]
-> Azure Storage Actions is currently in PREVIEW and is available these [regions](../overview.md#supported-regions).
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-
-## Supported properties in a clause
-
-The following table shows the properties that you can use to compose each clause of a condition. A clause can contain string, boolean, numeric, as well as date and time properties.
-
-| String | Date and time<sup>3</sup> | Numeric | Boolean |
-|--||-||
-| AccessTier<sup>1</sup> | AccessTierChangeTime | Content-Length | Deleted |
-| Metadata.Value | Creation-Time | TagCount | IsCurrentVersion |
-| Name | DeletedTime | | |
-| BlobType<sup>2</sup> | LastAccessTime | | |
-| Container.Metadata.Value[Name] | Last-Modified | | |
-| Container.Name | | | |
-| Container.Metadata.Value[Name] | | | |
-| Container.Name | | | |
-| Tags.Value[Name] | | | |
-| VersionId | | | |
-
-<sup>1</sup> Allowed values are `Hot`, `Cool`, or `Archive`.
-
-<sup>2</sup> Allowed values are `BlockBlob`, `PageBlob`, or `AppendBlob`
-
-<sup>3</sup> Can be set to a specific time or to a metadata value dynamically obtained from objects. See [Reference a value from object metadata](storage-task-conditions-operations-edit.md#reference-a-value-from-object-metadata).
-
-### Supported operators in a clause
-
-The following table shows the operators that you can use in a clause to evaluate the value of each type of property.
-
-| String | Date and time | Numeric | Boolean |
-|||||
-| contains | equals |equals | equals |
-| empty | greater | greater | not |
-| equals | greaterOrEquals |greaterOrEquals ||
-| endsWith | less | less ||
-| length | lessOrEquals | lessOrEquals ||
-| startsWith | addToTime | ||
-| Matches | | ||
-
-## Supported operations
-
-The following table shows the supported operations, parameters, and parameter values:
-
-| Operation | Parameters | Values |
-||-||
-| Set blob tier | Tier | Hot \| Cold \| Archive |
-| Set blob expiry | None | Absolute \| Never expire \| Relative to creation time \| Relative to current time |
-| Delete blob | None | None |
-| Undelete blob | None | None |
-| Set blob tags | TagSet | A fixed collection of up to 10 key-value pairs |
-| Set blob immutability policy | DateTime, string | DateTime of when policy ends, Locked \| Unlocked |
-| Set blob legal hold | Bool | True \| False |
-
-## See also
--- [Define conditions and operations](storage-task-conditions-operations-edit.md)
storage Assign Azure Role Data Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/assign-azure-role-data-access.md
The first response returns the security principal, and the second returns the se
UserPrincipalName : markpdaniels@contoso.com ObjectType : User DisplayName : Mark P. Daniels
-Id : ab12cd34-ef56-ab12-cd34-ef56ab12cd34
+Id : aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb
Type :
-ab12cd34-ef56-ab12-cd34-ef56ab12cd34
+aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb
``` The `-RoleDefinitionName` parameter value is the name of the RBAC role that needs to be assigned to the principal. To access blob data in the Azure portal with Microsoft Entra credentials, a user must have the following role assignments:
The following example assigns the **Storage Blob Data Reader** role to a user by
<!-- replaycheck-task id="3361d580" --> ```powershell
-New-AzRoleAssignment -ObjectID "ab12cd34-ef56-ab12-cd34-ef56ab12cd34" `
+New-AzRoleAssignment -ObjectID "aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb" `
-RoleDefinitionName "Storage Blob Data Reader" ` -Scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.Storage/storageAccounts/<storage-account-name>" ```
The following example assigns the **Storage Blob Data Reader** role to a user by
```azurecli-interactive az role assignment create \ --role "Storage Blob Data Reader" \
- --assignee-object-id "ab12cd34-ef56-ab12-cd34-ef56ab12cd34" \
+ --assignee-object-id "aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb" \
--assignee-principal-type "User" \ --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.Storage/storageAccounts/<storage-account-name>" ```
storage Data Lake Storage Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-access-control.md
A GUID is shown if the entry represents a user and that user doesn't exist in Mi
When you define ACLs for service principals, it's important to use the Object ID (OID) of the *service principal* for the app registration that you created. It's important to note that registered apps have a separate service principal in the specific Microsoft Entra tenant. Registered apps have an OID that's visible in the Azure portal, but the *service principal* has another (different) OID.
-To get the OID for the service principal that corresponds to an app registration, you can use the `az ad sp show` command. Specify the Application ID as the parameter. Here's an example of obtaining the OID for the service principal that corresponds to an app registration with App ID = 00001111-aaaa-2222-bbbb-3333cccc4444. Run the following command in the Azure CLI:
+To get the OID for the service principal that corresponds to an app registration, you can use the `az ad sp show` command. Specify the Application ID as the parameter. Here's an example of obtaining the OID for the service principal that corresponds to an app registration with App ID = ffffffff-eeee-dddd-cccc-bbbbbbbbbbb0. Run the following command in the Azure CLI:
```azurecli az ad sp show --id 18218b12-1895-43e9-ad80-6e8fc1ea88ce --query objectId
storage Lifecycle Management Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/lifecycle-management-overview.md
description: Use Azure Blob Storage lifecycle management policies to create auto
Previously updated : 09/30/2024 Last updated : 10/25/2024
When you configure or edit a lifecycle policy, it can take up to 24 hours for ch
If you disable a policy, then no new policy runs will be scheduled, but if a run is already in progress, that run will continue until it completes and you're billed for any actions that are required to complete the run. See [Regional availability and pricing](#regional-availability-and-pricing). ### Lifecycle policy completed event
+The `LifecyclePolicyCompleted` event is generated when the actions defined by a lifecycle management policy are performed. A summary section appears for each action that is included in the policy definition. The following JSON shows an example `LifecyclePolicyCompleted` event for a policy. Because the policy definition includes the `delete`, `tierToCool`, `tierToCold`, and `tierToArchive` actions, a summary section appears for each one.
-The `LifecyclePolicyCompleted` event is generated when the actions defined by a lifecycle management policy are performed. The following json shows an example `LifecyclePolicyCompleted` event.
```json {
The `LifecyclePolicyCompleted` event is generated when the actions defined by a
"successCount": 0, "errorList": "" },
+ "tierToColdSummary": {
+ "totalObjectsCount": 0,
+ "successCount": 0,
+ "errorList": ""
+ },
"tierToArchiveSummary": { "totalObjectsCount": 0, "successCount": 0,
The following table describes the schema of the `LifecyclePolicyCompleted` event
|scheduleTime|string|The time that the lifecycle policy was scheduled|
|deleteSummary|vector\<byte\>|The results summary of blobs scheduled for delete operation|
|tierToCoolSummary|vector\<byte\>|The results summary of blobs scheduled for tier-to-cool operation|
+|tierToColdSummary|vector\<byte\>|The results summary of blobs scheduled for tier-to-cold operation|
|tierToArchiveSummary|vector\<byte\>|The results summary of blobs scheduled for tier-to-archive operation|

## Examples of lifecycle policies
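A policy that produces all four of the summary sections shown above defines each corresponding action. A minimal sketch of applying such a policy with the Azure CLI, assuming a hypothetical `policy.json` with placeholder day thresholds:

```azurecli
# Apply a lifecycle policy (stored in policy.json) that tiers block blobs to
# cool, cold, and archive, and finally deletes them, based on last modification.
# policy.json (hypothetical thresholds):
# {
#   "rules": [{
#     "enabled": true,
#     "name": "age-based-tiering",
#     "type": "Lifecycle",
#     "definition": {
#       "actions": {
#         "baseBlob": {
#           "tierToCool":    { "daysAfterModificationGreaterThan": 30 },
#           "tierToCold":    { "daysAfterModificationGreaterThan": 90 },
#           "tierToArchive": { "daysAfterModificationGreaterThan": 180 },
#           "delete":        { "daysAfterModificationGreaterThan": 365 }
#         }
#       },
#       "filters": { "blobTypes": [ "blockBlob" ] }
#     }
#   }]
# }
az storage account management-policy create \
    --account-name "<storage-account-name>" \
    --resource-group "<resource-group-name>" \
    --policy @policy.json
```

Each action defined in the policy corresponds to one of the summary sections in the `LifecyclePolicyCompleted` event.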
storage Storage Auth Abac Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-auth-abac-attributes.md
The following table summarizes the available attributes by source:
> | **Attribute source** | [Environment](../../role-based-access-control/conditions-format.md#environment-attributes) | > | **Attribute type** | [String](../../role-based-access-control/conditions-format.md#string-comparison-operators) | > | **Applies to** | For copy operations using the following REST operations, this attribute only applies to the destination storage account, and not the source:<br><br>[Copy Blob](/rest/api/storageservices/copy-blob)<br>[Copy Blob From URL](/rest/api/storageservices/copy-blob-from-url)<br>[Put Blob From URL](/rest/api/storageservices/put-blob-from-url)<br>[Put Block From URL](/rest/api/storageservices/put-block-from-url)<br>[Append Block From URL](/rest/api/storageservices/append-block-from-url)<br>[Put Page From URL](/rest/api/storageservices/put-page-from-url)<br><br>For all other read, write, create, delete, and rename operations, it applies to the storage account that is the target of the operation |
-> | **Examples** | `@Environment[Microsoft.Network/privateEndpoints] StringEqualsIgnoreCase '/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/example-group/providers/Microsoft.Network/privateEndpoints/privateendpoint1'`<br/>[Example: Allow read access to a container only from a specific private endpoint](storage-auth-abac-examples.md#example-allow-access-to-a-container-only-from-a-specific-private-endpoint) |
+> | **Examples** | `@Environment[Microsoft.Network/privateEndpoints] StringEqualsIgnoreCase '/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/example-group/providers/Microsoft.Network/privateEndpoints/privateendpoint1'`<br/>[Example: Allow read access to a container only from a specific private endpoint](storage-auth-abac-examples.md#example-allow-access-to-a-container-only-from-a-specific-private-endpoint) |
> | **Learn more** | [Use private endpoints for Azure Storage](../common/storage-private-endpoints.md) |

### Snapshot
The following table summarizes the available attributes by source:
> | **Attribute source** | [Environment](../../role-based-access-control/conditions-format.md#environment-attributes) | > | **Attribute type** | [String](../../role-based-access-control/conditions-format.md#string-comparison-operators) | > | **Applies to** | For copy operations using the following REST operations, this attribute only applies to the destination storage account, and not the source:<br><br>[Copy Blob](/rest/api/storageservices/copy-blob)<br>[Copy Blob From URL](/rest/api/storageservices/copy-blob-from-url)<br>[Put Blob From URL](/rest/api/storageservices/put-blob-from-url)<br>[Put Block From URL](/rest/api/storageservices/put-block-from-url)<br>[Append Block From URL](/rest/api/storageservices/append-block-from-url)<br>[Put Page From URL](/rest/api/storageservices/put-page-from-url)<br><br>For all other read, write, create, delete, and rename operations, it applies to the storage account that is the target of the operation |
-> | **Examples** | `@Environment[Microsoft.Network/virtualNetworks/subnets] StringEqualsIgnoreCase '/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/example-group/providers/Microsoft.Network/virtualNetworks/virtualnetwork1/subnets/default'`<br/>[Example: Allow access to blobs in specific containers from a specific subnet](storage-auth-abac-examples.md#example-allow-access-to-blobs-in-specific-containers-from-a-specific-subnet) |
+> | **Examples** | `@Environment[Microsoft.Network/virtualNetworks/subnets] StringEqualsIgnoreCase '/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/example-group/providers/Microsoft.Network/virtualNetworks/virtualnetwork1/subnets/default'`<br/>[Example: Allow access to blobs in specific containers from a specific subnet](storage-auth-abac-examples.md#example-allow-access-to-blobs-in-specific-containers-from-a-specific-subnet) |
> | **Learn more** | [Subnets](../../virtual-network/concepts-and-best-practices.md) |

### UTC now
storage Understanding Billing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/understanding-billing.md
Currently, these SKUs are generally available in a limited subset of regions:
- Australia Southeast
- East Asia
- Southeast Asia
+- West US 2
+- West Central US
### Provisioned v2 provisioning detail

When you create a provisioned v2 file share, you specify the provisioned capacity for the file share in terms of storage, IOPS, and throughput. File shares are limited based on the following attributes:
storage Queues Auth Abac Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/queues-auth-abac-attributes.md
The following table summarizes the available attributes by source:
> | **Attribute** | `Microsoft.Network/privateEndpoints` | > | **Attribute source** | [Environment](../../role-based-access-control/conditions-format.md#environment-attributes) | > | **Attribute type** | [String](../../role-based-access-control/conditions-format.md#string-comparison-operators) |
-> | **Examples** | `@Environment[Microsoft.Network/privateEndpoints] StringEqualsIgnoreCase '/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/example-group/providers/Microsoft.Network/privateEndpoints/privateendpoint1'`<br/>[Example: Allow read access to a container only from a specific private endpoint](../blobs/storage-auth-abac-examples.md#example-allow-access-to-a-container-only-from-a-specific-private-endpoint) |
+> | **Examples** | `@Environment[Microsoft.Network/privateEndpoints] StringEqualsIgnoreCase '/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/example-group/providers/Microsoft.Network/privateEndpoints/privateendpoint1'`<br/>[Example: Allow read access to a container only from a specific private endpoint](../blobs/storage-auth-abac-examples.md#example-allow-access-to-a-container-only-from-a-specific-private-endpoint) |
> | **Learn more** | [Use private endpoints for Azure Storage](../common/storage-private-endpoints.md) |

### Queue name
The following table summarizes the available attributes by source:
> | **Attribute** | `Microsoft.Network/virtualNetworks/subnets` | > | **Attribute source** | [Environment](../../role-based-access-control/conditions-format.md#environment-attributes) | > | **Attribute type** | [String](../../role-based-access-control/conditions-format.md#string-comparison-operators) |
-> | **Examples** | `@Environment[Microsoft.Network/virtualNetworks/subnets] StringEqualsIgnoreCase '/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/example-group/providers/Microsoft.Network/virtualNetworks/virtualnetwork1/subnets/default'`<br/>[Example: Allow access to blobs in specific containers from a specific subnet](../blobs/storage-auth-abac-examples.md#example-allow-access-to-blobs-in-specific-containers-from-a-specific-subnet) |
+> | **Examples** | `@Environment[Microsoft.Network/virtualNetworks/subnets] StringEqualsIgnoreCase '/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/example-group/providers/Microsoft.Network/virtualNetworks/virtualnetwork1/subnets/default'`<br/>[Example: Allow access to blobs in specific containers from a specific subnet](../blobs/storage-auth-abac-examples.md#example-allow-access-to-blobs-in-specific-containers-from-a-specific-subnet) |
> | **Learn more** | [Subnets](../../virtual-network/concepts-and-best-practices.md) |

### UTC now
The following table summarizes the available attributes by source:
- [Example Azure role assignment conditions](../blobs/storage-auth-abac-examples.md)
- [Azure role assignment condition format and syntax](../../role-based-access-control/conditions-format.md)
-- [Troubleshoot Azure role assignment conditions](../../role-based-access-control/conditions-troubleshoot.md)
+- [Troubleshoot Azure role assignment conditions](../../role-based-access-control/conditions-troubleshoot.md)
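The environment attributes illustrated above take effect only when they're referenced from a role assignment condition. A minimal sketch of attaching such a condition with the Azure CLI; the role, principal, scope, and condition text are illustrative placeholders based on the private endpoint example:

```azurecli
# Condition: allow the queue message read data action only when the request
# arrives over the specified private endpoint (all IDs are placeholders).
condition="((!(ActionMatches{'Microsoft.Storage/storageAccounts/queueServices/queues/messages/read'})) OR (@Environment[Microsoft.Network/privateEndpoints] StringEqualsIgnoreCase '/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/example-group/providers/Microsoft.Network/privateEndpoints/privateendpoint1'))"

az role assignment create \
    --role "Storage Queue Data Reader" \
    --assignee-object-id "<principal-object-id>" \
    --assignee-principal-type "User" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.Storage/storageAccounts/<storage-account-name>" \
    --condition "$condition" \
    --condition-version "2.0"
```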
stream-analytics Event Ordering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/event-ordering.md
These messages are shown to inform you that events have arrived late and are eit
An example of this message is: <br> <code> {"message Time":"2019-02-04 17:11:52Z","error":null,
-"message":"First Occurred: 02/04/2019 17:11:48 | Resource Name: ASAjob | Message: Source 'ASAjob' had 24 data errors of kind 'LateInputEvent' between processing times '2019-02-04T17:10:49.7250696Z' and '2019-02-04T17:11:48.7563961Z'. Input event with application timestamp '2019-02-04T17:05:51.6050000' and arrival time '2019-02-04T17:10:44.3090000' was sent later than configured tolerance.","type":"DiagnosticMessage","correlation ID":"49efa148-4asd-4fe0-869d-a40ba4d7ef3b"}
+"message":"First Occurred: 02/04/2019 17:11:48 | Resource Name: ASAjob | Message: Source 'ASAjob' had 24 data errors of kind 'LateInputEvent' between processing times '2019-02-04T17:10:49.7250696Z' and '2019-02-04T17:11:48.7563961Z'. Input event with application timestamp '2019-02-04T17:05:51.6050000' and arrival time '2019-02-04T17:10:44.3090000' was sent later than configured tolerance.","type":"DiagnosticMessage","correlation ID":"aaaa0000-bb11-2222-33cc-444444dddddd"}
</code>

## I see InputPartitionNotProgressing in my activity log
synapse-analytics How To Move Workspace From One Region To Another https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/how-to-move-workspace-from-one-region-to-another.md
The following PowerShell script adds the Synapse Administrator role assignment t
New-AzSynapseRoleAssignment `
    -WorkspaceName $workspaceName `
    -RoleDefinitionName "Synapse Administrator" `
- -ObjectId 1c02d2a6-ed3d-46ec-b578-6f36da5819c6
+ -ObjectId aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb
Get-AzSynapseRoleAssignment -WorkspaceName $workspaceName
```
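If you prefer the Azure CLI, the same assignment can be made as follows; a minimal sketch that reuses the placeholder object ID and assumes a placeholder workspace name:

```azurecli
# Assign the Synapse Administrator role in the target workspace and list the result.
# The workspace name and object ID are placeholders.
az synapse role assignment create \
    --workspace-name "<workspace-name>" \
    --role "Synapse Administrator" \
    --assignee "aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb"

az synapse role assignment list --workspace-name "<workspace-name>" --output table
```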
synapse-analytics Quickstart Scale Compute Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/quickstart-scale-compute-powershell.md
HighAvailabilityReplicaCount :
CurrentBackupStorageRedundancy : Geo RequestedBackupStorageRedundancy : Geo SecondaryType :
-MaintenanceConfigurationId : /subscriptions/d8392f63-xxxx-xxxx-xxxx-xxxxxxxxxxxx/providers/Microsoft.Maintenance/publicMaintenanceConfigurations/SQL_Default
+MaintenanceConfigurationId : /subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/providers/Microsoft.Maintenance/publicMaintenanceConfigurations/SQL_Default
EnableLedger : False PreferredEnclaveType : PausedDate :
You have now learned how to scale compute for dedicated SQL pool (formerly SQL D
> [Load data into a dedicated SQL pool](load-data-from-azure-blob-storage-using-copy.md)
- To get started with Azure Synapse Analytics, see [Get Started with Azure Synapse Analytics](../get-started.md).
-- To learn more about dedicated SQL pools in Azure Synapse Analytics, see [What is dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics?](sql-data-warehouse-overview-what-is.md)
+- To learn more about dedicated SQL pools in Azure Synapse Analytics, see [What is dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics?](sql-data-warehouse-overview-what-is.md)
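As an alternative to the PowerShell approach in this quickstart, the scale operation can also be performed with the Azure CLI; a minimal sketch with placeholder names and an illustrative DW300c target:

```azurecli
# Scale the dedicated SQL pool (formerly SQL DW) to a different service objective.
# The pool, resource group, and server names, and the DW300c target, are placeholders.
az sql dw update \
    --name "<sql-pool-name>" \
    --resource-group "<resource-group-name>" \
    --server "<server-name>" \
    --service-objective "DW300c"
```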
synapse-analytics Synapse Service Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-service-identity.md
PATCH https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups
}, "identity": { "type": "SystemAssigned",
- "principalId": "765ad4ab-XXXX-XXXX-XXXX-51ed985819dc",
- "tenantId": "72f988bf-XXXX-XXXX-XXXX-2d7cd011db47"
+ "principalId": "aaaaaaaa-bbbb-cccc-1111-222222222222",
+ "tenantId": "aaaabbbb-0000-cccc-1111-dddd2222eeee"
}, "id": "/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.Synapse/workspaces/<workspaceName>", "type": "Microsoft.Synapse/workspaces",
PS C:\> (Get-AzSynapseWorkspace -ResourceGroupName <resourceGroupName> -Name <wo
IdentityType PrincipalId TenantId -- --
-SystemAssigned cadadb30-XXXX-XXXX-XXXX-ef3500e2ff05 72f988bf-XXXX-XXXX-XXXX-2d7cd011db47
+SystemAssigned aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb aaaabbbb-0000-cccc-1111-dddd2222eeee
```
You can get the application ID by copying the principal ID above, and then running the following Microsoft Entra ID command with the principal ID as a parameter.
```powershell
-PS C:\> Get-AzADServicePrincipal -ObjectId cadadb30-XXXX-XXXX-XXXX-ef3500e2ff05
+PS C:\> Get-AzADServicePrincipal -ObjectId aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb
-ServicePrincipalNames : {76f668b3-XXXX-XXXX-XXXX-1b3348c75e02, https://identity.azure.net/P86P8g6nt1QxfPJx22om8MOooMf/Ag0Qf/nnREppHkU=}
-ApplicationId : 76f668b3-XXXX-XXXX-XXXX-1b3348c75e02
+ServicePrincipalNames : {00001111-aaaa-2222-bbbb-3333cccc4444, https://identity.azure.net/P86P8g6nt1QxfPJx22om8MOooMf/Ag0Qf/nnREppHkU=}
+ApplicationId : 00001111-aaaa-2222-bbbb-3333cccc4444
DisplayName : <workspaceName>
-Id : cadadb30-XXXX-XXXX-XXXX-ef3500e2ff05
+Id : aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb
Type : ServicePrincipal ```
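The same identity values can be retrieved with the Azure CLI; a minimal sketch using the placeholder workspace name and principal ID from above:

```azurecli
# Show the workspace's system-assigned managed identity (principal ID and tenant ID).
az synapse workspace show \
    --name "<workspaceName>" \
    --resource-group "<resourceGroupName>" \
    --query identity

# Resolve the application ID of the service principal behind that principal ID.
az ad sp show --id "aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb" --query appId
```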
GET https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{
"name": "{workspaceName}", "identity": { "type": "SystemAssigned",
- "tenantId": "72f988bf-XXXX-XXXX-XXXX-2d7cd011db47",
- "principalId": "cadadb30-XXXX-XXXX-XXXX-ef3500e2ff05"
+ "tenantId": "aaaabbbb-0000-cccc-1111-dddd2222eeee",
+ "principalId": "aaaaaaaa-bbbb-cccc-1111-222222222222"
}, "tags": {} }