Updates from: 08/29/2022 01:05:52
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Partner Dynamics 365 Fraud Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-dynamics-365-fraud-protection.md
Previously updated : 5/12/2021 Last updated : 08/28/2022
# Tutorial: Configure Microsoft Dynamics 365 Fraud Protection with Azure Active Directory B2C
-In this sample tutorial, learn how to integrate [Microsoft Dynamics 365 Fraud Protection](/dynamics365/fraud-protection/overview) (DFP) with Azure Active Directory (AD) B2C.
+In this sample tutorial, learn how to integrate [Microsoft Dynamics 365 Fraud Protection](/dynamics365/fraud-protection) (DFP) with Azure Active Directory (AD) B2C.
Microsoft DFP provides organizations with the capability to assess the risk of attempts to create fraudulent accounts and log-ins. Microsoft DFP assessment can be used by the customer to block or challenge suspicious attempts to create new fake accounts or to compromise existing accounts.
active-directory On Premises Application Provisioning Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/on-premises-application-provisioning-architecture.md
Previously updated : 04/11/2022 Last updated : 08/26/2022
There are three primary components to provisioning users into an on-premises app
> Microsoft Identity Manager Synchronization isn't required. But you can use it to build and test your ECMA connector before you import it into the ECMA host.
+> [!VIDEO https://www.youtube.com/embed/QdfdpaFolys]
+ ### Firewall requirements
You don't need to open inbound connections to the corporate network. The provisioning agents only use outbound connections to the provisioning service, which means there's no need to open firewall ports for incoming connections. You also don't need a perimeter (DMZ) network because all connections are outbound and take place over a secure channel.
When we think of traditional DNs in a traditional format, for, say, Active Directory
`CN=Lola Jacobson,CN=Users,DC=contoso,DC=com`
-However, for a data source such as SQL, which is flat, not hierarchical, the DN needs to be either already present in one of the table or created from the information we provide to the ECMA Connector Host.
+However, for a data source such as SQL, which is flat, not hierarchical, the DN needs to be either already present in one of the tables or created from the information we provide to the ECMA Connector Host.
-This can be achieved by checking **Autogenerated** in the checkbox when configuring the genericSQL connector. When you choose DN to be autogenerated, the ECMA host will generate a DN in an LDAP format: CN=<anchorvalue>,OBJECT=<type>. This also assumes that DN is Anchor is **unchecked** in the Connectivity page.
+This can be achieved by selecting the **Autogenerated** checkbox when configuring the genericSQL connector. When you choose an autogenerated DN, the ECMA host generates a DN in an LDAP format: CN=<anchorvalue>,OBJECT=<type>. This also assumes that **DN is Anchor** is left unchecked on the Connectivity page.
[![DN is Anchor unchecked](.\media\on-premises-application-provisioning-architecture\user-2.png)](.\media\on-premises-application-provisioning-architecture\user-2.png#lightbox)
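The autogenerated DN format above can be sketched in Python (a minimal illustration; `make_dn` is a hypothetical helper, not part of the ECMA host):

```python
def make_dn(anchor_value: str, object_type: str = "USER") -> str:
    """Build an LDAP-style DN in the autogenerated format CN=<anchorvalue>,OBJECT=<type>."""
    # Escape commas in the anchor value so the resulting DN stays parseable (RFC 4514 style).
    escaped = anchor_value.replace(",", r"\,")
    return f"CN={escaped},OBJECT={object_type}"

print(make_dn("jdoe@contoso.com"))  # CN=jdoe@contoso.com,OBJECT=USER
```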
Since ECMA Connector Host currently only supports the USER object type, the OBJE
You can define one or more matching attributes and prioritize them by precedence. If you want to change the matching attributes, you can also do so. [![Matching attribute](.\media\on-premises-application-provisioning-architecture\match-1.png)](.\media\on-premises-application-provisioning-architecture\match-1.png#lightbox)
-2. ECMA Connector Host receives the GET request and queries its internal cache to see if the user exists and has based imported. This is done using the matching attribute(s) above. If you define multiple matching attributes, the Azure AD provisioning service will send a GET request for each attribute and the ECMA host will check it's cache for a match until it finds one.
+2. ECMA Connector Host receives the GET request and queries its internal cache to see if the user exists and has been imported. This is done using the matching attributes above. If you define multiple matching attributes, the Azure AD provisioning service will send a GET request for each attribute and the ECMA host will check its cache for a match until it finds one.
3. If the user does not exist, Azure AD will make a POST request to create the user. The ECMA Connector Host will respond back to Azure AD with the HTTP 201 and provide an ID for the user. This ID is derived from the anchor value defined in the object types page. This anchor will be used by Azure AD to query the ECMA Connector Host for future and subsequent requests.
4. If a change happens to the user in Azure AD, then Azure AD will make a GET request to retrieve the user using the anchor from the previous step, rather than the matching attribute in step 1. This allows, for example, the UPN to change without breaking the link between the user in Azure AD and in the app.
## Agent best practices
-- Using the same agent for the on-prem provisioning feature along with Workday / SuccessFactors / Azure AD Connect Cloud Sync is currently unsupported. We are actively working to support on-prem provisioning on the same agent as the other provisioning scenarios.
+- Using the same agent for the on-premises provisioning feature along with Workday / SuccessFactors / Azure AD Connect Cloud Sync is currently unsupported. We are actively working to support on-premises provisioning on the same agent as the other provisioning scenarios.
- The agent must communicate with both Azure and your application, so the placement of the agent affects the latency of those two connections. You can minimize the latency of the end-to-end traffic by optimizing each network connection. Each connection can be optimized by:
  - Reducing the distance between the two ends of the hop.
You can also check whether all the required ports are open.
- Microsoft Azure AD Connect Provisioning Agent Package
## Provisioning agent history
-This article lists the versions and features of Azure Active Directory Connect Provisioning Agent that have been released. The Azure AD team regularly updates the Provisioning Agent with new features and functionality. Please ensure that you do not use the same agent for on-prem provisioning and Cloud Sync / HR-driven provisioning.
+This article lists the versions and features of Azure Active Directory Connect Provisioning Agent that have been released. The Azure AD team regularly updates the Provisioning Agent with new features and functionality. Please ensure that you do not use the same agent for on-premises provisioning and Cloud Sync / HR-driven provisioning.
Microsoft provides direct support for the latest agent version and one version before.
active-directory Access Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/access-tokens.md
The `alg` claim indicates the algorithm that was used to sign the token, while t
At any given point in time, Azure AD may sign an ID token using any one of a certain set of public-private key pairs. Azure AD rotates the possible set of keys on a periodic basis, so the application should be written to handle those key changes automatically. A reasonable frequency to check for updates to the public keys used by Azure AD is every 24 hours.
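The 24-hour refresh guidance above can be sketched as a small time-based key cache (an illustrative sketch; `SigningKeyCache` and `fetch_keys` are hypothetical, with `fetch_keys` standing in for an HTTP GET of the `jwks_uri` from the metadata document):

```python
import time

class SigningKeyCache:
    """Caches signing key data and refreshes it at most once per TTL (24 hours by default)."""

    def __init__(self, fetch_keys, ttl_seconds: int = 24 * 60 * 60):
        self._fetch_keys = fetch_keys   # callable returning the current key set
        self._ttl = ttl_seconds
        self._keys = None
        self._fetched_at = 0.0

    def get_keys(self):
        now = time.monotonic()
        if self._keys is None or now - self._fetched_at >= self._ttl:
            # Refresh from the provider, e.g. by requesting the jwks_uri.
            self._keys = self._fetch_keys()
            self._fetched_at = now
        return self._keys
```

Validation code would look keys up in the cache instead of requesting the metadata document on every token, which keeps key rotation automatic without hammering the endpoint.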
-Acquire the signing key data necessary to validate the signature by using the [OpenID Connect metadata document](v2-protocols-oidc.md#fetch-the-openid-connect-metadata-document) located at:
+Acquire the signing key data necessary to validate the signature by using the [OpenID Connect metadata document](v2-protocols-oidc.md#fetch-the-openid-configuration-document) located at:
```
https://login.microsoftonline.com/common/v2.0/.well-known/openid-configuration
```
active-directory Active Directory Claims Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-claims-mapping.md
else
```
#### Validate token signing key
-Apps that have claims mapping enabled must validate their token signing keys by appending `appid={client_id}` to their [OpenID Connect metadata requests](v2-protocols-oidc.md#fetch-the-openid-connect-metadata-document). Below is the format of the OpenID Connect metadata document you should use:
+Apps that have claims mapping enabled must validate their token signing keys by appending `appid={client_id}` to their [OpenID Connect metadata requests](v2-protocols-oidc.md#fetch-the-openid-configuration-document). Below is the format of the OpenID Connect metadata document you should use:
```
https://login.microsoftonline.com/{tenant}/v2.0/.well-known/openid-configuration?appid={client-id}
```
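Building that metadata URL can be sketched as follows (a minimal illustration; `metadata_url` is a hypothetical helper, and the tenant and client ID values are placeholders):

```python
from urllib.parse import urlencode

def metadata_url(tenant: str, client_id: str) -> str:
    """Return the OpenID Connect metadata URL with the appid query parameter appended."""
    base = f"https://login.microsoftonline.com/{tenant}/v2.0/.well-known/openid-configuration"
    return f"{base}?{urlencode({'appid': client_id})}"

print(metadata_url("contoso.onmicrosoft.com", "6731de76-14a6-49ae-97bc-6eba6914391e"))
```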
active-directory Userinfo https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/userinfo.md
Previously updated : 09/21/2020 Last updated : 08/26/2022
# Microsoft identity platform UserInfo endpoint
-The UserInfo endpoint is part of the [OpenID Connect standard](https://openid.net/specs/openid-connect-core-1_0.html#UserInfo) (OIDC), designed to return claims about the user that authenticated. For the Microsoft identity platform, the UserInfo endpoint is hosted on Microsoft Graph (https://graph.microsoft.com/oidc/userinfo).
+Part of the OpenID Connect (OIDC) standard, the [UserInfo endpoint](https://openid.net/specs/openid-connect-core-1_0.html#UserInfo) returns information about an authenticated user. In the Microsoft identity platform, the UserInfo endpoint is hosted by Microsoft Graph at https://graph.microsoft.com/oidc/userinfo.
## Find the .well-known configuration endpoint
-You can programmatically discover the UserInfo endpoint using the OpenID Connect discovery document, at `https://login.microsoftonline.com/common/v2.0/.well-known/openid-configuration`. It's listed in the `userinfo_endpoint` field, and this pattern can be used across clouds to help point to the right endpoint. We do not recommend hard-coding the UserInfo endpoint in your app - use the OIDC discovery document to find this endpoint at runtime instead.
+You can find the UserInfo endpoint programmatically by reading the `userinfo_endpoint` field of the OpenID configuration document at `https://login.microsoftonline.com/common/v2.0/.well-known/openid-configuration`. We don't recommend hard-coding the UserInfo endpoint in your applications. Instead, use the OIDC configuration document to find the endpoint at runtime.
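The runtime discovery described above can be sketched with the standard library (a minimal illustration; the JSON literal is a truncated stand-in for the real configuration document, which an app would fetch over HTTPS):

```python
import json

# In a real app, GET this JSON from
# https://login.microsoftonline.com/common/v2.0/.well-known/openid-configuration
config_json = '{"userinfo_endpoint": "https://graph.microsoft.com/oidc/userinfo"}'

config = json.loads(config_json)
userinfo_endpoint = config["userinfo_endpoint"]  # discovered at runtime, not hard-coded
print(userinfo_endpoint)
```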
-As part of the OpenID Connect specification, the UserInfo endpoint is often automatically called by [OIDC compliant libraries](https://openid.net/developers/certified/) to get information about the user. Without hosting such an endpoint, the Microsoft identity platform would not be standards compliant and some libraries would fail. From the [list of claims identified in the OIDC standard](https://openid.net/specs/openid-connect-core-1_0.html#StandardClaims) we produce the name claims, subject claim, and email when available and consented for.
+The UserInfo endpoint is typically called automatically by [OIDC-compliant libraries](https://openid.net/developers/certified/) to get information about the user. From the [list of claims identified in the OIDC standard](https://openid.net/specs/openid-connect-core-1_0.html#StandardClaims), the Microsoft identity platform produces the name claims, subject claim, and email when available and consented to.
-## Consider: Use an ID Token instead
+## Consider using an ID token instead
-The information available in the ID token that your app can receive is a superset of the information it can get from the UserInfo endpoint. Because you can get an ID token at the same time you get a token to call the UserInfo endpoint, we suggest that you use that ID token to get information about the user instead of calling the UserInfo endpoint. Using the ID token will eliminate one to two network requests from your application launch, reducing latency in your application.
+The information in an ID token is a superset of the information available from the UserInfo endpoint. Because you can get an ID token at the same time you get a token to call the UserInfo endpoint, we suggest getting the user's information from the ID token instead of calling the UserInfo endpoint. Using the ID token instead of calling the UserInfo endpoint eliminates up to two network requests, reducing latency in your application.
-If you require more details about the user, you should call the [Microsoft Graph `/user` API](/graph/api/user-get) to get information like office number or job title. You can also use [optional claims](active-directory-optional-claims.md) to include additional user information in your ID and access tokens.
+If you require more details about the user like manager or job title, call the [Microsoft Graph `/user` API](/graph/api/user-get). You can also use [optional claims](active-directory-optional-claims.md) to include additional user information in your ID and access tokens.
## Calling the UserInfo endpoint
-UserInfo is a standard OAuth Bearer token API, called like any other Microsoft Graph API using the access token received when getting a token for Microsoft Graph. It returns a JSON response containing claims about the user.
+UserInfo is a standard OAuth bearer token API hosted by Microsoft Graph. Call the UserInfo endpoint as you would any Microsoft Graph API by using the access token your application received when it requested access to Microsoft Graph. The UserInfo endpoint returns a JSON response containing claims about the user.
### Permissions
-Use the following [OIDC permissions](v2-permissions-and-consent.md#openid-connect-scopes) to call the UserInfo API. `openid` is required, and the `profile` and `email` scopes ensure that additional information is provided in the response.
+Use the following [OIDC permissions](v2-permissions-and-consent.md#openid-connect-scopes) to call the UserInfo API. The `openid` scope is required, and the `profile` and `email` scopes ensure that additional information is provided in the response.
-|Permission type | Permissions |
-|:--|:|
-|Delegated (work or school account) | openid (required), profile, email |
-|Delegated (personal Microsoft account) | openid (required), profile, email |
-|Application | Not applicable |
+| Permission type | Permissions |
+|:--|:--|
+| Delegated (work or school account) | `openid` (required), `profile`, `email` |
+| Delegated (personal Microsoft account) | `openid` (required), `profile`, `email` |
+| Application | Not applicable |
> [!TIP]
-> Copy this URL in your browser to get a token for the UserInfo endpoint as well as an [ID token](id-tokens.md) and replace the client ID and redirect URI with your own. Note that it only requests scopes for OpenID or Graph scopes, and nothing else. This is required, since you cannot request permissions for two different resources in the same token request.
+> Copy this URL in your browser to get an access token for the UserInfo endpoint and an [ID token](id-tokens.md). Replace the client ID and redirect URI with values from an app registration.
> > `https://login.microsoftonline.com/common/oauth2/v2.0/authorize?client_id=<yourClientID>&response_type=token+id_token&redirect_uri=<YourRedirectUri>&scope=user.read+openid+profile+email&response_mode=fragment&state=12345&nonce=678910` >
-> You can use this access token in the next section.
+> You can use the access token that's returned in the URL fragment in the next section.
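The authorization URL in the tip above can also be assembled programmatically, which avoids manual escaping (a sketch; the client ID and redirect URI placeholders must be replaced with your app registration's values):

```python
from urllib.parse import urlencode

params = {
    "client_id": "<yourClientID>",        # replace with your app registration's client ID
    "response_type": "token id_token",
    "redirect_uri": "<YourRedirectUri>",  # must match a redirect URI registered for the app
    "scope": "user.read openid profile email",
    "response_mode": "fragment",
    "state": "12345",
    "nonce": "678910",
}
# urlencode encodes spaces as '+', matching the URL shown in the tip.
url = "https://login.microsoftonline.com/common/oauth2/v2.0/authorize?" + urlencode(params)
print(url)
```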
-As with any other Microsoft Graph token, the token you receive here may not be a JWT. If you signed in a Microsoft account user, it will be an encrypted token format. This is because Microsoft Graph has a special token issuance pattern. This does not impact your ability to use the access token to call the UserInfo endpoint.
+Microsoft Graph uses a special token issuance pattern that may impact your app's ability to read or validate the token. As with any other Microsoft Graph token, the token you receive here may not be a JWT, and your app should consider it opaque. If you signed in a Microsoft account user, it will be in an encrypted token format. None of these factors, however, impacts your app's ability to use the access token in a request to the UserInfo endpoint.
### Calling the API
-The UserInfo API supports both GET and POST, per the OIDC spec.
+The UserInfo API supports both GET and POST requests.
```http
GET or POST /oidc/userinfo HTTP/1.1
Authorization: Bearer eyJ0eXAiOiJKV1QiLCJub25jZSI6Il…
}
```
-The claims listed here are all of the claims that the UserInfo endpoint can return. These are the same values that the app would see in the [ID token](id-tokens.md) issued to the app.
+The claims shown in the response are all those that the UserInfo endpoint can return. These values are the same values included in an [ID token](id-tokens.md).
## Notes and caveats on the UserInfo endpoint
-* If you want to call this UserInfo endpoint you must use the v2.0 endpoint. If you use the v1.0 endpoint you will get a token for the v1.0 UserInfo endpoint, hosted on login.microsoftonline.com. We recommend that all OIDC compliant apps and libraries use the v2.0 endpoint to ensure compatibility.
-* The response from the UserInfo endpoint cannot be customized. If you'd like to customize claims, please use [claims mapping](active-directory-claims-mapping.md) to edit the information returned in the tokens.
-* The response from the UserInfo endpoint cannot be added to. If you'd like to get additional claims about the user, please use [optional claims](active-directory-optional-claims.md) to add new claims to the tokens.
+You can't add to or customize the information returned by the UserInfo endpoint.
+
+To customize the information returned by the identity platform during authentication and authorization, use [claims mapping](active-directory-claims-mapping.md) and [optional claims](active-directory-optional-claims.md) to modify security token configuration.
## Next Steps
-* [Review the contents of ID tokens](id-tokens.md)
-* [Customize the contents of an ID token using optional claims](active-directory-optional-claims.md)
-* [Request an access token and ID token using the OAuth2 protocol](v2-protocols-oidc.md)
+* [Review the contents of ID tokens](id-tokens.md).
+* [Customize the contents of an ID token using optional claims](active-directory-optional-claims.md).
+* [Request an access token and ID token using the OAuth 2.0 protocol](v2-protocols-oidc.md).
active-directory V2 Protocols Oidc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-protocols-oidc.md
Title: Microsoft identity platform and OpenID Connect protocol
-description: Build web applications by using the Microsoft identity platform implementation of the OpenID Connect authentication protocol.
-
+ Title: OpenID Connect (OIDC) on the Microsoft identity platform
+description: Sign in Azure AD users by using the Microsoft identity platform's implementation of the OpenID Connect extension to OAuth 2.0.
Previously updated : 07/19/2021 Last updated : 08/26/2022
-# Microsoft identity platform and OpenID Connect protocol
+# OpenID Connect on the Microsoft identity platform
+
+OpenID Connect (OIDC) extends the OAuth 2.0 authorization protocol for use as an authentication protocol. You can use OIDC to enable single sign-on (SSO) between your OAuth-enabled applications by using a security token called an *ID token*.
+
+The full specification for OIDC is available on the OpenID Foundation's website at [OpenID Connect Core 1.0 specification](https://openid.net/specs/openid-connect-core-1_0.html).
-OpenID Connect (OIDC) is an authentication protocol built on OAuth 2.0 that you can use to securely sign in a user to an application. When you use the Microsoft identity platform's implementation of OpenID Connect, you can add sign-in and API access to your apps. This article shows how to do this independent of language and describes how to send and receive HTTP messages without using any [Microsoft open-source libraries](reference-v2-libraries.md).
+## Protocol flow: Sign-in
-[OpenID Connect](https://openid.net/specs/openid-connect-core-1_0.html) extends the OAuth 2.0 *authorization* protocol for use as an *authentication* protocol, so that you can do single sign-on using OAuth. OpenID Connect introduces the concept of an *ID token*, which is a security token that allows the client to verify the identity of the user. The ID token also gets basic profile information about the user. It also introduces the [UserInfo endpoint](userinfo.md), an API that returns information about the user.
+This diagram shows the basic OpenID Connect sign-in flow. The steps in the flow are described in more detail in later sections of the article.
+
+![Swim-lane diagram showing the OpenID Connect protocol's sign-in flow.](./media/v2-protocols-oidc/convergence-scenarios-webapp.svg)
[!INCLUDE [try-in-postman-link](includes/try-in-postman-link.md)]
-## Protocol diagram: Sign-in
+## Enable ID tokens
+
+The *ID token* introduced by OpenID Connect is issued by the authorization server (the Microsoft identity platform) when the client application requests one during user authentication. The ID token enables a client application to verify the identity of the user and to get other information (claims) about them.
+
+ID tokens aren't issued by default for an application registered with the Microsoft identity platform. Enable ID tokens for an app by using one of the following methods.
+
+To enable ID tokens for your app, navigate to the [Azure portal](https://portal.azure.com) and then:
+
+1. Select **Azure Active Directory** > **App registrations** > *\<your application\>* > **Authentication**.
+1. Under **Implicit grant and hybrid flows**, select the **ID tokens (used for implicit and hybrid flows)** checkbox.
+
+Or:
+
+1. Select **Azure Active Directory** > **App registrations** > *\<your application\>* > **Manifest**.
+1. Set `oauth2AllowIdTokenImplicitFlow` to `true` in the app registration's [application manifest](reference-app-manifest.md).
+
+If you forget to enable ID tokens for your app and you request one, the Microsoft identity platform returns an `unsupported_response` error similar to:
+
+> *The provided value for the input parameter 'response_type' isn't allowed for this client. Expected value is 'code'*.
-The most basic sign-in flow has the steps shown in the next diagram. Each step is described in detail in this article.
+Requesting an ID token by specifying a `response_type` of `id_token` is explained in [Send the sign-in request](#send-the-sign-in-request) later in the article.
-![OpenID Connect protocol: Sign-in](./media/v2-protocols-oidc/convergence-scenarios-webapp.svg)
+## Fetch the OpenID configuration document
-## Fetch the OpenID Connect metadata document
+OpenID providers like the Microsoft identity platform provide an [OpenID Provider Configuration Document](https://openid.net/specs/openid-connect-discovery-1_0.html) at a publicly accessible endpoint containing the provider's OIDC endpoints, supported claims, and other metadata. Client applications can use the metadata to discover the URLs to use for authentication and the authentication service's public signing keys, among other things.
-OpenID Connect describes a metadata document [(RFC)](https://openid.net/specs/openid-connect-discovery-1_0.html) that contains most of the information required for an app to do sign in. This includes information such as the URLs to use and the location of the service's public signing keys. You can find this document by appending the discovery document path to the authority URL:
+Authentication libraries are the most common consumers of the OpenID configuration document, which they use for discovery of authentication URLs, the provider's public signing keys, and other service metadata. If you use an authentication library in your app (recommended), you likely won't need to hand-code requests to and responses from the OpenID configuration document endpoint.
-Discovery document path: `/.well-known/openid-configuration`
+### Find your app's OpenID configuration document URI
-Authority: `https://login.microsoftonline.com/{tenant}/v2.0`
+Every app registration in Azure AD is provided a publicly accessible endpoint that serves its OpenID configuration document. To determine the URI of the configuration document's endpoint for your app, append the *well-known OpenID configuration* path to your app registration's *authority URL*.
-The `{tenant}` can take one of four values:
+* Well-known configuration document path: `/.well-known/openid-configuration`
+* Authority URL: `https://login.microsoftonline.com/{tenant}/v2.0`
+
+The value of `{tenant}` varies based on the application's sign-in audience as shown in the following table. The authority URL also varies by [cloud instance](authentication-national-cloud.md#azure-ad-authentication-endpoints).
| Value | Description |
| --- | --- |
| `common` | Users with both a personal Microsoft account and a work or school account from Azure AD can sign in to the application. |
| `organizations` | Only users with work or school accounts from Azure AD can sign in to the application. |
| `consumers` | Only users with a personal Microsoft account can sign in to the application. |
-| `8eaef023-2b34-4da1-9baa-8bc8c9d6a490` or `contoso.onmicrosoft.com` | Only users from a specific Azure AD tenant (whether they are members in the directory with a work or school account, or they are guests in the directory with a personal Microsoft account) can sign in to the application. Either the friendly domain name of the Azure AD tenant or the tenant's GUID identifier can be used. You can also use the consumer tenant, `9188040d-6c67-4c5b-b112-36a304b66dad`, in place of the `consumers` tenant. |
+| `8eaef023-2b34-4da1-9baa-8bc8c9d6a490` or `contoso.onmicrosoft.com` | Only users from a specific Azure AD tenant (directory members with a work or school account or directory guests with a personal Microsoft account) can sign in to the application. <br/><br/>The value can be the domain name of the Azure AD tenant or the tenant ID in GUID format. You can also use the consumer tenant GUID, `9188040d-6c67-4c5b-b112-36a304b66dad`, in place of `consumers`. |
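Combining the authority URL and the well-known path above can be sketched as (a minimal illustration; `config_url` is a hypothetical helper, and the tenant value can be any entry from the table):

```python
WELL_KNOWN_PATH = "/.well-known/openid-configuration"

def config_url(tenant: str) -> str:
    """Return the OpenID configuration document URI for a tenant value such as
    'common', 'organizations', 'consumers', a tenant GUID, or a domain name."""
    return f"https://login.microsoftonline.com/{tenant}/v2.0{WELL_KNOWN_PATH}"

print(config_url("common"))
# https://login.microsoftonline.com/common/v2.0/.well-known/openid-configuration
```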
-The authority differs across national clouds - e.g. `https://login.microsoftonline.de` for the Azure AD Germany instance. If you do not use the public cloud, please review the [national cloud endpoints](authentication-national-cloud.md#azure-ad-authentication-endpoints) to find the appropriate one for you. Ensure that the tenant and `/v2.0/` are present in your request so you can use the v2.0 version of the endpoint.
+You can also find your app's OpenID configuration document URI in its app registration in the Azure portal.
-> [!TIP]
-> Try it! Click [https://login.microsoftonline.com/common/v2.0/.well-known/openid-configuration](https://login.microsoftonline.com/common/v2.0/.well-known/openid-configuration) to see the `common` configuration.
+To find the OIDC configuration document for your app, navigate to the [Azure portal](https://portal.azure.com) and then:
+
+1. Select **Azure Active Directory** > **App registrations** > *\<your application\>* > **Endpoints**.
+1. Locate the URI under **OpenID Connect metadata document**.
### Sample request
-To call the userinfo endpoint for the common authority on the public cloud, use the following:
+This request gets the OpenID configuration metadata from the `common` authority's OpenID configuration document endpoint on the Azure public cloud:
```http
GET /common/v2.0/.well-known/openid-configuration
Host: login.microsoftonline.com
```
+> [!TIP]
+> Try it! To see the OpenID configuration document for an application's `common` authority, navigate to [https://login.microsoftonline.com/common/v2.0/.well-known/openid-configuration](https://login.microsoftonline.com/common/v2.0/.well-known/openid-configuration).
+ ### Sample response
-The metadata is a simple JavaScript Object Notation (JSON) document. See the following snippet for an example. The contents are fully described in the [OpenID Connect specification](https://openid.net/specs/openid-connect-discovery-1_0.html#rfc.section.4.2).
+The configuration metadata is returned in JSON format as shown in the following example (truncated for brevity). The metadata returned in the JSON response is described in detail in the [OpenID Connect 1.0 discovery specification](https://openid.net/specs/openid-connect-discovery-1_0.html#rfc.section.4.2).
```json
{
The metadata is a simple JavaScript Object Notation (JSON) document. See the fol
    "pairwise"
  ],
  ...
}
```
-If your app has custom signing keys as a result of using the [claims-mapping](active-directory-claims-mapping.md) feature, you must append an `appid` query parameter containing the app ID in order to get a `jwks_uri` pointing to your app's signing key information. For example: `https://login.microsoftonline.com/{tenant}/v2.0/.well-known/openid-configuration?appid=6731de76-14a6-49ae-97bc-6eba6914391e` contains a `jwks_uri` of `https://login.microsoftonline.com/{tenant}/discovery/v2.0/keys?appid=6731de76-14a6-49ae-97bc-6eba6914391e`.
-
-Typically, you would use this metadata document to configure an OpenID Connect library or SDK; the library would use the metadata to do its work. However, if you're not using a pre-built OpenID Connect library, you can follow the steps in the remainder of this article to do sign-in in a web app by using the Microsoft identity platform.
+<!-- UNCOMMENT WHEN THE EXAMPLE APP REGISTRATION IS RE-ENABLED -->
+<!-- If your app has custom signing keys as a result of using [claims mapping](active-directory-claims-mapping.md), append the `appid` query parameter to include the `jwks_uri` claim that includes your app's signing key information. For example, `https://login.microsoftonline.com/{tenant}/v2.0/.well-known/openid-configuration?appid=6731de76-14a6-49ae-97bc-6eba6914391e` includes a `jwks_uri` of `https://login.microsoftonline.com/{tenant}/discovery/v2.0/keys?appid=6731de76-14a6-49ae-97bc-6eba6914391e`. -->
## Send the sign-in request
-When your web app needs to authenticate the user, it can direct the user to the `/authorize` endpoint. This request is similar to the first leg of the [OAuth 2.0 authorization code flow](v2-oauth2-auth-code-flow.md), with these important distinctions:
+To authenticate a user and request an ID token for use in your application, direct their user-agent to the Microsoft identity platform's _/authorize_ endpoint. The request is similar to the first leg of the [OAuth 2.0 authorization code flow](v2-oauth2-auth-code-flow.md) but with these distinctions:
-* The request must include the `openid` scope in the `scope` parameter.
-* The `response_type` parameter must include `id_token`.
-* The request must include the `nonce` parameter.
+* Include the `openid` scope in the `scope` parameter.
+* Specify `id_token` or `code+id_token` in the `response_type` parameter.
+* Include the `nonce` parameter.
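The three distinctions above can be sketched as a request builder (a hypothetical helper; the parameter names follow the protocol described in this article):

```python
import secrets
from urllib.parse import urlencode

def build_signin_url(tenant: str, client_id: str, redirect_uri: str) -> tuple[str, str]:
    """Build an OIDC sign-in URL; returns (url, nonce) so the nonce can be
    checked against the ID token's nonce claim later."""
    nonce = secrets.token_urlsafe(16)     # unique, random value per request
    params = {
        "client_id": client_id,
        "response_type": "id_token",      # or "code id_token" for the hybrid flow
        "redirect_uri": redirect_uri,
        "scope": "openid",                # openid is required for OIDC sign-in
        "response_mode": "form_post",
        "nonce": nonce,
    }
    url = f"https://login.microsoftonline.com/{tenant}/oauth2/v2.0/authorize?{urlencode(params)}"
    return url, nonce
```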
-> [!IMPORTANT]
-> In order to successfully request an ID token from the /authorization endpoint, the app registration in the [registration portal](https://portal.azure.com) must have the implicit grant of id_tokens enabled in the Authentication tab (which sets the `oauth2AllowIdTokenImplicitFlow` flag in the [application manifest](reference-app-manifest.md) to `true`). If it isn't enabled, an `unsupported_response` error will be returned: "The provided value for the input parameter 'response_type' isn't allowed for this client. Expected value is 'code'"
-
-For example:
+Example sign-in request (line breaks included only for readability):
```HTTP
-// Line breaks are for legibility only.
GET https://login.microsoftonline.com/{tenant}/oauth2/v2.0/authorize?
client_id=6731de76-14a6-49ae-97bc-6eba6914391e
&response_type=id_token
| `response_type` | Required | Must include `id_token` for OpenID Connect sign-in. It might also include other `response_type` values, such as `code`. |
| `redirect_uri` | Recommended | The redirect URI of your app, where authentication responses can be sent and received by your app. It must exactly match one of the redirect URIs you registered in the portal, except that it must be URL-encoded. If not present, the endpoint will pick one registered `redirect_uri` at random to send the user back to. |
| `scope` | Required | A space-separated list of scopes. For OpenID Connect, it must include the scope `openid`, which translates to the **Sign you in** permission in the consent UI. You might also include other scopes in this request for requesting consent. |
-| `nonce` | Required | A value included in the request, generated by the app, that will be included in the resulting id_token value as a claim. The app can verify this value to mitigate token replay attacks. The value typically is a randomized, unique string that can be used to identify the origin of the request. |
+| `nonce` | Required | A value generated and sent by your app in its request for an ID token. The same `nonce` value is included in the ID token returned to your app by the Microsoft identity platform. To mitigate token replay attacks, your app should verify the `nonce` value in the ID token is the same value it sent when requesting the token. The value is typically a unique, random string. |
| `response_mode` | Recommended | Specifies the method that should be used to send the resulting authorization code back to your app. Can be `form_post` or `fragment`. For web applications, we recommend using `response_mode=form_post`, to ensure the most secure transfer of tokens to your application. |
| `state` | Recommended | A value included in the request that also will be returned in the token response. It can be a string of any content you want. A randomly generated unique value typically is used to [prevent cross-site request forgery attacks](https://tools.ietf.org/html/rfc6749#section-10.12). The state also is used to encode information about the user's state in the app before the authentication request occurred, such as the page or view the user was on. |
-| `prompt` | Optional | Indicates the type of user interaction that is required. The only valid values at this time are `login`, `none`, `consent`, and `select_account`. The `prompt=login` claim forces the user to enter their credentials on that request, which negates single sign-on. The `prompt=none` parameter is the opposite, and should be paired with a `login_hint` to indicate which user must be signed in. These parameters ensure that the user isn't presented with any interactive prompt at all. If the request can't be completed silently via single sign-on (because no user is signed in, the hinted user isn't signed in, or there are multiple users signed in and no hint is provided), the Microsoft identity platform returns an error. The `prompt=consent` claim triggers the OAuth consent dialog after the user signs in. The dialog asks the user to grant permissions to the app. Finally, `select_account` shows the user an account selector, negating silent SSO but allowing the user to pick which account they intend to sign in with, without requiring credential entry. You cannot use `login_hint` and `select_account` together.|
+| `prompt` | Optional | Indicates the type of user interaction that is required. The only valid values at this time are `login`, `none`, `consent`, and `select_account`. The `prompt=login` claim forces the user to enter their credentials on that request, which negates single sign-on. The `prompt=none` parameter is the opposite, and should be paired with a `login_hint` to indicate which user must be signed in. These parameters ensure that the user isn't presented with any interactive prompt at all. If the request can't be completed silently via single sign-on, the Microsoft identity platform returns an error. Causes include no signed-in user, the hinted user isn't signed in, or multiple users are signed in but no hint was provided. The `prompt=consent` claim triggers the OAuth consent dialog after the user signs in. The dialog asks the user to grant permissions to the app. Finally, `select_account` shows the user an account selector, negating silent SSO but allowing the user to pick which account they intend to sign in with, without requiring credential entry. You can't use both `login_hint` and `select_account`.|
| `login_hint` | Optional | You can use this parameter to pre-fill the username and email address field of the sign-in page for the user, if you know the username ahead of time. Often, apps use this parameter during reauthentication, after already extracting the `login_hint` [optional claim](active-directory-optional-claims.md) from an earlier sign-in. |
| `domain_hint` | Optional | The realm of the user in a federated directory. This skips the email-based discovery process that the user goes through on the sign-in page, for a slightly more streamlined user experience. For tenants that are federated through an on-premises directory like AD FS, this often results in a seamless sign-in because of the existing login session. |
After the user authenticates and grants consent, the Microsoft identity platform
### Successful response
-A successful response when you use `response_mode=form_post` looks like this:
+A successful response when you use `response_mode=form_post` is similar to:
```HTTP
POST /myapp/ HTTP/1.1
id_token=eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsIng1dCI6Ik1uQ19WWmNB...&state=12345
| Parameter | Description |
| --- | --- |
-| `id_token` | The ID token that the app requested. You can use the `id_token` parameter to verify the user's identity and begin a session with the user. For more information about ID tokens and their contents, see the [`id_tokens` reference](id-tokens.md). |
+| `id_token` | The ID token that the app requested. You can use the `id_token` parameter to verify the user's identity and begin a session with the user. For more information about ID tokens and their contents, see the [ID token reference](id-tokens.md). |
| `state` | If a `state` parameter is included in the request, the same value should appear in the response. The app should verify that the state values in the request and response are identical. |

### Error response
-Error responses might also be sent to the redirect URI so that the app can handle them. An error response looks like this:
+Error responses might also be sent to the redirect URI so the app can handle them, for example:
```HTTP
POST /myapp/ HTTP/1.1
The following table describes error codes that can be returned in the `error` parameter:
| Error code | Description | Client action |
| --- | --- | --- |
-| `invalid_request` | Protocol error, such as a missing, required parameter. |Fix and resubmit the request. This is a development error that typically is caught during initial testing. |
-| `unauthorized_client` | The client application can't request an authorization code. |This usually occurs when the client application isn't registered in Azure AD or isn't added to the user's Azure AD tenant. The application can prompt the user with instructions to install the application and add it to Azure AD. |
+| `invalid_request` | Protocol error like a missing required parameter. |Fix and resubmit the request. This development error should be caught during application testing. |
+| `unauthorized_client` | The client application can't request an authorization code. |This error can occur when the client application isn't registered in Azure AD or isn't added to the user's Azure AD tenant. The application can prompt the user with instructions to install the application and add it to Azure AD. |
| `access_denied` | The resource owner denied consent. |The client application can notify the user that it can't proceed unless the user consents. |
-| `unsupported_response_type` |The authorization server does not support the response type in the request. |Fix and resubmit the request. This is a development error that typically is caught during initial testing. |
+| `unsupported_response_type` |The authorization server doesn't support the response type in the request. |Fix and resubmit the request. This development error should be caught during application testing. |
| `server_error` | The server encountered an unexpected error. |Retry the request. These errors can result from temporary conditions. The client application might explain to the user that its response is delayed because of a temporary error. |
| `temporarily_unavailable` | The server is temporarily too busy to handle the request. |Retry the request. The client application might explain to the user that its response is delayed because of a temporary condition. |
-| `invalid_resource` | The target resource is invalid because either it does not exist, Azure AD can't find it, or it isn't correctly configured. |This indicates that the resource, if it exists, hasn't been configured in the tenant. The application can prompt the user with instructions for installing the application and adding it to Azure AD. |
+| `invalid_resource` | The target resource is invalid because it doesn't exist, Azure AD can't find it, or it's configured incorrectly. |This error indicates that the resource, if it exists, hasn't been configured in the tenant. The application can prompt the user with instructions for installing the application and adding it to Azure AD. |
## Validate the ID token
-Just receiving an id_token isn't always sufficient to authenticate the user; you may also need to validate the id_token's signature and verify the claims in the token per your app's requirements. Like all OIDC platforms, the Microsoft identity platform uses [JSON Web Tokens (JWTs)](https://tools.ietf.org/html/rfc7519) and public key cryptography to sign ID tokens and verify that they're valid.
+Receiving an ID token in your app might not always be sufficient to fully authenticate the user. You might also need to validate the ID token's signature and verify its claims per your app's requirements. Like all OpenID providers, the Microsoft identity platform's ID tokens are [JSON Web Tokens (JWTs)](https://tools.ietf.org/html/rfc7519) signed by using public key cryptography.
+
+Web apps and web APIs that use ID tokens for authorization must validate them because such applications gate access to data. Other types of application might not benefit from ID token validation, however. Native and single-page apps (SPAs), for example, rarely benefit from ID token validation because any entity with physical access to the device or browser can potentially bypass the validation.
+
+Two examples of token validation bypass are:
+
+* Providing fake tokens or keys by modifying network traffic to the device.
+* Debugging the application and stepping over the validation logic during program execution.
-Not all apps benefit from verifying the ID token - native apps and single page apps, for instance, rarely benefit from validating the ID token. Someone with physical access to the device (or browser) can bypass the validation in many ways - from editing the web traffic to the device to provide fake tokens and keys to simply debugging the application to skip the validation logic. On the other hand, web apps and APIs using an ID token to authorization must validate the ID token carefully since they are gating access to data.
+If you validate ID tokens in your application, we recommend *not* doing so manually. Instead, use a token validation library to parse and validate tokens. Token validation libraries are available for most development languages, frameworks, and platforms.
-Once you've validated the signature of the id_token, there are a few claims you'll be required to verify. See the [`id_token` reference](id-tokens.md) for more information, including [Validating Tokens](id-tokens.md#validating-an-id-token) and [Important Information About Signing Key Rollover](active-directory-signing-key-rollover.md). We recommend making use of a library for parsing and validating tokens - there is at least one available for most languages and platforms.
+### What to validate in an ID token
-You may also wish to validate additional claims depending on your scenario. Some common validations include:
+In addition to validating the ID token's signature, you should validate several of its claims as described in [Validating an ID token](id-tokens.md#validating-an-id-token) in the [ID token reference](id-tokens.md). Also see [Important information about signing key rollover](active-directory-signing-key-rollover.md).
+
+Several other validations are common and vary by application scenario, including:
* Ensuring the user/organization has signed up for the app.
* Ensuring the user has proper authorization/privileges.
* Ensuring a certain strength of authentication has occurred, such as [multi-factor authentication](../authentication/concept-mfa-howitworks.md).
-Once you have validated the id_token, you can begin a session with the user and use the claims in the id_token to obtain information about the user in your app. This information can be used for display, records, personalization, etc.
+Once you've validated the ID token, you can begin a session with the user and use the information in the token's claims for app personalization, display, or for storing their data.
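As a sketch of the claim checks described above (not a substitute for a token validation library, which should handle signature verification), the following Python example decodes a JWT payload and checks the `aud`, `exp`, and `nonce` claims. The helper names are hypothetical:

```python
import base64
import json
import time

def decode_jwt_payload(jwt):
    """Decode a JWT's payload WITHOUT verifying its signature.
    Use a token validation library for signature verification."""
    payload_b64 = jwt.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

def check_claims(claims, expected_audience, expected_nonce):
    """Check a few of the claims described in the ID token reference."""
    if claims.get("aud") != expected_audience:
        return False  # token was issued to a different application
    if claims.get("exp", 0) <= time.time():
        return False  # token has expired
    if claims.get("nonce") != expected_nonce:
        return False  # nonce mismatch suggests token replay
    return True
```

A real app would also check the `iss` (issuer) and `iat` claims and, above all, the token's signature.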
## Protocol diagram: Access token acquisition
-Many web apps need to not only sign the user in, but also to access a web service on behalf of the user by using OAuth. This scenario combines OpenID Connect for user authentication while simultaneously getting an authorization code that you can use to get access tokens if you are using the OAuth authorization code flow.
+Many applications need not only to sign in a user, but also access a protected resource like a web API on behalf of the user. This scenario combines OpenID Connect to get an ID token for authenticating the user and OAuth 2.0 to get an access token for a protected resource.
-The full OpenID Connect sign-in and token acquisition flow looks similar to the next diagram. We describe each step in detail in the next sections of the article.
+The full OpenID Connect sign-in and token acquisition flow looks similar to this diagram:
![OpenID Connect protocol: Token acquisition](./media/v2-protocols-oidc/convergence-scenarios-webapp-webapi.svg)
-## Get an access token to call UserInfo
+## Get an access token for the UserInfo endpoint
+
+In addition to the ID token, the authenticated user's information is also made available at the OIDC [UserInfo endpoint](userinfo.md).
-To acquire a token for the OIDC UserInfo endpoint, modify the sign-in request:
+To get an access token for the OIDC UserInfo endpoint, modify the sign-in request as described here:
```HTTP
// Line breaks are for legibility only.
GET https://login.microsoftonline.com/{tenant}/oauth2/v2.0/authorize?
-client_id=6731de76-14a6-49ae-97bc-6eba6914391e // Your registered Application ID
-&response_type=id_token%20token // this will return both an id_token and an access token
-&redirect_uri=http%3A%2F%2Flocalhost%2Fmyapp%2F // Your registered redirect URI, URL encoded
+client_id=6731de76-14a6-49ae-97bc-6eba6914391e // Your app registration's Application (client) ID
+&response_type=id_token%20token // Requests both an ID token and access token
+&redirect_uri=http%3A%2F%2Flocalhost%2Fmyapp%2F // Your application's redirect URI (URL-encoded)
&response_mode=form_post // 'form_post' or 'fragment'
-&scope=openid+profile+email // `openid` is required. `profile` and `email` provide additional information in the UserInfo endpoint the same way they do in an ID token.
-&state=12345 // Any value, provided by your app
-&nonce=678910 // Any value, provided by your app
+&scope=openid+profile+email // 'openid' is required; 'profile' and 'email' provide information in the UserInfo endpoint as they do in an ID token.
+&state=12345 // Any value - provided by your app
+&nonce=678910 // Any value - provided by your app
```
-You can also use the [authorization code flow](v2-oauth2-auth-code-flow.md), the [device code flow](v2-oauth2-device-code.md), or a [refresh token](v2-oauth2-auth-code-flow.md#refresh-the-access-token) in place of `response_type=token` to get a token for your app.
+You can use the [authorization code flow](v2-oauth2-auth-code-flow.md), the [device code flow](v2-oauth2-device-code.md), or a [refresh token](v2-oauth2-auth-code-flow.md#refresh-the-access-token) in place of `response_type=token` to get an access token for your app.
+<!-- UNCOMMENT WHEN/IF THE TEST APP REGISTRATION IS RE-ENABLED -->
+<!--
> [!TIP]
> Click the following link to execute this request. After you sign in, your browser is redirected to `https://localhost/myapp/`, with an ID token and a token in the address bar. Note that this request uses `response_mode=fragment` for demonstration purposes only - for a webapp we recommend using `form_post` for additional security where possible.
> <a href="https://login.microsoftonline.com/common/oauth2/v2.0/authorize?client_id=6731de76-14a6-49ae-97bc-6eba6914391e&response_type=id_token%20token&redirect_uri=http%3A%2F%2Flocalhost%2Fmyapp%2F&response_mode=fragment&scope=openid+profile+email&state=12345&nonce=678910" target="_blank">https://login.microsoftonline.com/common/oauth2/v2.0/authorize...</a>
+-->
### Successful token response
-A successful response from using `response_mode=form_post` looks like this:
+A successful response from using `response_mode=form_post`:
```HTTP
POST /myapp/ HTTP/1.1
Response parameters mean the same thing regardless of the flow used to acquire them:
| `access_token` | The token that will be used to call the UserInfo endpoint. |
| `token_type` | Always "Bearer" |
| `expires_in` | How long until the access token expires, in seconds. |
-| `scope` | The permissions granted on the access token. Note that since the UserInfo endpoint is hosted on MS Graph, there may be additional Graph scopes listed here (e.g. user.read) if they were previously granted to the app. That's because a token for a given resource always includes every permission currently granted to the client. |
-| `id_token` | The ID token that the app requested. You can use the ID token to verify the user's identity and begin a session with the user. You'll find more details about ID tokens and their contents in the [`id_tokens` reference](id-tokens.md). |
+| `scope` | The permissions granted on the access token. Because the UserInfo endpoint is hosted on Microsoft Graph, it's possible for `scope` to contain others previously granted to the application (for example, `User.Read`). |
+| `id_token` | The ID token that the app requested. You can use the ID token to verify the user's identity and begin a session with the user. You'll find more details about ID tokens and their contents in the [ID token reference](id-tokens.md). |
| `state` | If a state parameter is included in the request, the same value should appear in the response. The app should verify that the state values in the request and response are identical. |

[!INCLUDE [remind-not-to-validate-access-tokens](includes/remind-not-to-validate-access-tokens.md)]

### Error response
-Error responses might also be sent to the redirect URI so that the app can handle them appropriately. An error response looks like this:
+Error responses might also be sent to the redirect URI so that the app can handle them appropriately:
```HTTP
POST /myapp/ HTTP/1.1
Review the [UserInfo documentation](userinfo.md#calling-the-api) to look over how to call it.
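As a sketch, a call to the UserInfo endpoint attaches the access token as a Bearer header. The endpoint URL and token value below are illustrative, and the request is only constructed, not sent:

```python
from urllib.request import Request

# Illustrative value; a real app uses the access_token from the token response.
access_token = "eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiJ9.example.signature"

# Prepare a GET request to the UserInfo endpoint (hosted on Microsoft Graph).
userinfo_request = Request(
    "https://graph.microsoft.com/oidc/userinfo",
    headers={"Authorization": f"Bearer {access_token}"},
)
```

Sending the request with `urllib.request.urlopen(userinfo_request)` returns the authenticated user's claims as JSON.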
## Send a sign-out request
-When you want to sign out the user from your app, it isn't sufficient to clear your app's cookies or otherwise end the user's session. You must also redirect the user to the Microsoft identity platform to sign out. If you don't do this, the user reauthenticates to your app without entering their credentials again, because they will have a valid single sign-in session with the Microsoft identity platform.
+To sign out a user, perform both of these operations:
+
+* Redirect the user's user-agent to the Microsoft identity platform's logout URI.
+* Clear your app's cookies or otherwise end the user's session in your application.
+
+If you fail to perform either operation, the user may remain authenticated and not be prompted to sign in the next time they use your app.
-You can redirect the user to the `end_session_endpoint` (which supports both HTTP GET and POST requests) listed in the OpenID Connect metadata document:
+Redirect the user-agent to the `end_session_endpoint` as shown in the OpenID Connect configuration document. The `end_session_endpoint` supports both HTTP GET and POST requests.
```HTTP
GET https://login.microsoftonline.com/common/oauth2/v2.0/logout?
post_logout_redirect_uri=http%3A%2F%2Flocalhost%2Fmyapp%2F
| Parameter | Condition | Description |
| --- | --- | --- |
| `post_logout_redirect_uri` | Recommended | The URL that the user is redirected to after successfully signing out. If the parameter isn't included, the user is shown a generic message that's generated by the Microsoft identity platform. This URL must match one of the redirect URIs registered for your application in the app registration portal. |
-| `logout_hint` | Optional | Enables sign-out to occur without prompting the user to select an account. To use `logout_hint`, enable the `login_hint` [optional claim](active-directory-optional-claims.md) in your client application and use the value of the `login_hint` optional claim as the `logout_hint` parameter. Do not use UPNs or phone numbers as the value of the `logout_hint` parameter.
+| `logout_hint` | Optional | Enables sign-out to occur without prompting the user to select an account. To use `logout_hint`, enable the `login_hint` [optional claim](active-directory-optional-claims.md) in your client application and use the value of the `login_hint` optional claim as the `logout_hint` parameter. Don't use UPNs or phone numbers as the value of the `logout_hint` parameter. |
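The sign-out request above can be sketched as a small URL builder in Python; the `build_signout_url` helper and its values are hypothetical:

```python
from urllib.parse import urlencode

def build_signout_url(tenant, post_logout_redirect_uri, logout_hint=None):
    """Build an end_session_endpoint URL (illustrative sketch)."""
    params = {"post_logout_redirect_uri": post_logout_redirect_uri}
    if logout_hint:
        params["logout_hint"] = logout_hint  # skips the account picker on sign-out
    base = f"https://login.microsoftonline.com/{tenant}/oauth2/v2.0/logout"
    return f"{base}?{urlencode(params)}"

url = build_signout_url("common", "http://localhost/myapp/")
```

Remember to also end the user's session in your own app when redirecting to this URL.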
## Single sign-out
When you redirect the user to the `end_session_endpoint`, the Microsoft identity
## Next steps
-* Review the [UserInfo documentation](userinfo.md)
-* Learn how to [customize the values in a token](active-directory-claims-mapping.md) with data from your on-premises systems.
-* Learn how to [include additional standard claims in tokens](active-directory-optional-claims.md).
+* Review the [UserInfo endpoint documentation](userinfo.md).
+* [Populate claim values in a token](active-directory-claims-mapping.md) with data from on-premises systems.
+* [Include your own claims in tokens](active-directory-optional-claims.md).
active-directory Howto Verifiable Credentials Partner Au10tix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/howto-verifiable-credentials-partner-au10tix.md
+
+ Title: Configure Verified ID by AU10TIX as your Identity Verification Partner
+description: This article shows you the steps you need to follow to configure AU10TIX as your identity verification partner
++++++ Last updated : 08/26/2022+
+# Customer intent: As a developer, I'm looking for information about the open standards that are supported by Microsoft Entra Verified ID.
++
+# Configure Verified ID by AU10TIX as your Identity Verification Partner
+
+In this article, we cover the steps needed to integrate Microsoft Entra Verified ID with [AU10TIX](https://www.au10tix.com/). AU10TIX is a global leader in identity verification, enabling companies to scale up their business by accelerating onboarding scenarios and ongoing verification throughout the customer lifecycle. It's an automated solution that verifies ID documents and biometrics in eight seconds or less. AU10TIX supports the verification of documents in over 190 countries, reading documents in their regional languages.
+
+To learn more about AU10TIX and its complete set of solutions, visit https://www.au10tix.com/.
+
+## Prerequisites
+
+Before you continue with the steps below, you need to meet the following requirements:
+
+- A tenant [configured](verifiable-credentials-configure-tenant.md) for Entra Verified ID service.
+ - If you don't have an existing tenant, you can [create an Azure account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- You need to have completed the onboarding process with AU10TIX.
+ - To create an AU10TIX account, submit the form on this [page](https://www.au10tix.com/solutions/microsoft-azure-active-directory-verifiable-credentials-program/).
++
+>[!IMPORTANT]
+> Before you proceed, you must have received the URL from AU10TIX for users to be issued Verified IDs. If you haven't received it yet, follow up with AU10TIX before you attempt the steps documented below.
+
+## Scenario description
+
+When onboarding users, you can remove the need for error-prone manual onboarding steps by using Verified ID with AU10TIX account onboarding. Verified IDs can be used to digitally onboard employees, students, citizens, or others to securely access resources and services. For example, rather than an employee needing to go to a central office to activate an employee badge, they can use a Verified ID to verify their identity to activate a badge that is delivered to them remotely. Rather than a citizen receiving a code they must redeem to access governmental services, they can use a Verified ID to prove their identity and gain access.
+++++
+## Configure your Application to use AU10TIX Verified ID
+
+To incorporate identity verification into your apps by using the AU10TIX “Government Issued ID – Global” Verified ID, follow these steps:
+
+### Part 1
+
+As a developer, you can share these steps with your tenant administrator to obtain the verification request URL and body for your application or website to request Verified IDs from your users.
+
+1. Go to [Microsoft Entra portal -> Verified ID](https://entra.microsoft.com/#view/Microsoft_AAD_DecentralizedIdentity/ResourceOverviewBlade).
+
+ >[!NOTE]
+ > Make sure this is the tenant you set up for Verified ID per the pre-requisites.
+
+1. Go to QuickStart > Verification Request > [Start](https://entra.microsoft.com/#view/Microsoft_AAD_DecentralizedIdentity/QuickStartVerifierBlade)
+1. Choose **Select Issuer**.
+1. Look for AU10TIX in the **Search/select issuers** drop-down.
+ :::image type="content" source="media/verified-id-partner-au10tix/select-issuers.png" alt-text="Screenshot of the portal section used to choose issuers.":::
+1. Check the **Government Issued ID – Global** or other credential type.
+1. Select **Add** and then select **Review**.
+1. Download the request body and copy the POST API request URL.
+
+### Part 2
+
+As a developer, you now have the request URL and body from your tenant admin. Follow these steps to update your application or website:
+
+1. Add the request URL and body to your application or website to request Verified IDs from your users. If you're using [one of the sample apps](https://aka.ms/vcsample), replace the contents of the presentation_request_config.json with the request body you obtained.
+1. Replace the values for "url", "state", and "api-key" with your own values.
+1. [Grant permissions](verifiable-credentials-configure-tenant.md#grant-permissions-to-get-access-tokens) to your app to obtain access token for the Verified ID service request service principal.
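As a sketch of the replacement in the steps above, the following Python snippet loads a downloaded request body and replaces the placeholder values. The JSON structure and all values shown are hypothetical; use the actual request body you downloaded from the portal:

```python
import json

# Hypothetical downloaded request body; the sample apps read it from
# presentation_request_config.json.
request_body = json.loads("""
{
  "callback": {
    "url": "REPLACE_WITH_CALLBACK_URL",
    "state": "REPLACE_WITH_STATE",
    "headers": { "api-key": "REPLACE_WITH_API_KEY" }
  }
}
""")

# Replace the placeholder values with your own.
request_body["callback"]["url"] = "https://contoso.example/api/request-callback"
request_body["callback"]["state"] = "11111111-2222-3333-4444-555555555555"
request_body["callback"]["headers"]["api-key"] = "my-api-key"
```

After the replacement, write the updated JSON back to the configuration file your app reads.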
+
+## Test the user flow
+
+User flow is specific to your application or website. However, if you're using one of the sample apps, follow the steps outlined as part of the [sample app's documentation](https://aka.ms/vcsample).
+
+## Next steps
+
+- [Verifiable credentials admin API](admin-api.md)
+- [Request Service REST API issuance specification](issuance-request-api.md)
active-directory Howto Verifiable Credentials Partner Lexisnexis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/howto-verifiable-credentials-partner-lexisnexis.md
+
+ Title: Configure LexisNexis Risk Solutions as an identity verification partner using Verified ID
+description: This article shows you the steps you need to follow to configure LexisNexis as your identity verification partner
++++++ Last updated : 08/26/2022+
+# Customer intent: As a developer, I'm looking for information about the open standards that are supported by Microsoft Entra Verified ID.
++
+# Configure Verified ID with LexisNexis as your Identity Verification Partner
+
+You can use Entra Verified ID with LexisNexis Risk Solutions to enable faster onboarding by replacing some human interactions. Verifiable Credentials (VCs) can be used to onboard employees, students, citizens, or others to access services.
+
+## Prerequisites
+
+- A tenant [configured](verifiable-credentials-configure-tenant.md) for Entra Verified ID service.
+ - If you don't have an existing tenant, you can [create an Azure account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- Your tenant should also have completed the LexisNexis onboarding process.
+ - To create a LexisNexis account, request a [demo](https://solutions.risk.lexisnexis.com/did-microsoft). Expect a response from LexisNexis Risk Solutions within 48 hours.
+
+>[!IMPORTANT]
+> Before you proceed, you must have received the URL from LexisNexis Risk Solutions for users to be issued Verified IDs. If you haven't received it yet, follow up with LexisNexis before you attempt the steps documented below.
+
+## Scenario description
+
+Verifiable Credentials can be used to onboard employees, students, citizens, or others to access services. For example, rather than an employee needing to go to a central office to activate an employee badge, they can use a verifiable credential to verify their identity to activate a badge that is delivered to them remotely. Rather than a citizen receiving a code they must redeem to access governmental services, they can use a VC to prove their identity and gain access.
+++
+## Configure your application to use LexisNexis
+
+To incorporate identity verification into your Apps using LexisNexis Verified ID, follow these steps.
+
+### Part 1
+
+As a developer, you'll provide the steps below to your tenant administrator. The instructions help them obtain the verification request URL and request body that your application or website uses to request verifiable credentials from your users.
+
+1. Go to [Microsoft Entra portal -> Verified ID](https://entra.microsoft.com/#view/Microsoft_AAD_DecentralizedIdentity/ResourceOverviewBlade).
+ >[!Note]
+ > Make sure this is the tenant you set up for Verified ID per the pre-requisites.
+1. Go to [Quickstart-> Verification Request -> Start](https://entra.microsoft.com/#view/Microsoft_AAD_DecentralizedIdentity/QuickStartVerifierBlade).
+1. Select **Select Issuer**.
+1. Look for LexisNexis in the Search/select issuers drop-down.
+
+ ![Screenshot of the select issuer section of the portal showing LexisNexis as the choice.](media/verified-id-partner-lexisnexis/select-issuer.png)
+
+1. Check the credential type you've discussed with your LexisNexis customer success manager for your specific needs.
+1. Choose **Add** and then choose **Review**.
+1. Download the request body and copy the POST API request URL.
+
+### Part 2
+
+As a developer, you now have the request URL and body from your tenant admin. Follow these steps to update your application or website:
+
+1. Add the request URL and body to your application or website to request Verified IDs from your users.
+ >[!NOTE]
+ > If you're using [one of the sample apps](https://aka.ms/vcsample), replace the contents of the presentation_request_config.json with the request body you obtained.
+1. Replace the values for the "url", "state", and "api-key" with your respective values.
+1. Grant your app [permissions](verifiable-credentials-configure-tenant.md#grant-permissions-to-get-access-tokens) to obtain an access token for the Verified ID service request service principal.
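Step 2 above can be sketched with a small helper. This is an illustrative Python sketch, not part of the sample apps; the function name and the exact callback layout (`url`, `state`, and the `api-key` header) are assumptions based on the field names mentioned in the steps, so match them to the request body you actually downloaded:

```python
import json

def patch_request_config(config: dict, url: str, state: str, api_key: str) -> dict:
    # Replace the callback values in the downloaded request body.
    callback = config.setdefault("callback", {})
    callback["url"] = url
    callback["state"] = state
    callback.setdefault("headers", {})["api-key"] = api_key
    return config

# Stand-in for the downloaded request body file.
config = json.loads('{"callback": {"url": "", "state": "", "headers": {"api-key": ""}}}')
patched = patch_request_config(
    config, "https://contoso.example/api/callback", "state-123", "key-456")
print(patched["callback"]["url"])
```

In a real app you would read the downloaded file from disk, patch it once at startup, and send it in the POST to the request URL.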
+
+## Test the user flow
+
+The user flow is specific to your application or website. However, if you're using [one of the sample apps](https://aka.ms/vcsample), follow the steps in [Run and test the sample app](https://aka.ms/vcsample).
+
+## Next steps
+
+- [Verifiable credentials admin API](admin-api.md)
+- [Request Service REST API issuance specification](issuance-request-api.md)
active-directory Partner Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/partner-gallery.md
+
+ Title: Identity Proofing and Verification (IDV) Partner gallery for Entra Verified ID
+description: Learn how to integrate with our IDV partners to tailor your end-user experience to your needs.
+++++ Last updated : 08/26/2022+++
+# Entra Verified ID IDV partners
+
+Our IDV partner network extends Microsoft Entra Verified ID's capabilities to help you build seamless end-user experiences. With Verified ID, you can integrate with IDV partners to enable remote onboarding using their identity verification and proofing services.
+
+To be considered for inclusion in the Entra Verified ID partner documentation, submit your application [request](https://aka.ms/isvconnectvc).
+
+## Partner list
+
+| IDV partner | Description | Integration walkthroughs |
+|:-|:--|:--|
+|![Screenshot of au10tix logo.](media/partner-gallery/au10tix.png) | [AU10TIX](https://www.au10tix.com/solutions/microsoft-azure-active-directory-verifiable-credentials-program) improves verifiability while protecting privacy for businesses, employees, contractors, vendors, and customers. | [Configure Verified ID by AU10TIX as your Identity Verification Partner](https://aka.ms/au10tixvc). |
+| ![Screenshot of a LexisNexis logo.](media/partner-gallery/lexisnexis.png) | [LexisNexis](https://solutions.risk.lexisnexis.com/did-microsoft) Risk Solutions verifiable credentials enable faster onboarding for employees, students, citizens, or others to access services. | [Configure Verified ID by LexisNexis Risk Solutions as your Identity Verification Partner](https://aka.ms/lexisnexisvc). |
+| ![Screenshot of an Onfido logo.](media/partner-gallery/onfido.jpeg) | [Onfido](https://onfido.com/landing/onfido-microsoft-idv-service/) Start issuing and accepting verifiable credentials in minutes. With verifiable credentials and Onfido you can verify a person's identity while respecting privacy. Digitally validate information on a person's ID or their biometrics.| Not Available |
+| ![Screenshot of a Vu logo.](media/partner-gallery/vu.png) | [Vu Security](https://landings.vusecurity.com/microsoft-verifiable-credentials) Verifiable credentials with just a selfie and your ID.| Not Available |
+| ![Screenshot of a Jumio logo.](media/partner-gallery/jumio.jpeg) | [Jumio](https://www.jumio.com/microsoft-verifiable-credentials/) is helping to support a new form of digital identity by Microsoft based on verifiable credentials and decentralized identifiers standards to let consumers verify once and use everywhere.| Not Available |
+| ![Screenshot of an Idemia logo.](media/partner-gallery/idemia.png) | [Idemia](https://na.idemia.com/identity/verifiable-credentials/) Integration with Verified ID enables "Verify once, use everywhere" functionality.| Not Available |
+| ![Screenshot of a Acuant logo.](media/partner-gallery/acuant.png) | [Acuant](https://www.acuant.com/microsoft-acuant-verifiable-credentials-my-digital-id/) - My Digital ID - Create Your Digital Identity Once, Use It Everywhere.| Not Available |
+| ![Screenshot of a Clear logo.](media/partner-gallery/clear.jpeg) | [Clear](https://ir.clearme.com/news-events/press-releases/detail/25/clear-collaborates-with-microsoft-to-create-more-secure) Collaborates with Microsoft to Create More Secure Digital Experience Through Verification Credential.| Not Available |
+
+## Next steps
+
+Select a partner in the table above to learn how to integrate their solution with your application.
active-directory Verifiable Credentials Configure Issuer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/verifiable-credentials-configure-issuer.md
Previously updated : 08/16/2022 Last updated : 08/26/2022 # Customer intent: As an enterprise, we want to enable customers to manage information about themselves by using verifiable credentials.
The following diagram illustrates the Microsoft Entra Verified ID architecture a
- To clone the repository that hosts the sample app, install [GIT](https://git-scm.com/downloads). - [Visual Studio Code](https://code.visualstudio.com/Download), or similar code editor. - [.NET 5.0](https://dotnet.microsoft.com/download/dotnet/5.0).-- Download [ngrok](https://ngrok.com/) and sign up for a free account. If you can't use `ngrok` in your organization, please read this [FAQ](verifiable-credentials-faq.md#i-can-not-use-ngrok-what-do-i-do).
+- Download [ngrok](https://ngrok.com/) and sign up for a free account. If you can't use `ngrok` in your organization, read this [FAQ](verifiable-credentials-faq.md#i-can-not-use-ngrok-what-do-i-do).
- A mobile device with Microsoft Authenticator: - Android version 6.2206.3973 or later installed. - iOS version 6.6.2 or later installed.
In this step, you create the verified credential expert card by using Microsoft
``` 1. Copy the following JSON and paste it in the **Rules definition** textbox
- ```JSON
- {
- "attestations": {
- "idTokenHints": [
- {
- "mapping": [
- {
- "outputClaim": "firstName",
- "required": true,
- "inputClaim": "$.given_name",
- "indexed": false
- },
+
+ ```JSON
+ {
+ "attestations": {
+ "idTokenHints": [
{
- "outputClaim": "lastName",
- "required": true,
- "inputClaim": "$.family_name",
- "indexed": false
+ "mapping": [
+ {
+ "outputClaim": "firstName",
+ "required": true,
+ "inputClaim": "$.given_name",
+ "indexed": false
+ },
+ {
+ "outputClaim": "lastName",
+ "required": true,
+ "inputClaim": "$.family_name",
+ "indexed": false
+ }
+ ],
+ "required": false
} ],
- "required": false
+ "validityInterval": 2592000,
+ "vc": {
+ "type": [
+ "VerifiedCredentialExpert"
+ ]
+ }
}
- ],
- "validityInterval": 2592000,
- "vc": {
- "type": [
- "VerifiedCredentialExpert"
- ]
}
- }
- }
- ```
+ ```
1. Select **Create**.
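The rules definition pasted above can also be built programmatically, which is useful if you template credential definitions. A minimal Python sketch that mirrors the same structure (the helper name is illustrative):

```python
def build_rules_definition() -> dict:
    # Mirror the rules definition shown above: map two claims from the
    # ID token hint and declare the credential type.
    mapping = [
        {"outputClaim": "firstName", "required": True,
         "inputClaim": "$.given_name", "indexed": False},
        {"outputClaim": "lastName", "required": True,
         "inputClaim": "$.family_name", "indexed": False},
    ]
    return {
        "attestations": {
            "idTokenHints": [{"mapping": mapping, "required": False}]
        },
        "validityInterval": 2592000,  # 30 days, in seconds
        "vc": {"type": ["VerifiedCredentialExpert"]},
    }

rules = build_rules_definition()
print(rules["vc"]["type"])
```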
aks Use Azure Ad Pod Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-azure-ad-pod-identity.md
Title: Use Azure Active Directory pod-managed identities in Azure Kubernetes Ser
description: Learn how to use Azure AD pod-managed identities in Azure Kubernetes Service (AKS) Previously updated : 3/12/2021 Last updated : 8/27/2022
You must have the following resource installed:
* A maximum of 200 pod identities are allowed for a cluster. * A maximum of 200 pod identity exceptions are allowed for a cluster. * Pod-managed identities are available on Linux node pools only.
+* This feature is only supported for clusters backed by Virtual Machine Scale Sets.
### Register the `EnablePodIdentityPreview`
aks Use Network Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-network-policies.md
Azure provides two ways to implement Network Policy. You choose a Network Policy
* Azure's own implementation, called *Azure Network Policy Manager (NPM)*. * *Calico Network Policies*, an open-source network and network security solution founded by [Tigera][tigera].
-Azure NPM for Linux uses Linux *IPTables* and Azure NPM for Windows uses *Host Network Service (HNS) ACLPolicies* to enforce the specified policies. Policies are translated into sets of allowed and disallowed IP pairs. These pairs are then programmed as IPTable/HNS ACLPolicy filter rules.
+Azure NPM for Linux uses Linux *IPTables* to enforce the specified policies. Policies are translated into sets of allowed and disallowed IP pairs. These pairs are then programmed as IPTable filter rules.
## Differences between Azure NPM and Calico Network Policy and their capabilities | Capability | Azure NPM | Calico Network Policy | ||-|--|
-| Supported platforms | Linux, Windows Server 2022 | Linux, Windows Server 2019 and 2022 |
+| Supported platforms | Linux | Linux, Windows Server 2019 and 2022 |
| Supported networking options | Azure CNI | Azure CNI (Linux, Windows Server 2019 and 2022) and kubenet (Linux) | | Compliance with Kubernetes specification | All policy types supported | All policy types supported | | Additional features | None | Extended policy model consisting of Global Network Policy, Global Network Set, and Host Endpoint. For more information on using the `calicoctl` CLI to manage these extended features, see [calicoctl user reference][calicoctl]. | | Support | Supported by Azure support and Engineering team | Calico community support. For more information on additional paid support, see [Project Calico support options][calico-support]. | | Logging | Logs available with **kubectl log -n kube-system <network-policy-pod>** command | For more information, see [Calico component logs][calico-logs] |
-## Limitations:
-
-Azure Network Policy Manager(NPM) does not support IPv6. Otherwise, Azure NPM fully supports the network policy spec in Linux.
-* In Windows, Azure NPM does not support the following:
- * named ports
- * SCTP protocol
- * negative match label or namespace selectors (e.g. all labels except "debug=true")
- * "except" CIDR blocks (a CIDR with exceptions)
-
->[!NOTE]
-> * Azure NPM pod logs will record an error if an unsupported policy is created.
- ## Create an AKS cluster and enable Network Policy To see network policies in action, let's create an AKS cluster that supports network policy and then work on adding policies.
The following example script:
Instead of using a system-assigned identity, you can also use a user-assigned identity. For more information, see [Use managed identities](use-managed-identity.md).
-### Create an AKS cluster with Azure NPM enabled - Linux only
+### Create an AKS cluster with Azure NPM enabled
In this section, we will work on creating a cluster with Linux node pools and Azure NPM enabled.
az aks create \
--network-policy azure ```
-### Create an AKS cluster with Azure NPM enabled - Windows Server 2022 (Preview)
-
-In this section, we will work on creating a cluster with Windows node pools and Azure NPM enabled.
-
-Please execute the following commands prior to creating a cluster:
-
-```azurecli
- az extension add --name aks-preview
- az extension update --name aks-preview
- az feature register --namespace Microsoft.ContainerService --name AKSWindows2022Preview
- az feature register --namespace Microsoft.ContainerService --name WindowsNetworkPolicyPreview
- az provider register -n Microsoft.ContainerService
-```
-
-> [!NOTE]
-> At this time, Azure NPM with Windows nodes is available on Windows Server 2022 only
->
-
-Now, you should replace the values for *$RESOURCE_GROUP_NAME*, *$CLUSTER_NAME* and *$WINDOWS_USERNAME* variables.
-
-```azurecli-interactive
-$RESOURCE_GROUP_NAME=myResourceGroup-NP
-$CLUSTER_NAME=myAKSCluster
-$WINDOWS_USERNAME=myWindowsUserName
-$LOCATION=canadaeast
-```
-
-Create a username to use as administrator credentials for your Windows Server containers on your cluster. The following command prompts you for a username. Set it to `$WINDOWS_USERNAME`(remember that the commands in this article are entered into a BASH shell).
-
-```azurecli-interactive
-echo "Please enter the username to use as administrator credentials for Windows Server containers on your cluster: " && read WINDOWS_USERNAME
-```
-
-Use the following command to create a cluster :
-
-```azurecli
-az aks create \
- --resource-group $RESOURCE_GROUP_NAME \
- --name $CLUSTER_NAME \
- --node-count 1 \
- --windows-admin-username $WINDOWS_USERNAME \
- --network-plugin azure \
- --network-policy azure
-```
-
-It takes a few minutes to create the cluster. By default, your cluster is created with only a Linux node pool. If you would like to use Windows node pools, you can add one. For example:
-
-```azurecli
-az aks nodepool add \
- --resource-group $RESOURCE_GROUP_NAME \
- --cluster-name $CLUSTER_NAME \
- --os-type Windows \
- --name npwin \
- --node-count 1
-```
-- ### Create an AKS cluster for Calico network policies Create the AKS cluster and specify *azure* for the network plugin, and *calico* for the Network Policy. Using *calico* as the Network Policy enables Calico networking on both Linux and Windows node pools.
automanage Quick Go Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automanage/quick-go-sdk.md
+
+ Title: Azure Quickstart SDK for Go
+description: Create configuration profile assignments using the Go SDK for Automanage.
++++ Last updated : 08/24/2022+++
+# Quickstart: Enable Azure Automanage for virtual machines using Go
+
+Azure Automanage allows users to seamlessly apply Azure best practices to their virtual machines. This quickstart guide will help you apply a Best Practices Configuration profile to an existing virtual machine using the [azure-sdk-for-go repo](https://github.com/Azure/azure-sdk-for-go).
+
+## Prerequisites
+
+- An active [Azure Subscription](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go/)
+- An existing [Virtual Machine](../virtual-machines/windows/quick-create-portal.md)
+
+> [!NOTE]
+> Free trial accounts do not have access to the virtual machines used in this tutorial. Please upgrade to a Pay-As-You-Go subscription.
+
+> [!IMPORTANT]
+> You need to have the **Contributor** role on the resource group containing your VMs to enable Automanage. If you are enabling Automanage for the first time on a subscription, you need the following permissions: **Owner** role or **Contributor** along with **User Access Administrator** roles on your subscription.
+
+## Install required packages
+
+For this demo, both the **Azure Identity** and **Azure Automanage** packages are required.
+
+```
+go get "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/automanage/armautomanage"
+go get "github.com/Azure/azure-sdk-for-go/sdk/azidentity"
+```
+
+## Import packages
+
+Import the **Azure Identity** and **Azure Automanage** packages into the script:
+
+```go
+import (
+    "context"
+
+    "github.com/Azure/azure-sdk-for-go/sdk/azidentity"
+    "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/automanage/armautomanage"
+)
+```
+
+## Authenticate to Azure and create an Automanage client
+
+Use the **Azure Identity** package to authenticate to Azure and then create an Automanage Client:
+
+```go
+credential, err := azidentity.NewDefaultAzureCredential(nil)
+assignmentClient, err := armautomanage.NewConfigurationProfileAssignmentsClient("<subscription ID>", credential, nil)
+```
+
+## Enable a best practices configuration profile on an existing virtual machine
+
+```go
+configProfileId := "/providers/Microsoft.Automanage/bestPractices/AzureBestPracticesProduction"
+
+properties := armautomanage.ConfigurationProfileAssignmentProperties{
+ ConfigurationProfile: &configProfileId,
+}
+
+assignment := armautomanage.ConfigurationProfileAssignment{
+ Properties: &properties,
+}
+
+// the assignment name must be "default"
+newAssignment, err := assignmentClient.CreateOrUpdate(context.Background(), "default", "resourceGroupName", "vmName", assignment, nil)
+```
+
+## Next steps
+
+Learn how to conduct more operations with the Go Automanage client by visiting the [azure-sdk-for-go repo](https://github.com/Azure/azure-sdk-for-go/blob/main/sdk/resourcemanager/automanage/armautomanage/).
+
automanage Quick Java Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automanage/quick-java-sdk.md
+
+ Title: Azure Quickstart SDK for Java
+description: Create configuration profile assignments using the Java SDK for Automanage.
++++ Last updated : 08/24/2022+++
+# Quickstart: Enable Azure Automanage for virtual machines using Java
+
+Azure Automanage allows users to seamlessly apply Azure best practices to their virtual machines. This quickstart guide will help you apply a Best Practices Configuration profile to an existing virtual machine using the [azure-sdk-for-java repo](https://github.com/Azure/azure-sdk-for-java).
+
+## Prerequisites
+
+- [Java Development Kit (JDK)](https://www.oracle.com/java/technologies/downloads/#java8) version 8+
+- An active [Azure Subscription](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go/)
+- An existing [Virtual Machine](../virtual-machines/windows/quick-create-portal.md)
+
+> [!NOTE]
+> Free trial accounts do not have access to the virtual machines used in this tutorial. Please upgrade to a Pay-As-You-Go subscription.
+
+> [!IMPORTANT]
+> You need to have the **Contributor** role on the resource group containing your VMs to enable Automanage. If you are enabling Automanage for the first time on a subscription, you need the following permissions: **Owner** role or **Contributor** along with **User Access Administrator** roles on your subscription.
+
+## Add required dependencies
+
+Add the **Azure Identity** and **Azure Automanage** dependencies to the `pom.xml`.
+
+```xml
+<!-- https://mvnrepository.com/artifact/com.azure/azure-identity -->
+<dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-identity</artifactId>
+ <version>1.6.0-beta.1</version>
+ <scope>test</scope>
+</dependency>
+
+<!-- https://mvnrepository.com/artifact/com.azure.resourcemanager/azure-resourcemanager-automanage -->
+<dependency>
+ <groupId>com.azure.resourcemanager</groupId>
+ <artifactId>azure-resourcemanager-automanage</artifactId>
+ <version>1.0.0-beta.1</version>
+</dependency>
+```
+
+## Authenticate to Azure and create an Automanage client
+
+Use the **Azure Identity** package to authenticate to Azure and then create an Automanage Client:
+
+```java
+AzureProfile profile = new AzureProfile(AzureEnvironment.AZURE);
+TokenCredential credential = new DefaultAzureCredentialBuilder()
+ .authorityHost(profile.getEnvironment().getActiveDirectoryEndpoint())
+ .build();
+
+AutomanageManager client = AutomanageManager
+ .authenticate(credential, profile);
+```
+
+## Enable a best practices configuration profile on an existing virtual machine
+
+```java
+String configProfile = "/providers/Microsoft.Automanage/bestPractices/AzureBestPracticesProduction";
+
+client
+ .configurationProfileAssignments()
+ .define("default") // name must be default
+ .withExistingVirtualMachine("resourceGroupName", "vmName")
+ .withProperties(
+ new ConfigurationProfileAssignmentProperties()
+ .withConfigurationProfile(configProfile))
+ .create();
+```
+
+## Next steps
+
+Learn how to conduct more operations with the Java Automanage Client by visiting the [azure-sdk-for-java repo](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/automanage/azure-resourcemanager-automanage).
+
automanage Quick Javascript Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automanage/quick-javascript-sdk.md
+
+ Title: Azure Quickstart SDK for JavaScript
+description: Create configuration profile assignments using the JavaScript SDK for Automanage.
++++ Last updated : 08/24/2022+++
+# Quickstart: Enable Azure Automanage for virtual machines using JavaScript
+
+Azure Automanage allows users to seamlessly apply Azure best practices to their virtual machines. This quickstart guide will help you apply a Best Practices Configuration profile to an existing virtual machine using the [azure-sdk-for-js repo](https://github.com/Azure/azure-sdk-for-js).
+
+## Prerequisites
+
+- An active [Azure Subscription](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go/)
+- An existing [Virtual Machine](../virtual-machines/windows/quick-create-portal.md)
+
+> [!NOTE]
+> Free trial accounts do not have access to the virtual machines used in this tutorial. Please upgrade to a Pay-As-You-Go subscription.
+
+> [!IMPORTANT]
+> You need to have the **Contributor** role on the resource group containing your VMs to enable Automanage. If you are enabling Automanage for the first time on a subscription, you need the following permissions: **Owner** role or **Contributor** along with **User Access Administrator** roles on your subscription.
+
+## Install required packages
+
+For this demo, both the **Azure Identity** and **Azure Automanage** packages are required.
+
+```
+npm install @azure/arm-automanage
+npm install @azure/identity
+```
+
+## Import packages
+
+Import the **Azure Identity** and **Azure Automanage** packages into the script:
+
+```javascript
+const { AutomanageClient } = require("@azure/arm-automanage");
+const { DefaultAzureCredential } = require("@azure/identity");
+```
+
+## Authenticate to Azure and create an Automanage client
+
+Use the **Azure Identity** package to authenticate to Azure and then create an Automanage Client:
+
+```javascript
+const credential = new DefaultAzureCredential();
+const client = new AutomanageClient(credential, "<subscription ID>");
+```
+
+## Enable a best practices configuration profile on an existing virtual machine
+
+```javascript
+let assignment = {
+ "properties": {
+ "configurationProfile": "/providers/Microsoft.Automanage/bestPractices/AzureBestPracticesProduction"
+ }
+}
+
+// the assignment must be named "default"
+await client.configurationProfileAssignments.createOrUpdate("default", "resourceGroupName", "vmName", assignment);
+```
+
+## Next steps
+
+Learn how to conduct more operations with the JavaScript Automanage Client by visiting the [azure-sdk-for-js repo](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/automanage/arm-automanage).
+
automanage Quick Python Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automanage/quick-python-sdk.md
+
+ Title: Azure Quickstart SDK for Python
+description: Create configuration profile assignments using the Python SDK for Automanage.
++++ Last updated : 08/24/2022+++
+# Quickstart: Enable Azure Automanage for virtual machines using Python
+
+Azure Automanage allows users to seamlessly apply Azure best practices to their virtual machines. This quickstart guide will help you apply a Best Practices Configuration profile to an existing virtual machine using the [azure-sdk-for-python repo](https://github.com/Azure/azure-sdk-for-python).
+
+## Prerequisites
+
+- An active [Azure Subscription](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go/)
+- An existing [Virtual Machine](../virtual-machines/windows/quick-create-portal.md)
+
+> [!NOTE]
+> Free trial accounts do not have access to the virtual machines used in this tutorial. Please upgrade to a Pay-As-You-Go subscription.
+
+> [!IMPORTANT]
+> You need to have the **Contributor** role on the resource group containing your VMs to enable Automanage. If you are enabling Automanage for the first time on a subscription, you need the following permissions: **Owner** role or **Contributor** along with **User Access Administrator** roles on your subscription.
+
+## Install required packages
+
+For this demo, both the **Azure Identity** and **Azure Automanage** packages are required.
+
+Use `pip` to install these packages:
+
+```
+pip install azure-identity
+pip install azure-mgmt-automanage
+```
+
+## Import packages
+
+Import the **Azure Identity** and **Azure Automanage** packages into the script:
+
+```python
+from azure.identity import DefaultAzureCredential
+from azure.mgmt.automanage import AutomanageClient
+```
+
+## Authenticate to Azure and create an Automanage client
+
+Use the **Azure Identity** package to authenticate to Azure and then create an Automanage Client:
+
+```python
+credential = DefaultAzureCredential()
+client = AutomanageClient(credential, "<subscription ID>")
+```
+
+## Enable a best practices configuration profile on an existing virtual machine
+
+```python
+assignment = {
+ "properties": {
+ "configurationProfile": "/providers/Microsoft.Automanage/bestPractices/AzureBestPracticesProduction",
+ }
+}
+
+client.configuration_profile_assignments.create_or_update("default", "resourceGroupName", "vmName", assignment)
+```
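The assignment is plain JSON, so the payload can be built with a small helper before calling the client. A hedged sketch (the `AzureBestPracticesDevTest` profile name is an assumption for the dev/test counterpart; verify the built-in profile names in your tenant):

```python
BEST_PRACTICES_PREFIX = "/providers/Microsoft.Automanage/bestPractices/"

def build_assignment(profile_name: str) -> dict:
    # The service expects the full ARM ID of the configuration profile.
    return {"properties": {"configurationProfile": BEST_PRACTICES_PREFIX + profile_name}}

prod = build_assignment("AzureBestPracticesProduction")
# "AzureBestPracticesDevTest" is assumed here, not taken from this article.
devtest = build_assignment("AzureBestPracticesDevTest")
print(prod["properties"]["configurationProfile"])
```

Either dictionary can then be passed as the `assignment` argument to `create_or_update` as shown above.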
+
+## Next steps
+
+Learn how to conduct more operations with the Automanage Client by visiting the [azure-samples-python-management repo](https://github.com/Azure-Samples/azure-samples-python-management/tree/main/samples/automanage).
+
automanage Reference Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automanage/reference-sdk.md
+
+ Title: SDK Overview
+description: Get started with the Automanage SDKs.
++++ Last updated : 08/25/2022+++
+# Automanage SDK overview
+
+Azure Automanage currently supports the following SDKs:
+
+- [Python](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/automanage/azure-mgmt-automanage)
+- [Go](https://github.com/Azure/azure-sdk-for-go/tree/main/sdk/resourcemanager/automanage/armautomanage)
+- [Java](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/automanage/azure-resourcemanager-automanage)
+- [JavaScript](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/automanage/arm-automanage)
+- C# (pending)
+- PowerShell (pending)
+- Azure CLI (pending)
+- Terraform (pending)
+
+Here's a list of a few of the primary operations the SDKs provide:
+
+- Create custom configuration profiles
+- Delete custom configuration profiles
+- Create Best Practices profile assignments
+- Create custom profile assignments
+- Remove assignments
automanage Tutorial Create Assignment Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automanage/tutorial-create-assignment-python.md
+
+ Title: Tutorial - Python
+description: Create a virtual machine and assign an automanage best practices configuration profile to it.
++++ Last updated : 08/25/2022+++
+# Tutorial: Create a virtual machine and assign an Automanage profile to it
+
+In this tutorial, you'll create a resource group and a virtual machine. You'll then assign an Automanage Best Practices configuration profile to the new machine using the Python SDK.
+
+## Prerequisites
+
+- [Python](https://www.python.org/downloads/)
+- [Azure CLI](https://docs.microsoft.com/cli/azure/install-azure-cli-windows?tabs=azure-cli) or [Azure PowerShell](https://docs.microsoft.com/powershell/azure/install-az-ps)
+
+## Create resources
+
+### Sign in to Azure
+
+Sign in to Azure by using the following command:
+
+# [Azure CLI](#tab/azure-cli)
+```azurecli
+az login
+```
+# [Azure PowerShell](#tab/azure-powershell)
+```azurepowershell
+Connect-AzAccount
+```
++
+### Create resource group
+
+Create a resource group:
+
+# [Azure CLI](#tab/azure-cli)
+```azurecli
+az group create --name "test-rg" --location "eastus"
+```
+# [Azure PowerShell](#tab/azure-powershell)
+```azurepowershell
+New-AzResourceGroup -Name "test-rg" -Location "eastus"
+```
++
+### Create virtual machine
+
+Create a Windows virtual machine:
+
+# [Azure CLI](#tab/azure-cli)
+```azurecli
+az vm create \
+    --resource-group "test-rg" \
+    --name "testvm" \
+    --location "eastus" \
+    --image win2016datacenter \
+    --admin-username testUser \
+    --size Standard_D2s_v3 \
+    --storage-sku Standard_LRS
+```
+# [Azure PowerShell](#tab/azure-powershell)
+```azurepowershell
+New-AzVm `
+ -ResourceGroupName 'test-rg' `
+ -Name 'testvm' `
+ -Location 'eastus' `
+ -VirtualNetworkName 'testvm-vnet' `
+ -SubnetName 'testvm-subnet' `
+ -SecurityGroupName 'test-vm-nsg'
+```
++
+## Assign best practices profile to virtual machine
+
+Now that we've successfully created a resource group and a virtual machine, it's time to set up a Python project and assign an Automanage Best Practices configuration profile to the newly created virtual machine.
+
+### Install Python packages
+
+Install the Azure Identity and Azure Automanage packages using `pip`:
+
+```
+pip install azure-mgmt-automanage
+pip install azure-identity
+```
+
+### Import packages
+
+Create an `app.py` file and import the installed packages within it:
+
+```python
+from azure.identity import DefaultAzureCredential
+from azure.mgmt.automanage import AutomanageClient
+```
+
+Set some local variables:
+
+```python
+sub = "<sub ID>"
+rg = "test-rg"
+vm = "testvm"
+```
+
+### Authenticate to Azure and create an Automanage client
+
+Use the `DefaultAzureCredential` class from the `azure-identity` package to authenticate to Azure. Then, use the credential to create an Automanage client.
+
+```python
+credential = DefaultAzureCredential()
+client = AutomanageClient(credential, sub)
+```
+
+### Create a best practices profile assignment
+
+Now we'll create an assignment between our new virtual machine and a Best Practices profile:
+
+```python
+assignment = {
+ "properties": {
+ "configurationProfile": "/providers/Microsoft.Automanage/bestPractices/AzureBestPracticesProduction",
+ }
+}
+
+# assignment name must be 'default'
+client.configuration_profile_assignments.create_or_update(
+ "default", rg, vm, assignment)
+```
+
+Run the Python file:
+
+`python app.py`
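Because the assignment is an extension resource on the VM and is always named `default`, its full resource ID is deterministic. An illustrative sketch of how that ID is composed (the exact format is an assumption based on the ARM extension-resource pattern, not stated in this tutorial):

```python
def assignment_resource_id(sub: str, rg: str, vm: str) -> str:
    # Automanage assignments hang off the VM as an extension resource,
    # and the assignment name is always "default".
    return (
        f"/subscriptions/{sub}/resourceGroups/{rg}"
        f"/providers/Microsoft.Compute/virtualMachines/{vm}"
        "/providers/Microsoft.Automanage/configurationProfileAssignments/default"
    )

rid = assignment_resource_id("00000000-0000-0000-0000-000000000000", "test-rg", "testvm")
print(rid)
```

This can be handy for locating the assignment in scripts or activity logs after `app.py` runs.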
++
+## View Assignment in the portal
+
+Navigate to the virtual machine and select the **Automanage** blade:
+![Screenshot of the Automanage blade on a virtual machine.](media/automanage-virtual-machines/automanage-blade.png)
+
+View the Automanage Profile now enabled on the virtual machine:
+![Screenshot of the Automanage profile enabled on the virtual machine.](media/automanage-virtual-machines/automanage-vm.png)
+
+## Next steps
+
+For more information on the Automanage Python SDK, please visit the [azure-samples-python-management repo](https://github.com/Azure-Samples/azure-samples-python-management/tree/main/samples/automanage).
azure-monitor Monitor Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/monitor-reference.md
This article is a reference of the different applications and services that are
## Insights and curated visualizations
-Some services have a curated monitoring experience. That is, Microsoft provides customized functionality meant to act as a starting point for monitoring those services. These experiences are collectively known as **curated visualizations** with the larger more complex of them being called **Insights**.
+Some services have a curated monitoring experience. That is, Microsoft provides customized functionality meant to act as a starting point for monitoring those services. These experiences are collectively known as *curated visualizations* with the larger more complex of them being called *Insights*.
-The experiences collect and analyze a subset of logs and metrics and depending on the service and might also provide out-of-the-box alerting. They present this telemetry in a visual layout. The visualizations vary in size and scale. Some are considered part of Azure Monitor and follow the support and service level agreements for Azure. They are supported in all Azure regions where Azure Monitor is available. Other curated visualizations provide less functionality, might not scale, and might have different agreements. Some might be based solely on Azure Monitor Workbooks, while others might have an extensive custom experience.
+The experiences collect and analyze a subset of logs and metrics. Depending on the service, they might also provide out-of-the-box alerting. They present this telemetry in a visual layout. The visualizations vary in size and scale.
-The table below lists the available curated visualizations and more detailed information about them.
+Some visualizations are considered part of Azure Monitor and follow the support and service level agreements for Azure. They're supported in all Azure regions where Azure Monitor is available. Other curated visualizations provide less functionality, might not scale, and might have different agreements. Some might be based solely on Azure Monitor Workbooks, while others might have an extensive custom experience.
->[!NOTE]
-> Another type of older visualization called **monitoring solutions** are no longer in active development. The replacement technology is the Azure Monitor Insights mentioned above. We suggest you use the insights and not deploy new instances of solutions. For more information on the solutions, see [Monitoring solutions in Azure Monitor](./insights/solutions.md).
+The following table lists the available curated visualizations and information about them.
+>[!NOTE]
+> Another type of older visualization called *monitoring solutions* is no longer in active development. The replacement technology is Azure Monitor Insights, described earlier. We suggest you use Insights and not deploy new instances of solutions. For more information on the solutions, see [Monitoring solutions in Azure Monitor](./insights/solutions.md).
-|Name with docs link| State | [Azure portal Link](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/more)| Description |
+|Name with docs link| State | [Azure portal link](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/more)| Description |
|:--|:--|:--|:--|
-| [Azure Monitor Workbooks for Azure Active Directory](../active-directory/reports-monitoring/howto-use-azure-monitor-workbooks.md) | GA (General availability) | [Yes](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/Workbooks) | Azure Active Directory provides workbooks to understand the effect of your Conditional Access policies, to troubleshoot sign-in failures, and to identify legacy authentications. |
+| [Azure Monitor Workbooks for Azure Active Directory](../active-directory/reports-monitoring/howto-use-azure-monitor-workbooks.md) | General availability (GA) | [Yes](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/Workbooks) | Azure Active Directory provides workbooks to understand the effect of your Conditional Access policies, troubleshoot sign-in failures, and identify legacy authentications. |
| [Azure Backup](../backup/backup-azure-monitoring-use-azuremonitor.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_DataProtection/BackupCenterMenuBlade/backupReportsConfigure/menuId/backupReportsConfigure) | Provides built-in monitoring and alerting capabilities in a Recovery Services vault. |
-| [Azure Monitor for Azure Cache for Redis (preview)](../azure-cache-for-redis/redis-cache-insights-overview.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/redisCacheInsights) | Provides a unified, interactive view of overall performance, failures, capacity, and operational health |
+| [Azure Monitor for Azure Cache for Redis (preview)](../azure-cache-for-redis/redis-cache-insights-overview.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/redisCacheInsights) | Provides a unified, interactive view of overall performance, failures, capacity, and operational health. |
| [Azure Cosmos DB Insights](../cosmos-db/cosmosdb-insights-overview.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/cosmosDBInsights) | Provides a view of the overall performance, failures, capacity, and operational health of all your Azure Cosmos DB resources in a unified interactive experience. |
-| [Azure Container Insights](/azure/azure-monitor/insights/container-insights-overview) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/containerInsights) | Monitors the performance of container workloads that are deployed to managed Kubernetes clusters hosted on Azure Kubernetes Service (AKS). It gives you performance visibility by collecting metrics from controllers, nodes, and containers that are available in Kubernetes through the Metrics API. Container logs are also collected. After you enable monitoring from Kubernetes clusters, these metrics and logs are automatically collected for you through a containerized version of the Log Analytics agent for Linux. |
-| [Azure Data Explorer insights](/azure/data-explorer/data-explorer-insights) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/adxClusterInsights) | Azure Data Explorer Insights provides comprehensive monitoring of your clusters by delivering a unified view of your cluster performance, operations, usage, and failures. |
-| [Azure HDInsight (preview)](../hdinsight/log-analytics-migration.md#insights) | Preview | No | An Azure Monitor workbook that collects important performance metrics from your HDInsight cluster and provides the visualizations and dashboards for most common scenarios. Gives a complete view of a single HDInsight cluster including resource utilization and application status|
- | [Azure IoT Edge](../iot-edge/how-to-explore-curated-visualizations.md) | GA | No | Visualize and explore metrics collected from the IoT Edge device right in the Azure portal using Azure Monitor Workbooks based public templates. The curated workbooks use built-in metrics from the IoT Edge runtime. These views don't need any metrics instrumentation from the workload modules. |
+| [Azure Container Insights](/azure/azure-monitor/insights/container-insights-overview) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/containerInsights) | Monitors the performance of container workloads that are deployed to managed Kubernetes clusters hosted on Azure Kubernetes Service. It gives you performance visibility by collecting metrics from controllers, nodes, and containers that are available in Kubernetes through the Metrics API. Container logs are also collected. After you enable monitoring from Kubernetes clusters, these metrics and logs are automatically collected for you through a containerized version of the Log Analytics agent for Linux. |
+| [Azure Data Explorer Insights](/azure/data-explorer/data-explorer-insights) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/adxClusterInsights) | Azure Data Explorer Insights provides comprehensive monitoring of your clusters by delivering a unified view of your cluster performance, operations, usage, and failures. |
+| [Azure HDInsight (preview)](../hdinsight/log-analytics-migration.md#insights) | Preview | No | An Azure Monitor workbook that collects important performance metrics from your HDInsight cluster and provides the visualizations and dashboards for most common scenarios. Gives a complete view of a single HDInsight cluster including resource utilization and application status.|
+ | [Azure IoT Edge](../iot-edge/how-to-explore-curated-visualizations.md) | GA | No | Visualize and explore metrics collected from the IoT Edge device right in the Azure portal by using Azure Monitor Workbooks-based public templates. The curated workbooks use built-in metrics from the IoT Edge runtime. These views don't need any metrics instrumentation from the workload modules. |
| [Azure Key Vault Insights (preview)](../key-vault/key-vault-insights-overview.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/keyvaultsInsights) | Provides comprehensive monitoring of your key vaults by delivering a unified view of your Key Vault requests, performance, failures, and latency. |
- | [Azure Monitor Application Insights](./app/app-insights-overview.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/applicationsInsights) | Extensible Application Performance Management (APM) service which monitors the availability, performance, and usage of your web applications whether they're hosted in the cloud or on-premises. It leverages the powerful data analysis platform in Azure Monitor to provide you with deep insights into your application's operations. It enables you to diagnose errors without waiting for a user to report them. Application Insights includes connection points to a variety of development tools and integrates with Visual Studio to support your DevOps processes. |
+ | [Azure Monitor Application Insights](./app/app-insights-overview.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/applicationsInsights) | Extensible application performance management service that monitors the availability, performance, and usage of your web applications whether they're hosted in the cloud or on-premises. It uses the powerful data analysis platform in Azure Monitor to provide you with deep insights into your application's operations. It enables you to diagnose errors without waiting for a user to report them. Application Insights includes connection points to various development tools and integrates with Visual Studio to support your DevOps processes. |
| [Azure Monitor Log Analytics Workspace](./logs/log-analytics-workspace-insights-overview.md) | Preview | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/lawsInsights) | Log Analytics Workspace Insights (preview) provides comprehensive monitoring of your workspaces through a unified view of your workspace usage, performance, health, agent, queries, and change log. This article will help you understand how to onboard and use Log Analytics Workspace Insights (preview). |
- | [Azure Service Bus Insights](../service-bus-messaging/service-bus-insights.md) | Preview | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/serviceBusInsights) | Azure Service Bus insights provide a view of the overall performance, failures, capacity, and operational health of all your Service Bus resources in a unified interactive experience. |
- | [Azure SQL insights (preview)](/azure/azure-sql/database/sql-insights-overview) | Preview | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/sqlWorkloadInsights) | A comprehensive interface for monitoring any product in the Azure SQL family. SQL insights uses dynamic management views to expose the data you need to monitor health, diagnose problems, and tune performance. Note: If you are just setting up SQL monitoring, use this instead of the SQL Analytics solution. |
+ | [Azure Service Bus Insights](../service-bus-messaging/service-bus-insights.md) | Preview | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/serviceBusInsights) | Azure Service Bus Insights provide a view of the overall performance, failures, capacity, and operational health of all your Service Bus resources in a unified interactive experience. |
+ | [Azure SQL Insights (preview)](/azure/azure-sql/database/sql-insights-overview) | Preview | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/sqlWorkloadInsights) | A comprehensive interface for monitoring any product in the Azure SQL family. SQL Insights uses dynamic management views to expose the data you need to monitor health, diagnose problems, and tune performance. Note: If you're just setting up SQL monitoring, use SQL Insights instead of the SQL Analytics solution. |
| [Azure Storage Insights](/azure/azure-monitor/insights/storage-insights-overview) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/storageInsights) | Provides comprehensive monitoring of your Azure Storage accounts by delivering a unified view of your Azure Storage services performance, capacity, and availability. |
- | [Azure Network Insights](../network-watcher/network-insights-overview.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/networkInsights) | Provides a comprehensive view of health and metrics for all your network resource. The advanced search capability helps you identify resource dependencies, enabling scenarios like identifying resource that are hosting your website, by simply searching for your website name. |
- | [Azure Monitor for Resource Groups](./insights/resource-group-insights.md) | GA | No | Triage and diagnose any problems your individual resources encounter, while offering context as to the health and performance of the resource group as a whole. |
- | [Azure Monitor SAP](../virtual-machines/workloads/sap/monitor-sap-on-azure.md) | GA | No | An Azure-native monitoring product for anyone running their SAP landscapes on Azure. It works with both SAP on Azure Virtual Machines and SAP on Azure Large Instances. Collects telemetry data from Azure infrastructure and databases in one central location and visually correlate the data for faster troubleshooting. You can monitor different components of an SAP landscape, such as Azure virtual machines (VMs), high-availability cluster, SAP HANA database, SAP NetWeaver, and so on, by adding the corresponding provider for that component. |
- | [Azure Stack HCI insights](/azure-stack/hci/manage/azure-stack-hci-insights) | Preview | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/azureStackHCIInsights) | Azure Monitor Workbook based. Provides health, performance, and usage insights about registered Azure Stack HCI, version 21H2 clusters that are connected to Azure and are enrolled in monitoring. It stores its data in a Log Analytics workspace, which allows it to deliver powerful aggregation and filtering and analyze data trends over time. |
- | [Azure VM Insights](/azure/azure-monitor/insights/vminsights-overview) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/virtualMachines) | Monitors your Azure virtual machines (VM) and virtual machine scale sets at scale. It analyzes the performance and health of your Windows and Linux VMs, and monitors their processes and dependencies on other resources and external processes. |
+ | [Azure Network Insights](../network-watcher/network-insights-overview.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/networkInsights) | Provides a comprehensive view of health and metrics for all your network resources. The advanced search capability helps you identify resource dependencies, enabling scenarios like identifying resources that are hosting your website, by simply searching for your website name. |
+ | [Azure Monitor for Resource Groups](./insights/resource-group-insights.md) | GA | No | Triage and diagnose any problems your individual resources encounter, while offering context for the health and performance of the resource group as a whole. |
+ | [Azure Monitor SAP](../virtual-machines/workloads/sap/monitor-sap-on-azure.md) | GA | No | An Azure-native monitoring product for anyone running their SAP landscapes on Azure. It works with both SAP on Azure Virtual Machines and SAP on Azure Large Instances. Collects telemetry data from Azure infrastructure and databases in one central location and visually correlates the data for faster troubleshooting. You can monitor different components of an SAP landscape, such as Azure virtual machines (VMs), high-availability clusters, SAP HANA database, and SAP NetWeaver, by adding the corresponding provider for that component. |
+ | [Azure Stack HCI Insights](/azure-stack/hci/manage/azure-stack-hci-insights) | Preview | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/azureStackHCIInsights) | Based on Azure Monitor Workbooks. Provides health, performance, and usage insights about registered Azure Stack HCI version 21H2 clusters that are connected to Azure and enrolled in monitoring. It stores its data in a Log Analytics workspace, which allows it to deliver powerful aggregation and filtering and analyze data trends over time. |
+ | [Azure VM Insights](/azure/azure-monitor/insights/vminsights-overview) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/virtualMachines) | Monitors your Azure VMs and virtual machine scale sets at scale. It analyzes the performance and health of your Windows and Linux VMs and monitors their processes and dependencies on other resources and external processes. |
| [Azure Virtual Desktop Insights](../virtual-desktop/azure-monitor.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_WVD/WvdManagerMenuBlade/insights/menuId/insights) | Azure Virtual Desktop Insights is a dashboard built on Azure Monitor Workbooks that helps IT professionals understand their Azure Virtual Desktop environments. |

## Product integrations
The other services and older monitoring solutions in the following table store their data in Azure Monitor Logs.
| Product/Service | Description |
|:|:|
-| [Azure Automation](../automation/index.yml) | Manage operating system updates and track changes on Windows and Linux computers. See [Change Tracking](../automation/change-tracking/overview.md) and [Update Management](../automation/update-management/overview.md). |
+| [Azure Automation](../automation/index.yml) | Manage operating system updates and track changes on Windows and Linux computers. See [Change tracking](../automation/change-tracking/overview.md) and [Update management](../automation/update-management/overview.md). |
| [Azure Information Protection](/azure/information-protection/) | Classify and optionally protect documents and emails. See [Central reporting for Azure Information Protection](/azure/information-protection/reports-aip#configure-a-log-analytics-workspace-for-the-reports). |
-| [Defender for the Cloud (was Azure Security Center)](../defender-for-cloud/defender-for-cloud-introduction.md) | Collect and analyze security events and perform threat analysis. See [Data collection in Defender for the Cloud](../defender-for-cloud/enable-data-collection.md) |
-| [Microsoft Sentinel](../sentinel/index.yml) | Connects to different sources including Office 365 and Amazon Web Services Cloud Trail. See [Connect data sources](../sentinel/connect-data-sources.md). |
+| [Defender for the Cloud (was Azure Security Center)](../defender-for-cloud/defender-for-cloud-introduction.md) | Collect and analyze security events and perform threat analysis. See [Data collection in Defender for the Cloud](../defender-for-cloud/enable-data-collection.md). |
+| [Microsoft Sentinel](../sentinel/index.yml) | Connect to different sources including Office 365 and Amazon Web Services Cloud Trail. See [Connect data sources](../sentinel/connect-data-sources.md). |
| [Microsoft Intune](/intune/) | Create a diagnostic setting to send logs to Azure Monitor. See [Send log data to storage, Event Hubs, or log analytics in Intune (preview)](/intune/fundamentals/review-logs-using-azure-monitor). |
-| Network [Traffic Analytics](../network-watcher/traffic-analytics.md) | Analyzes Network Watcher network security group (NSG) flow logs to provide insights into traffic flow in your Azure cloud. |
-| [System Center Operations Manager](/system-center/scom) | Collect data from Operations Manager agents by connecting their management group to Azure Monitor. See [Connect Operations Manager to Azure Monitor](agents/om-agents.md)<br> Assess the risk and health of your System Center Operations Manager management group with [Operations Manager Assessment](insights/scom-assessment.md) solution. |
+| Network [Traffic Analytics](../network-watcher/traffic-analytics.md) | Analyze Network Watcher network security group flow logs to provide insights into traffic flow in your Azure cloud. |
+| [System Center Operations Manager](/system-center/scom) | Collect data from Operations Manager agents by connecting their management group to Azure Monitor. See [Connect Operations Manager to Azure Monitor](agents/om-agents.md).<br> Assess the risk and health of your System Center Operations Manager management group with the [Operations Manager Assessment](insights/scom-assessment.md) solution. |
| [Microsoft Teams Rooms](/microsoftteams/room-systems/azure-monitor-deploy) | Integrated, end-to-end management of Microsoft Teams Rooms devices. |
| [Visual Studio App Center](/appcenter/) | Build, test, and distribute applications and then monitor their status and usage. See [Start analyzing your mobile app with App Center and Application Insights](app/mobile-center-quickstart.md). |
| Windows | [Windows Update Compliance](/windows/deployment/update/update-compliance-get-started) - Assess your Windows desktop upgrades.<br>[Desktop Analytics](/configmgr/desktop-analytics/overview) - Integrates with Configuration Manager to provide insight and intelligence to make more informed decisions about the update readiness of your Windows clients. |
-| **The following solutions also integrate with parts of Azure Monitor. Note that solutions, are no longer under active development. Use [insights](#insights-and-curated-visualizations) instead.** | |
+| **The following solutions also integrate with parts of Azure Monitor. Note that solutions are no longer under active development. Use [Insights](#insights-and-curated-visualizations) instead.** | |
| Network - [Network Performance Monitor solution](insights/network-performance-monitor.md) |
-| Network - [Azure Application Gateway Solution](insights/azure-networking-analytics.md#azure-application-gateway-analytics) | .
+| Network - [Azure Application Gateway solution](insights/azure-networking-analytics.md#azure-application-gateway-analytics) | |
| [Office 365 solution](insights/solution-office-365.md) | Monitor your Office 365 environment. Updated version with improved onboarding available through Microsoft Sentinel. |
-| [SQL Analytics solution](insights/azure-sql.md) | Use SQL Insights instead |
+| [SQL Analytics solution](insights/azure-sql.md) | Use SQL Insights instead. |
| [Surface Hub solution](insights/surface-hubs.md) | |
-
## Third-party integration

| Integration | Description |
|:|:|
-| [ITSM](alerts/itsmc-overview.md) | The IT Service Management Connector (ITSMC) allows you to connect Azure and a supported IT Service Management (ITSM) product/service. |
-| [Azure Monitor Partners](./partners.md) | A list of partners that integrate with Azure Monitor in some form |
-| [Azure Monitor Partner integrations](../partner-solutions/overview.md)| Specialized integrations between Azure Monitor and other non-Microsoft monitoring platforms if you've already built on them. Examples include Datadog and Elastic|
-
+| [ITSM](alerts/itsmc-overview.md) | The IT Service Management (ITSM) Connector allows you to connect Azure and a supported ITSM product/service. |
+| [Azure Monitor Partners](./partners.md) | A list of partners that integrate with Azure Monitor in some form. |
+| [Azure Monitor Partner integrations](../partner-solutions/overview.md)| Specialized integrations between Azure Monitor and other non-Microsoft monitoring platforms if you've already built on them. Examples include Datadog and Elastic.|
## Resources outside of Azure
-Azure Monitor can collect data from resources outside of Azure using the methods listed in the following table.
+Azure Monitor can collect data from resources outside of Azure by using the methods listed in the following table.
| Resource | Method |
|:|:|
-| Applications | Monitor web applications outside of Azure using Application Insights. See [What is Application Insights?](./app/app-insights-overview.md). |
+| Applications | Monitor web applications outside of Azure by using Application Insights. See [What is Application Insights?](./app/app-insights-overview.md). |
| Virtual machines | Use agents to collect data from the guest operating system of virtual machines in other cloud environments or on-premises. See [Overview of Azure Monitor agents](agents/agents-overview.md). |
-| REST API Client | Separate APIs are available to write data to Azure Monitor Logs and Metrics from any REST API client. See [Send log data to Azure Monitor with the HTTP Data Collector API](logs/data-collector-api.md) for Logs and [Send custom metrics for an Azure resource to the Azure Monitor metric store by using a REST API](essentials/metrics-store-custom-rest-api.md) for Metrics. |
-
+| REST API Client | Separate APIs are available to write data to Azure Monitor Logs and Metrics from any REST API client. See [Send log data to Azure Monitor with the HTTP Data Collector API](logs/data-collector-api.md) for Logs. See [Send custom metrics for an Azure resource to the Azure Monitor metric store by using a REST API](essentials/metrics-store-custom-rest-api.md) for Metrics. |
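The Logs half of the row above refers to the HTTP Data Collector API, which authenticates each POST with an HMAC-SHA256 signature computed over the request headers. The following is a minimal sketch of building that `Authorization` header in Python; the workspace ID, shared key, and log type are placeholders, not real credentials:

```python
import base64
import hashlib
import hmac

def build_signature(workspace_id: str, shared_key: str,
                    content_length: int, date: str) -> str:
    """Build the SharedKey Authorization header value for POST /api/logs."""
    string_to_sign = (
        f"POST\n{content_length}\napplication/json\n"
        f"x-ms-date:{date}\n/api/logs"
    )
    # The workspace shared key is base64-encoded; decode it before signing.
    decoded_key = base64.b64decode(shared_key)
    digest = hmac.new(decoded_key, string_to_sign.encode("utf-8"),
                      hashlib.sha256).digest()
    signature = base64.b64encode(digest).decode("utf-8")
    return f"SharedKey {workspace_id}:{signature}"

# Placeholder credentials for illustration only.
workspace_id = "00000000-0000-0000-0000-000000000000"
shared_key = base64.b64encode(b"not-a-real-key").decode("utf-8")
body = '[{"Computer": "web01", "Status": "OK"}]'  # ASCII, so len() == byte length
date = "Mon, 29 Aug 2022 01:05:52 GMT"

headers = {
    "Content-Type": "application/json",
    "Log-Type": "MyCustomLog",  # records land in the MyCustomLog_CL table
    "x-ms-date": date,
    "Authorization": build_signature(workspace_id, shared_key, len(body), date),
}
```

These headers would accompany a POST to `https://<workspace-id>.ods.opinsights.azure.com/api/logs?api-version=2016-04-01`; the custom metrics path uses a separate REST endpoint, as the linked article describes.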
## Azure supported services

The following table lists Azure services and the data they collect into Azure Monitor.

-- Metrics - The service automatically collects metrics into Azure Monitor Metrics.
-- Logs - The service supports diagnostic settings which can send metrics and platform logs into Azure Monitor Logs for analysis in Log Analytics.
-- Insight - There is an insight available which provides a customized monitoring experience for the service.
+- **Metrics**: The service automatically collects metrics into Azure Monitor Metrics.
+- **Logs**: The service supports diagnostic settings that can send metrics and platform logs into Azure Monitor Logs for analysis in Log Analytics.
+- **Insight**: An insight is available that provides a customized monitoring experience for the service.
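The **Logs** bullet above is wired up per resource through a diagnostic setting. As an illustrative sketch only (the subscription, resource group, and workspace names are hypothetical), a diagnostic setting is essentially a small payload naming the destination Log Analytics workspace plus the log and metric categories to route, modeled here as a plain Python dict in the shape the `Microsoft.Insights/diagnosticSettings` ARM resource expects:

```python
# Hypothetical IDs throughout; shape mirrors diagnosticSettings properties.
diagnostic_setting = {
    "name": "send-to-workspace",
    "properties": {
        "workspaceId": (
            "/subscriptions/<sub-id>/resourceGroups/demo-rg/providers/"
            "Microsoft.OperationalInsights/workspaces/demo-law"
        ),
        # Which platform log categories to forward for this resource.
        "logs": [
            {"category": "AuditEvent", "enabled": True},
        ],
        # Platform metrics can be forwarded alongside the logs.
        "metrics": [
            {"category": "AllMetrics", "enabled": True},
        ],
    },
}

# Every enabled category flows into Azure Monitor Logs for Log Analytics queries.
enabled = [c["category"]
           for c in (diagnostic_setting["properties"]["logs"]
                     + diagnostic_setting["properties"]["metrics"])
           if c["enabled"]]
```

The same setting can be created from the portal, the Azure CLI, or an ARM template; the available log categories for each service are listed in the resource-logs-categories links in the table below.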
-| Service | Resource Provider Namespace | Has Metrics | Has Logs | Insight | Notes
+| Service | Resource provider namespace | Has metrics | Has logs | Insight | Notes |
|||-|--|-|--|
| [Azure Active Directory Domain Services](../active-directory-domain-services/index.yml) | Microsoft.AAD/DomainServices | No | [**Yes**](./essentials/resource-logs-categories.md#microsoftaaddomainservices) | | |
| [Azure Active Directory](../active-directory/index.yml) | No | No | [Azure Monitor Workbooks for Azure Active Directory](../active-directory/reports-monitoring/howto-use-azure-monitor-workbooks.md) | |
| [Azure Analysis Services](../analysis-services/index.yml) | Microsoft.AnalysisServices/servers | [**Yes**](./essentials/metrics-supported.md#microsoftanalysisservicesservers) | [**Yes**](./essentials/resource-logs-categories.md#microsoftanalysisservicesservers) | | |
- | [API Management](../api-management/index.yml) | Microsoft.ApiManagement/service | [**Yes**](./essentials/metrics-supported.md#microsoftapimanagementservice) | [**Yes**](./essentials/resource-logs-categories.md#microsoftapimanagementservice) | | |
+ | [Azure API Management](../api-management/index.yml) | Microsoft.ApiManagement/service | [**Yes**](./essentials/metrics-supported.md#microsoftapimanagementservice) | [**Yes**](./essentials/resource-logs-categories.md#microsoftapimanagementservice) | | |
| [Azure App Configuration](../azure-app-configuration/index.yml) | Microsoft.AppConfiguration/configurationStores | [**Yes**](./essentials/metrics-supported.md#microsoftappconfigurationconfigurationstores) | [**Yes**](./essentials/resource-logs-categories.md#microsoftappconfigurationconfigurationstores) | | | | [Azure Spring Apps](../spring-apps/overview.md) | Microsoft.AppPlatform/Spring | [**Yes**](./essentials/metrics-supported.md#microsoftappplatformspring) | [**Yes**](./essentials/resource-logs-categories.md#microsoftappplatformspring) | | | | [Azure Attestation Service](../attestation/overview.md) | Microsoft.Attestation/attestationProviders | No | [**Yes**](./essentials/resource-logs-categories.md#microsoftattestationattestationproviders) | | |
The following table lists Azure services and the data they collect into Azure Monitor.
| [Azure Bot Service](/azure/bot-service/) | Microsoft.BotService/botServices | [**Yes**](./essentials/metrics-supported.md#microsoftbotservicebotservices) | [**Yes**](./essentials/resource-logs-categories.md#microsoftbotservicebotservices) | | |
| [Azure Cache for Redis](../azure-cache-for-redis/index.yml) | Microsoft.Cache/Redis | [**Yes**](./essentials/metrics-supported.md#microsoftcacheredis) | [**Yes**](./essentials/resource-logs-categories.md#microsoftcacheredis) | [Azure Monitor for Azure Cache for Redis (preview)](../azure-cache-for-redis/redis-cache-insights-overview.md) | |
| [Azure Cache for Redis](../azure-cache-for-redis/index.yml) | Microsoft.Cache/redisEnterprise | [**Yes**](./essentials/metrics-supported.md#microsoftcacheredisenterprise) | No | [Azure Monitor for Azure Cache for Redis (preview)](../azure-cache-for-redis/redis-cache-insights-overview.md) | |
- | [Content Delivery Network](../cdn/index.yml) | Microsoft.Cdn/CdnWebApplicationFirewallPolicies | [**Yes**](./essentials/metrics-supported.md#microsoftcdncdnwebapplicationfirewallpolicies) | [**Yes**](./essentials/resource-logs-categories.md#microsoftcdncdnwebapplicationfirewallpolicies) | | |
- | [Content Delivery Network](../cdn/index.yml) | Microsoft.Cdn/profiles | [**Yes**](./essentials/metrics-supported.md#microsoftcdnprofiles) | [**Yes**](./essentials/resource-logs-categories.md#microsoftcdnprofiles) | | |
- | [Content Delivery Network](../cdn/index.yml) | Microsoft.Cdn/profiles/endpoints | No | [**Yes**](./essentials/resource-logs-categories.md#microsoftcdnprofilesendpoints) | | |
+ | [Azure Content Delivery Network](../cdn/index.yml) | Microsoft.Cdn/CdnWebApplicationFirewallPolicies | [**Yes**](./essentials/metrics-supported.md#microsoftcdncdnwebapplicationfirewallpolicies) | [**Yes**](./essentials/resource-logs-categories.md#microsoftcdncdnwebapplicationfirewallpolicies) | | |
+ | [Azure Content Delivery Network](../cdn/index.yml) | Microsoft.Cdn/profiles | [**Yes**](./essentials/metrics-supported.md#microsoftcdnprofiles) | [**Yes**](./essentials/resource-logs-categories.md#microsoftcdnprofiles) | | |
+ | [Azure Content Delivery Network](../cdn/index.yml) | Microsoft.Cdn/profiles/endpoints | No | [**Yes**](./essentials/resource-logs-categories.md#microsoftcdnprofilesendpoints) | | |
| [Azure Virtual Machines - Classic](../virtual-machines/index.yml) | Microsoft.ClassicCompute/domainNames/slots/roles | [**Yes**](./essentials/metrics-supported.md#microsoftclassiccomputedomainnamesslotsroles) | No | [VM Insights](/azure/azure-monitor/insights/vminsights-overview) | |
| [Azure Virtual Machines - Classic](../virtual-machines/index.yml) | Microsoft.ClassicCompute/virtualMachines | [**Yes**](./essentials/metrics-supported.md#microsoftclassiccomputevirtualmachines) | No | | |
- | [Virtual Network (Classic)](../virtual-network/network-security-groups-overview.md) | Microsoft.ClassicNetwork/networkSecurityGroups | No | [**Yes**](./essentials/resource-logs-categories.md#microsoftclassicnetworknetworksecuritygroups) | | |
+ | [Azure Virtual Network (Classic)](../virtual-network/network-security-groups-overview.md) | Microsoft.ClassicNetwork/networkSecurityGroups | No | [**Yes**](./essentials/resource-logs-categories.md#microsoftclassicnetworknetworksecuritygroups) | | |
| [Azure Storage (Classic)](../storage/index.yml) | Microsoft.ClassicStorage/storageAccounts | [**Yes**](./essentials/metrics-supported.md#microsoftclassicstoragestorageaccounts) | No | [Storage Insights](/azure/azure-monitor/insights/storage-insights-overview) | |
- | [Azure Storage Blobs (Classic)](../storage/blobs/index.yml) | Microsoft.ClassicStorage/storageAccounts/blobServices | [**Yes**](./essentials/metrics-supported.md#microsoftclassicstoragestorageaccountsblobservices) | No | [Storage Insights](/azure/azure-monitor/insights/storage-insights-overview) | |
- | [Azure Storage Files (Classic)](../storage/files/index.yml) | Microsoft.ClassicStorage/storageAccounts/fileServices | [**Yes**](./essentials/metrics-supported.md#microsoftclassicstoragestorageaccountsfileservices) | No | [Storage Insights](/azure/azure-monitor/insights/storage-insights-overview) | |
- | [Azure Storage Queues (Classic)](../storage/queues/index.yml) | Microsoft.ClassicStorage/storageAccounts/queueServices | [**Yes**](./essentials/metrics-supported.md#microsoftclassicstoragestorageaccountsqueueservices) | No | [Storage Insights](/azure/azure-monitor/insights/storage-insights-overview) | |
- | [Azure Storage Tables (Classic)](../storage/tables/index.yml) | Microsoft.ClassicStorage/storageAccounts/tableServices | [**Yes**](./essentials/metrics-supported.md#microsoftclassicstoragestorageaccountstableservices) | No | [Storage Insights](/azure/azure-monitor/insights/storage-insights-overview) | |
+ | [Azure Blob Storage (Classic)](../storage/blobs/index.yml) | Microsoft.ClassicStorage/storageAccounts/blobServices | [**Yes**](./essentials/metrics-supported.md#microsoftclassicstoragestorageaccountsblobservices) | No | [Storage Insights](/azure/azure-monitor/insights/storage-insights-overview) | |
+ | [Azure Files (Classic)](../storage/files/index.yml) | Microsoft.ClassicStorage/storageAccounts/fileServices | [**Yes**](./essentials/metrics-supported.md#microsoftclassicstoragestorageaccountsfileservices) | No | [Storage Insights](/azure/azure-monitor/insights/storage-insights-overview) | |
+ | [Azure Queue Storage (Classic)](../storage/queues/index.yml) | Microsoft.ClassicStorage/storageAccounts/queueServices | [**Yes**](./essentials/metrics-supported.md#microsoftclassicstoragestorageaccountsqueueservices) | No | [Storage Insights](/azure/azure-monitor/insights/storage-insights-overview) | |
+ | [Azure Table Storage (Classic)](../storage/tables/index.yml) | Microsoft.ClassicStorage/storageAccounts/tableServices | [**Yes**](./essentials/metrics-supported.md#microsoftclassicstoragestorageaccountstableservices) | No | [Storage Insights](/azure/azure-monitor/insights/storage-insights-overview) | |
| Microsoft Cloud Test Platform | Microsoft.Cloudtest/hostedpools | [**Yes**](./essentials/metrics-supported.md#microsoftcloudtesthostedpools) | No | | |
| Microsoft Cloud Test Platform | Microsoft.Cloudtest/pools | [**Yes**](./essentials/metrics-supported.md#microsoftcloudtestpools) | No | | |
| [Cray ClusterStor in Azure](https://azure.microsoft.com/blog/supercomputing-in-the-cloud-announcing-three-new-cray-in-azure-offers/) | Microsoft.ClusterStor/nodes | [**Yes**](./essentials/metrics-supported.md#microsoftclusterstornodes) | No | | |
The following table lists Azure services and the data they collect into Azure Mo
| [Azure Communication Services](../communication-services/index.yml) | Microsoft.Communication/CommunicationServices | [**Yes**](./essentials/metrics-supported.md#microsoftcommunicationcommunicationservices) | [**Yes**](./essentials/resource-logs-categories.md#microsoftcommunicationcommunicationservices) | | |
| [Azure Cloud Services](../cloud-services-extended-support/index.yml) | Microsoft.Compute/cloudServices | [**Yes**](./essentials/metrics-supported.md#microsoftcomputecloudservices) | No | | Agent required to monitor guest operating system and workflows.|
| [Azure Cloud Services](../cloud-services-extended-support/index.yml) | Microsoft.Compute/cloudServices/roles | [**Yes**](./essentials/metrics-supported.md#microsoftcomputecloudservicesroles) | No | | Agent required to monitor guest operating system and workflows.|
- | [Azure Virtual Machines](../virtual-machines/index.yml)<br />[Virtual Machine Scale Sets](../virtual-machine-scale-sets/index.yml) | Microsoft.Compute/disks | [**Yes**](./essentials/metrics-supported.md#microsoftcomputedisks) | No | [VM Insights](/azure/azure-monitor/insights/vminsights-overview) | |
- | [Azure Virtual Machines](../virtual-machines/index.yml)<br />[Virtual Machine Scale Sets](../virtual-machine-scale-sets/index.yml) | Microsoft.Compute/virtualMachines | [**Yes**](./essentials/metrics-supported.md#microsoftcomputevirtualmachines) | No | [VM Insights](/azure/azure-monitor/insights/vminsights-overview) | Agent required to monitor guest operating system and workflows.|
- | [Azure Virtual Machines](../virtual-machines/index.yml)<br />[Virtual Machine Scale Sets](../virtual-machine-scale-sets/index.yml) | Microsoft.Compute/virtualMachineScaleSets | [**Yes**](./essentials/metrics-supported.md#microsoftcomputevirtualmachinescalesets) | No | [VM Insights](/azure/azure-monitor/insights/vminsights-overview) | Agent required to monitor guest operating system and workflows.|
- | [Azure Virtual Machines](../virtual-machines/index.yml)<br />[Virtual Machine Scale Sets](../virtual-machine-scale-sets/index.yml) | Microsoft.Compute/virtualMachineScaleSets/virtualMachines | [**Yes**](./essentials/metrics-supported.md#microsoftcomputevirtualmachinescalesetsvirtualmachines) | No | [VM Insights](/azure/azure-monitor/insights/vminsights-overview) | Agent required to monitor guest operating system and workflows.|
+ | [Azure Virtual Machines](../virtual-machines/index.yml)<br />[Azure Virtual Machine Scale Sets](../virtual-machine-scale-sets/index.yml) | Microsoft.Compute/disks | [**Yes**](./essentials/metrics-supported.md#microsoftcomputedisks) | No | [VM Insights](/azure/azure-monitor/insights/vminsights-overview) | |
+ | [Azure Virtual Machines](../virtual-machines/index.yml)<br />[Azure Virtual Machine Scale Sets](../virtual-machine-scale-sets/index.yml) | Microsoft.Compute/virtualMachines | [**Yes**](./essentials/metrics-supported.md#microsoftcomputevirtualmachines) | No | [VM Insights](/azure/azure-monitor/insights/vminsights-overview) | Agent required to monitor guest operating system and workflows.|
+ | [Azure Virtual Machines](../virtual-machines/index.yml)<br />[Azure Virtual Machine Scale Sets](../virtual-machine-scale-sets/index.yml) | Microsoft.Compute/virtualMachineScaleSets | [**Yes**](./essentials/metrics-supported.md#microsoftcomputevirtualmachinescalesets) | No | [VM Insights](/azure/azure-monitor/insights/vminsights-overview) | Agent required to monitor guest operating system and workflows.|
+ | [Azure Virtual Machines](../virtual-machines/index.yml)<br />[Azure Virtual Machine Scale Sets](../virtual-machine-scale-sets/index.yml) | Microsoft.Compute/virtualMachineScaleSets/virtualMachines | [**Yes**](./essentials/metrics-supported.md#microsoftcomputevirtualmachinescalesetsvirtualmachines) | No | [VM Insights](/azure/azure-monitor/insights/vminsights-overview) | Agent required to monitor guest operating system and workflows.|
| [Microsoft Connected Vehicle Platform](https://azure.microsoft.com/blog/microsoft-connected-vehicle-platform-trends-and-investment-areas/) | Microsoft.ConnectedVehicle/platformAccounts | [**Yes**](./essentials/metrics-supported.md#microsoftconnectedvehicleplatformaccounts) | [**Yes**](./essentials/resource-logs-categories.md#microsoftconnectedvehicleplatformaccounts) | | |
| [Azure Container Instances](../container-instances/index.yml) | Microsoft.ContainerInstance/containerGroups | [**Yes**](./essentials/metrics-supported.md#microsoftcontainerinstancecontainergroups) | No | [Container Insights](/azure/azure-monitor/insights/container-insights-overview) | |
| [Azure Container Registry](../container-registry/index.yml) | Microsoft.ContainerRegistry/registries | [**Yes**](./essentials/metrics-supported.md#microsoftcontainerregistryregistries) | [**Yes**](./essentials/resource-logs-categories.md#microsoftcontainerregistryregistries) | | |
- | [Azure Kubernetes Service (AKS)](../aks/index.yml) | Microsoft.ContainerService/managedClusters | [**Yes**](./essentials/metrics-supported.md#microsoftcontainerservicemanagedclusters) | [**Yes**](./essentials/resource-logs-categories.md#microsoftcontainerservicemanagedclusters) | [Container Insights](/azure/azure-monitor/insights/container-insights-overview) | |
+ | [Azure Kubernetes Service](../aks/index.yml) | Microsoft.ContainerService/managedClusters | [**Yes**](./essentials/metrics-supported.md#microsoftcontainerservicemanagedclusters) | [**Yes**](./essentials/resource-logs-categories.md#microsoftcontainerservicemanagedclusters) | [Container Insights](/azure/azure-monitor/insights/container-insights-overview) | |
| [Azure Custom Providers](../azure-resource-manager/custom-providers/index.yml) | Microsoft.CustomProviders/resourceProviders | [**Yes**](./essentials/metrics-supported.md#microsoftcustomprovidersresourceproviders) | [**Yes**](./essentials/resource-logs-categories.md#microsoftcustomprovidersresourceproviders) | | |
| [Microsoft Dynamics 365 Customer Insights](/dynamics365/customer-insights/) | Microsoft.D365CustomerInsights/instances | No | [**Yes**](./essentials/resource-logs-categories.md#microsoftd365customerinsightsinstances) | | |
| [Azure Stack Edge](../databox-online/azure-stack-edge-overview.md) | Microsoft.DataBoxEdge/DataBoxEdgeDevices | [**Yes**](./essentials/metrics-supported.md#microsoftdataboxedgedataboxedgedevices) | No | | |
The following table lists Azure services and the data they collect into Azure Mo
| [Azure HDInsight](../hdinsight/index.yml) | Microsoft.HDInsight/clusters | [**Yes**](./essentials/metrics-supported.md#microsofthdinsightclusters) | No | [Azure HDInsight (preview)](../hdinsight/log-analytics-migration.md#insights) | |
| [Azure API for FHIR](../healthcare-apis/index.yml) | Microsoft.HealthcareApis/services | [**Yes**](./essentials/metrics-supported.md#microsofthealthcareapisservices) | [**Yes**](./essentials/resource-logs-categories.md#microsofthealthcareapisservices) | | |
| [Azure API for FHIR](../healthcare-apis/index.yml) | Microsoft.HealthcareApis/workspaces/iotconnectors | [**Yes**](./essentials/metrics-supported.md#microsofthealthcareapisworkspacesiotconnectors) | No | | |
- | [StorSimple](../storsimple/index.yml) | microsoft.hybridnetwork/networkfunctions | [**Yes**](./essentials/metrics-supported.md#microsofthybridnetworknetworkfunctions) | No | | |
- | [StorSimple](../storsimple/index.yml) | microsoft.hybridnetwork/virtualnetworkfunctions | [**Yes**](./essentials/metrics-supported.md#microsofthybridnetworkvirtualnetworkfunctions) | No | | |
+ | [Azure StorSimple](../storsimple/index.yml) | microsoft.hybridnetwork/networkfunctions | [**Yes**](./essentials/metrics-supported.md#microsofthybridnetworknetworkfunctions) | No | | |
+ | [Azure StorSimple](../storsimple/index.yml) | microsoft.hybridnetwork/virtualnetworkfunctions | [**Yes**](./essentials/metrics-supported.md#microsofthybridnetworkvirtualnetworkfunctions) | No | | |
| [Azure Monitor](./index.yml) | microsoft.insights/autoscalesettings | [**Yes**](./essentials/metrics-supported.md#microsoftinsightsautoscalesettings) | [**Yes**](./essentials/resource-logs-categories.md#microsoftinsightsautoscalesettings) | | |
| [Azure Monitor](./index.yml) | microsoft.insights/components | [**Yes**](./essentials/metrics-supported.md#microsoftinsightscomponents) | [**Yes**](./essentials/resource-logs-categories.md#microsoftinsightscomponents) | [Azure Monitor Application Insights](./app/app-insights-overview.md) | |
| [Azure IoT Central](../iot-central/index.yml) | Microsoft.IoTCentral/IoTApps | [**Yes**](./essentials/metrics-supported.md#microsoftiotcentraliotapps) | No | | |
| [Azure Key Vault](../key-vault/index.yml) | Microsoft.KeyVault/managedHSMs | [**Yes**](./essentials/metrics-supported.md#microsoftkeyvaultmanagedhsms) | [**Yes**](./essentials/resource-logs-categories.md#microsoftkeyvaultmanagedhsms) | [Azure Key Vault Insights (preview)](../key-vault/key-vault-insights-overview.md) | |
| [Azure Key Vault](../key-vault/index.yml) | Microsoft.KeyVault/vaults | [**Yes**](./essentials/metrics-supported.md#microsoftkeyvaultvaults) | [**Yes**](./essentials/resource-logs-categories.md#microsoftkeyvaultvaults) | [Azure Key Vault Insights (preview)](../key-vault/key-vault-insights-overview.md) | |
- | [Azure Kubernetes Service (AKS)](../aks/index.yml) | Microsoft.Kubernetes/connectedClusters | [**Yes**](./essentials/metrics-supported.md#microsoftkubernetesconnectedclusters) | No | | |
+ | [Azure Kubernetes Service](../aks/index.yml) | Microsoft.Kubernetes/connectedClusters | [**Yes**](./essentials/metrics-supported.md#microsoftkubernetesconnectedclusters) | No | | |
| [Azure Data Explorer](/azure/data-explorer/) | Microsoft.Kusto/clusters | [**Yes**](./essentials/metrics-supported.md#microsoftkustoclusters) | [**Yes**](./essentials/resource-logs-categories.md#microsoftkustoclusters) | | |
| [Azure Logic Apps](../logic-apps/index.yml) | Microsoft.Logic/integrationAccounts | No | [**Yes**](./essentials/resource-logs-categories.md#microsoftlogicintegrationaccounts) | | |
| [Azure Logic Apps](../logic-apps/index.yml) | Microsoft.Logic/integrationServiceEnvironments | [**Yes**](./essentials/metrics-supported.md#microsoftlogicintegrationserviceenvironments) | No | | |
The following table lists Azure services and the data they collect into Azure Mo
| [Azure Spatial Anchors](../spatial-anchors/index.yml) | Microsoft.MixedReality/spatialAnchorsAccounts | [**Yes**](./essentials/metrics-supported.md#microsoftmixedrealityspatialanchorsaccounts) | No | | |
| [Azure NetApp Files](../azure-netapp-files/index.yml) | Microsoft.NetApp/netAppAccounts/capacityPools | [**Yes**](./essentials/metrics-supported.md#microsoftnetappnetappaccountscapacitypools) | No | | |
| [Azure NetApp Files](../azure-netapp-files/index.yml) | Microsoft.NetApp/netAppAccounts/capacityPools/volumes | [**Yes**](./essentials/metrics-supported.md#microsoftnetappnetappaccountscapacitypoolsvolumes) | No | | |
- | [Application Gateway](../application-gateway/index.yml) | Microsoft.Network/applicationGateways | [**Yes**](./essentials/metrics-supported.md#microsoftnetworkapplicationgateways) | [**Yes**](./essentials/resource-logs-categories.md#microsoftnetworkapplicationgateways) | | |
+ | [Azure Application Gateway](../application-gateway/index.yml) | Microsoft.Network/applicationGateways | [**Yes**](./essentials/metrics-supported.md#microsoftnetworkapplicationgateways) | [**Yes**](./essentials/resource-logs-categories.md#microsoftnetworkapplicationgateways) | | |
| [Azure Firewall](../firewall/index.yml) | Microsoft.Network/azureFirewalls | [**Yes**](./essentials/metrics-supported.md#microsoftnetworkazurefirewalls) | [**Yes**](./essentials/resource-logs-categories.md#microsoftnetworkazurefirewalls) | | |
| [Azure Bastion](../bastion/index.yml) | Microsoft.Network/bastionHosts | [**Yes**](./essentials/metrics-supported.md#microsoftnetworkbastionhosts) | [**Yes**](./essentials/resource-logs-categories.md#microsoftnetworkbastionhosts) | | |
- | [VPN Gateway](../vpn-gateway/index.yml) | Microsoft.Network/connections | [**Yes**](./essentials/metrics-supported.md#microsoftnetworkconnections) | No | | |
+ | [Azure VPN Gateway](../vpn-gateway/index.yml) | Microsoft.Network/connections | [**Yes**](./essentials/metrics-supported.md#microsoftnetworkconnections) | No | | |
| [Azure DNS](../dns/index.yml) | Microsoft.Network/dnszones | [**Yes**](./essentials/metrics-supported.md#microsoftnetworkdnszones) | No | | |
| [Azure ExpressRoute](../expressroute/index.yml) | Microsoft.Network/expressRouteCircuits | [**Yes**](./essentials/metrics-supported.md#microsoftnetworkexpressroutecircuits) | [**Yes**](./essentials/resource-logs-categories.md#microsoftnetworkexpressroutecircuits) | | |
| [Azure ExpressRoute](../expressroute/index.yml) | Microsoft.Network/expressRouteGateways | [**Yes**](./essentials/metrics-supported.md#microsoftnetworkexpressroutegateways) | No | | |
The following table lists Azure services and the data they collect into Azure Mo
| [Azure Resource Manager](../azure-resource-manager/index.yml) | Microsoft.Resources/subscriptions | [**Yes**](./essentials/metrics-supported.md#microsoftresourcessubscriptions) | No | | |
| [Azure Cognitive Search](../search/index.yml) | Microsoft.Search/searchServices | [**Yes**](./essentials/metrics-supported.md#microsoftsearchsearchservices) | [**Yes**](./essentials/resource-logs-categories.md#microsoftsearchsearchservices) | | |
| [Azure Service Bus](/azure/service-bus/) | Microsoft.ServiceBus/namespaces | [**Yes**](./essentials/metrics-supported.md#microsoftservicebusnamespaces) | [**Yes**](./essentials/resource-logs-categories.md#microsoftservicebusnamespaces) | [Azure Service Bus](/azure/service-bus/) | |
- | [Service Fabric](../service-fabric/index.yml) | Microsoft.ServiceFabric | No | No | [Service Fabric](../service-fabric/index.yml) | Agent required to monitor guest operating system and workflows.|
+ | [Azure Service Fabric](../service-fabric/index.yml) | Microsoft.ServiceFabric | No | No | [Service Fabric](../service-fabric/index.yml) | Agent required to monitor guest operating system and workflows.|
| [Azure SignalR Service](../azure-signalr/index.yml) | Microsoft.SignalRService/SignalR | [**Yes**](./essentials/metrics-supported.md#microsoftsignalrservicesignalr) | [**Yes**](./essentials/resource-logs-categories.md#microsoftsignalrservicesignalr) | | |
| [Azure SignalR Service](../azure-signalr/index.yml) | Microsoft.SignalRService/WebPubSub | [**Yes**](./essentials/metrics-supported.md#microsoftsignalrservicewebpubsub) | [**Yes**](./essentials/resource-logs-categories.md#microsoftsignalrservicewebpubsub) | | |
| [Azure SQL Managed Instance](/azure/azure-sql/database/monitoring-tuning-index) | Microsoft.Sql/managedInstances | [**Yes**](./essentials/metrics-supported.md#microsoftsqlmanagedinstances) | [**Yes**](./essentials/resource-logs-categories.md#microsoftsqlmanagedinstances) | [Azure SQL Insights (preview)](/azure/azure-sql/database/sql-insights-overview) | |
| [Azure SQL Database](/azure/azure-sql/database/index) | Microsoft.Sql/servers/databases | [**Yes**](./essentials/metrics-supported.md#microsoftsqlserversdatabases) | No | [Azure SQL Insights (preview)](/azure/azure-sql/database/sql-insights-overview) | |
| [Azure SQL Database](/azure/azure-sql/database/index) | Microsoft.Sql/servers/elasticpools | [**Yes**](./essentials/metrics-supported.md#microsoftsqlserverselasticpools) | No | [Azure SQL Insights (preview)](/azure/azure-sql/database/sql-insights-overview) | |
| [Azure Storage](../storage/index.yml) | Microsoft.Storage/storageAccounts | [**Yes**](./essentials/metrics-supported.md#microsoftstoragestorageaccounts) | No | [Azure Storage Insights](/azure/azure-monitor/insights/storage-insights-overview) | |
- | [Azure Storage Blobs](../storage/blobs/index.yml) | Microsoft.Storage/storageAccounts/blobServices | [**Yes**](./essentials/metrics-supported.md#microsoftstoragestorageaccountsblobservices) | [**Yes**](./essentials/resource-logs-categories.md#microsoftstoragestorageaccountsblobservices) | [Azure Storage Insights](/azure/azure-monitor/insights/storage-insights-overview) | |
- | [Azure Storage Files](../storage/files/index.yml) | Microsoft.Storage/storageAccounts/fileServices | [**Yes**](./essentials/metrics-supported.md#microsoftstoragestorageaccountsfileservices) | [**Yes**](./essentials/resource-logs-categories.md#microsoftstoragestorageaccountsfileservices) | [Azure Storage Insights](/azure/azure-monitor/insights/storage-insights-overview) | |
- | [Azure Storage Queue Services](../storage/queues/index.yml) | Microsoft.Storage/storageAccounts/queueServices | [**Yes**](./essentials/metrics-supported.md#microsoftstoragestorageaccountsqueueservices) | [**Yes**](./essentials/resource-logs-categories.md#microsoftstoragestorageaccountsqueueservices) | [Azure Storage Insights](/azure/azure-monitor/insights/storage-insights-overview) | |
- | [Azure Table Services](../storage/tables/index.yml) | Microsoft.Storage/storageAccounts/tableServices | [**Yes**](./essentials/metrics-supported.md#microsoftstoragestorageaccountstableservices) | [**Yes**](./essentials/resource-logs-categories.md#microsoftstoragestorageaccountstableservices) | [Azure Storage Insights](/azure/azure-monitor/insights/storage-insights-overview) | |
+ | [Azure Blob Storage](../storage/blobs/index.yml) | Microsoft.Storage/storageAccounts/blobServices | [**Yes**](./essentials/metrics-supported.md#microsoftstoragestorageaccountsblobservices) | [**Yes**](./essentials/resource-logs-categories.md#microsoftstoragestorageaccountsblobservices) | [Azure Storage Insights](/azure/azure-monitor/insights/storage-insights-overview) | |
+ | [Azure Files](../storage/files/index.yml) | Microsoft.Storage/storageAccounts/fileServices | [**Yes**](./essentials/metrics-supported.md#microsoftstoragestorageaccountsfileservices) | [**Yes**](./essentials/resource-logs-categories.md#microsoftstoragestorageaccountsfileservices) | [Azure Storage Insights](/azure/azure-monitor/insights/storage-insights-overview) | |
+ | [Azure Queue Storage](../storage/queues/index.yml) | Microsoft.Storage/storageAccounts/queueServices | [**Yes**](./essentials/metrics-supported.md#microsoftstoragestorageaccountsqueueservices) | [**Yes**](./essentials/resource-logs-categories.md#microsoftstoragestorageaccountsqueueservices) | [Azure Storage Insights](/azure/azure-monitor/insights/storage-insights-overview) | |
+ | [Azure Table Storage](../storage/tables/index.yml) | Microsoft.Storage/storageAccounts/tableServices | [**Yes**](./essentials/metrics-supported.md#microsoftstoragestorageaccountstableservices) | [**Yes**](./essentials/resource-logs-categories.md#microsoftstoragestorageaccountstableservices) | [Azure Storage Insights](/azure/azure-monitor/insights/storage-insights-overview) | |
| [Azure HPC Cache](../hpc-cache/index.yml) | Microsoft.StorageCache/caches | [**Yes**](./essentials/metrics-supported.md#microsoftstoragecachecaches) | No | | |
| [Azure Storage](../storage/index.yml) | Microsoft.StorageSync/storageSyncServices | [**Yes**](./essentials/metrics-supported.md#microsoftstoragesyncstoragesyncservices) | No | [Azure Storage Insights](/azure/azure-monitor/insights/storage-insights-overview) | |
| [Azure Stream Analytics](../stream-analytics/index.yml) | Microsoft.StreamAnalytics/streamingjobs | [**Yes**](./essentials/metrics-supported.md#microsoftstreamanalyticsstreamingjobs) | [**Yes**](./essentials/resource-logs-categories.md#microsoftstreamanalyticsstreamingjobs) | | |
The following table lists Azure services and the data they collect into Azure Mo
| [Azure App Service](../app-service/index.yml)<br />[Azure Functions](../azure-functions/index.yml) | Microsoft.Web/sites/slots | [**Yes**](./essentials/metrics-supported.md#microsoftwebsitesslots) | [**Yes**](./essentials/resource-logs-categories.md#microsoftwebsitesslots) | [Azure Monitor Application Insights](./app/app-insights-overview.md) | |
| [Azure App Service](../app-service/index.yml)<br />[Azure Functions](../azure-functions/index.yml) | Microsoft.Web/staticSites | [**Yes**](./essentials/metrics-supported.md#microsoftwebstaticsites) | No | [Azure Monitor Application Insights](./app/app-insights-overview.md) | |

## Next steps

-- Read more about the [Azure Monitor data platform which stores the logs and metrics collected by insights and solutions](data-platform.md).
+- Read more about the [Azure Monitor data platform that stores the logs and metrics collected by insights and solutions](data-platform.md).
- Complete a [tutorial on monitoring an Azure resource](essentials/tutorial-resource-logs.md).
- Complete a [tutorial on writing a log query to analyze data in Azure Monitor Logs](essentials/tutorial-resource-logs.md).
- Complete a [tutorial on creating a metrics chart to analyze data in Azure Monitor Metrics](essentials/tutorial-metrics.md).
cognitive-services Concepts Exploration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/concepts-exploration.md
Title: Exploration - Personalizer
-description: With exploration, Personalizer is able to continue delivering good results, even as user behavior changes. Choosing an exploration setting is a business decision about the proportion of user interactions to explore with, in order to improve the model.
+description: With exploration, Personalizer is able to continuously deliver good results, even as user behavior changes. Choosing an exploration setting is a business decision about the proportion of user interactions to explore with, in order to improve the model.
ms. Previously updated : 10/23/2019 Last updated : 08/28/2022
-# Exploration and exploitation
+# Exploration and known relevance
-With exploration, Personalizer is able to continue delivering good results, even as user behavior changes.
+With exploration, Personalizer is able to continuously deliver good results, even as user behavior changes.
When Personalizer receives a Rank call, it returns a RewardActionID that either:
-* Uses exploitation to match the most probable user behavior based on the current machine learning model.
+* Uses known relevance to match the most probable user behavior based on the current machine learning model.
* Uses exploration, which does not match the action that has the highest probability in the rank. Personalizer currently uses an algorithm called *epsilon greedy* to explore.
Personalizer currently uses an algorithm called *epsilon greedy* to explore.
You configure the percentage of traffic to use for exploration in the Azure portal's **Configuration** page for Personalizer. This setting determines the percentage of Rank calls that perform exploration.
-Personalizer determines whether to explore or exploit with this probability on each rank call. This is different than the behavior in some A/B frameworks that lock a treatment on specific user IDs.
+Personalizer determines whether to explore or use the model's learned best action with this probability on each rank call. This is different from the behavior in some A/B frameworks that lock a treatment on specific user IDs.
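The per-call epsilon-greedy decision described above can be sketched in a few lines of Python. This is an illustrative sketch, not the Personalizer SDK: `rank`, `actions`, and `scores` are hypothetical names, and the real service scores actions with its machine learning model rather than a static list.

```python
import random

def rank(actions, scores, exploration_pct=0.2, rng=random):
    """Epsilon-greedy sketch: with probability `exploration_pct`, return a
    random action (explore); otherwise return the highest-scoring action,
    standing in for the model's learned best action."""
    if rng.random() < exploration_pct:
        return rng.choice(actions)  # explore: ignore the current best
    return actions[scores.index(max(scores))]  # use the learned best action

# With exploration set to 0%, the learned best action is always returned,
# which is why a zero setting prevents the model from adapting.
best = rank(["news", "sports", "weather"], [0.1, 0.9, 0.3], exploration_pct=0.0)
```

Because the coin is flipped independently on every call, the same user can be explored on one request and served the best action on the next, unlike A/B frameworks that pin a treatment to a user ID.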
## Best practices for choosing an exploration setting
A setting of zero will negate many of the benefits of Personalizer. With this se
A setting that is too high will negate the benefits of learning from user behavior. Setting it to 100% implies a constant randomization, and any learned behavior from users would not influence the outcome.
-It is important not to change the application behavior based on whether you see if Personalizer is exploring or exploiting. This would lead to learning biases that ultimately would decrease the potential performance.
+It is important not to change the application's behavior based on whether Personalizer is exploring or using the learned best action. Doing so would introduce learning biases that ultimately decrease the potential performance.
## Next steps
-[Reinforcement learning](concepts-reinforcement-learning.md)
+[Reinforcement learning](concepts-reinforcement-learning.md)
communication-services Call Recording https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/call-recording.md
Run-time control APIs can be used to manage recording via internal business logi
## Event Grid notifications

> [!NOTE]
-> Azure Communication Services provides short term media storage for recordings. **Export any recorded content you wish to preserve within 48 hours.** After 48 hours, recordings will no longer be available.
+> Azure Communication Services provides short-term media storage for recordings. **Recordings are available to download for 48 hours.** After 48 hours, recordings are no longer available.
An Event Grid notification `Microsoft.Communication.RecordingFileStatusUpdated` is published when a recording is ready for retrieval, typically a few minutes after the recording process has completed (e.g. meeting ended, recording stopped). Recording event notifications include `contentLocation` and `metadataLocation`, which are used to retrieve both recorded media and a recording metadata file.
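A handler for the notification above mainly needs to pull the chunk locations out of the event payload so the media can be downloaded within the 48-hour window. The following Python sketch parses a hand-written sample payload; the nesting under `data.recordingStorageInfo.recordingChunks` follows the event described above, but the URLs and the exact sample are illustrative assumptions, not a captured event.

```python
import json

# Illustrative RecordingFileStatusUpdated payload (hand-written sample).
event = json.loads("""
{
  "eventType": "Microsoft.Communication.RecordingFileStatusUpdated",
  "data": {
    "recordingStorageInfo": {
      "recordingChunks": [
        { "contentLocation": "https://contoso.example/content/chunk-1",
          "metadataLocation": "https://contoso.example/metadata/chunk-1" }
      ]
    }
  }
}
""")

def recording_locations(event):
    """Collect the (contentLocation, metadataLocation) pairs that must be
    downloaded before the 48-hour retention window expires."""
    chunks = event["data"]["recordingStorageInfo"]["recordingChunks"]
    return [(c["contentLocation"], c["metadataLocation"]) for c in chunks]

locations = recording_locations(event)
```

In a real subscriber you would fetch each pair of URLs with an authenticated request and persist the content to your own storage.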
cost-management-billing Quick Create Budget Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/quick-create-budget-bicep.md
Previously updated : 07/06/2022 Last updated : 08/26/2022
One Azure resource is defined in the Bicep file:
```azurecli
myContactEmails ='("user1@contoso.com", "user2@contoso.com")'
- myContactGroups ='("action-group-resource-id-01", "action-group-resource-id-02")'
+ myContactGroups ='("/subscriptions/{sub-id}/resourceGroups/{rg-name}/providers/microsoft.insights/actionGroups/groupone", "/subscriptions/{sub-id}/resourceGroups/{rg-name}/providers/microsoft.insights/actionGroups/grouptwo")'
myRgFilterValues ='("resource-group-01", "resource-group-02")'
myMeterCategoryFilterValues ='("meter-category-01", "meter-category-02")'
One Azure resource is defined in the Bicep file:
```azurepowershell
$myContactEmails = @("user1@contoso.com", "user2@contoso.com")
- $myContactGroups = @("action-group-resource-id-01", "action-group-resource-id-02")
+ $myContactGroups = @("/subscriptions/{sub-id}/resourceGroups/{rg-name}/providers/microsoft.insights/actionGroups/groupone", "/subscriptions/{sub-id}/resourceGroups/{rg-name}/providers/microsoft.insights/actionGroups/grouptwo")
$myRgFilterValues = @("resource-group-01", "resource-group-02")
$myMeterCategoryFilterValues = @("meter-category-01", "meter-category-02")
One Azure resource is defined in the Bicep file:
- **startDate**: Replace **\<start-date\>** with the start date. It must be the first of the month in YYYY-MM-DD format. A future start date shouldn't be more than three months in the future. A past start date should be selected within the timegrain period.
- **endDate**: Replace **\<end-date\>** with the end date in YYYY-MM-DD format. If not provided, it defaults to ten years from the start date.
- **contactEmails**: First create a variable that holds your emails and then pass that variable. Replace the sample emails with the email addresses to send the budget notification to when the threshold is exceeded.
- - **contactGroups**: First create a variable that holds your contact groups and then pass that variable. Replace the sample contact groups with the list of action groups to send the budget notification to when the threshold is exceeded.
+ - **contactGroups**: First create a variable that holds your contact groups and then pass that variable. Replace the sample contact groups with the list of action groups to send the budget notification to when the threshold is exceeded. You must pass the resource ID of the action group, which you can get with [az monitor action-group show](/cli/azure/monitor/action-group#az-monitor-action-group-show) or [Get-AzActionGroup](/powershell/module/az.monitor/get-azactiongroup).
- **resourceGroupFilterValues**: First create a variable that holds your resource group filter values and then pass that variable. Replace the sample filter values with the set of values for your resource group filter.
- **meterCategoryFilterValues**: First create a variable that holds your meter category filter values and then pass that variable. Replace the sample filter values within parentheses with the set of values for your meter category filter.
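The `contactGroups` values shown earlier follow the standard Azure resource ID pattern for action groups. As a quick sanity check, the helper below assembles that ID from its parts; the subscription GUID, resource group, and group names are placeholders, and in practice you would take the real ID from `az monitor action-group show --query id` or `Get-AzActionGroup` instead of building it by hand.

```python
def action_group_id(subscription_id: str, resource_group: str, name: str) -> str:
    """Build the full resource ID format that the contactGroups parameter
    expects. All argument values used below are placeholders."""
    return (
        f"/subscriptions/{subscription_id}"
        f"/resourceGroups/{resource_group}"
        f"/providers/microsoft.insights/actionGroups/{name}"
    )

# Two placeholder action groups, mirroring the sample variables above.
my_contact_groups = [
    action_group_id("00000000-0000-0000-0000-000000000000", "my-rg", "groupone"),
    action_group_id("00000000-0000-0000-0000-000000000000", "my-rg", "grouptwo"),
]
```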
data-factory Ci Cd Github Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/ci-cd-github-troubleshoot-guide.md
Previously updated : 06/12/2022 Last updated : 08/26/2022 # Troubleshoot CI-CD, Azure DevOps, and GitHub issues in Azure Data Factory and Synapse Analytics
If you are using old default parameterization template, new way to include globa
The default parameterization template should include all values from the global parameter list.

#### Resolution
-Use updated [default parameterization template.](https://docs.microsoft.com/azure/data-factory/continuous-integration-delivery-resource-manager-custom-parameters#default-parameterization-template) as one time migration to new method of including global parameters. This template references to all values in global parameter list.
+Use the updated [default parameterization template](https://docs.microsoft.com/azure/data-factory/continuous-integration-delivery-resource-manager-custom-parameters#default-parameterization-template) as a one-time migration to the new method of including global parameters. This template references all values in the global parameter list. You also have to update the deployment task in the **release pipeline** if you're already overriding the template parameters there.
### Error code: InvalidTemplate
defender-for-cloud Defender For Containers Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-architecture.md
Title: Container security architecture in Microsoft Defender for Cloud description: Learn about the architecture of Microsoft Defender for Containers for each container platform++ Last updated 06/19/2022
These components are required in order to receive the full protection offered by
- **[Kubernetes audit logs](https://kubernetes.io/docs/tasks/debug-application-cluster/audit/)** – [GCP Cloud Logging](https://cloud.google.com/logging/) enables and collects audit log data through an agentless collector, and sends the collected information to the Microsoft Defender for Cloud backend for further analysis. -- **[Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md)** - An agent based solution that connects your EKS clusters to Azure. Azure then is capable of providing services such as Defender, and Policy as [Arc extensions](../azure-arc/kubernetes/extensions.md).
+- **[Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md)** - An agent based solution that connects your GKE clusters to Azure. Azure then is capable of providing services such as Defender, and Policy as [Arc extensions](../azure-arc/kubernetes/extensions.md).
- **The Defender extension** – The [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) that collects signals from hosts using [eBPF technology](https://ebpf.io/), and provides runtime protection. The extension is registered with a Log Analytics workspace, and used as a data pipeline. However, the audit log data isn't stored in the Log Analytics workspace.
defender-for-cloud Defender For Containers Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-usage.md
Some images may reuse tags from an image that was already scanned. For example,
## Next steps
-Learn more about the [advanced protection plans of Microsoft Defender for Cloud](defender-for-cloud-introduction.md).
+Learn more about the [advanced protection plans of Microsoft Defender for Cloud](enhanced-security-features-overview.md).
defender-for-cloud Recommendations Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/recommendations-reference.md
description: This article lists Microsoft Defender for Cloud's security recommen
Previously updated : 05/19/2022 Last updated : 08/24/2022
expressroute Expressroute Howto Coexist Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-coexist-resource-manager.md
The steps to configure both scenarios are covered in this article. This article
## Limits and limitations

* **Only route-based VPN gateway is supported.** You must use a route-based [VPN gateway](../vpn-gateway/vpn-gateway-about-vpngateways.md). You also can use a route-based VPN gateway with a VPN connection configured for 'policy-based traffic selectors' as described in [Connect to multiple policy-based VPN devices](../vpn-gateway/vpn-gateway-connect-multiple-policybased-rm-ps.md).
* **ExpressRoute-VPN Gateway coexist configurations are not supported on the Basic SKU**.
-* **If you want to use transit routing between ExpressRoute and VPN, the ASN of Azure VPN Gateway must be set to 65515.** Azure VPN Gateway supports the BGP routing protocol. For ExpressRoute and Azure VPN to work together, you must keep the Autonomous System Number of your Azure VPN gateway at its default value, 65515. If you previously selected an ASN other than 65515 and you change the setting to 65515, you must reset the VPN gateway for the setting to take effect.
+* **If you want to use transit routing between ExpressRoute and VPN, the ASN of Azure VPN Gateway must be set to 65515 and Azure Route Server should be used.** Azure VPN Gateway supports the BGP routing protocol. For ExpressRoute and Azure VPN to work together, you must keep the Autonomous System Number of your Azure VPN gateway at its default value, 65515. If you previously selected an ASN other than 65515 and you change the setting to 65515, you must reset the VPN gateway for the setting to take effect.
* **The gateway subnet must be /27 or a shorter prefix** (such as /26 or /25), or you'll receive an error message when you add the ExpressRoute virtual network gateway.
* **Coexistence in a dual-stack vnet is not supported.** If you are using ExpressRoute IPv6 support and a dual-stack ExpressRoute gateway, coexistence with VPN Gateway will not be possible.
hdinsight Hdinsight Troubleshoot Cluster Creation Fails https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/hdinsight-troubleshoot-cluster-creation-fails.md
description: Learn how to troubleshoot Apache cluster creation issues for Azure
Previously updated : 04/14/2020 Last updated : 08/28/2022 #Customer intent: As an HDInsight user, I would like to understand how to resolve common cluster creation failures.
hdinsight Hdinsight Use Sqoop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/hdinsight-use-sqoop.md
Title: Run Apache Sqoop jobs with Azure HDInsight (Apache Hadoop)
description: Learn how to use Azure PowerShell from a workstation to run Sqoop import and export between a Hadoop cluster and an Azure SQL database. Previously updated : 12/06/2019 Last updated : 08/28/2022 # Use Apache Sqoop with Hadoop in HDInsight
hdinsight Hbase Troubleshoot Hbase Hbck Inconsistencies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/hbase-troubleshoot-hbase-hbck-inconsistencies.md
Title: hbase hbck returns inconsistencies in Azure HDInsight
description: hbase hbck returns inconsistencies in Azure HDInsight Previously updated : 08/08/2019 Last updated : 08/28/2022 # Scenario: `hbase hbck` command returns inconsistencies in Azure HDInsight
If you didn't see your problem or are unable to solve your issue, visit one of t
* Connect with [@AzureSupport](https://twitter.com/azuresupport) - the official Microsoft Azure account for improving customer experience. Connecting the Azure community to the right resources: answers, support, and experts.
-* If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/).
+* If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/).
hdinsight Hdinsight Hadoop Oms Log Analytics Use Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-oms-log-analytics-use-queries.md
description: Learn how to run queries on Azure Monitor logs to monitor jobs runn
Previously updated : 12/02/2019 Last updated : 08/28/2022 # Query Azure Monitor logs to monitor HDInsight clusters
hdinsight Apache Spark Ipython Notebook Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-ipython-notebook-machine-learning.md
description: Tutorial - Step-by-step instructions on how to build Apache Spark m
Previously updated : 04/07/2020 Last updated : 08/28/2022 # Customer intent: As a developer new to Apache Spark and to Apache Spark in Azure HDInsight, I want to learn how to create a simple machine learning Spark application.
hdinsight Migrate Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/migrate-versions.md
Title: Migrate Apache Spark 2.1 or 2.2 workloads to 2.3 or 2.4 - Azure HDInsight
description: Learn how to migrate Apache Spark 2.1 and 2.2 to 2.3 or 2.4. Previously updated : 05/20/2020 Last updated : 08/28/2022 # Migrate Apache Spark 2.1 and 2.2 workloads to 2.3 and 2.4
hdinsight Optimize Data Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/optimize-data-storage.md
Title: Optimize data storage for Apache Spark - Azure HDInsight
description: Learn how to optimize data storage for use with Apache Spark on Azure HDInsight. Previously updated : 05/20/2020 Last updated : 08/28/2022 # Data storage optimization for Apache Spark
hdinsight Optimize Memory Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/optimize-memory-usage.md
Title: Optimize memory usage in Apache Spark - Azure HDInsight
description: Learn how to optimize memory usage in Apache Spark on Azure HDInsight. Previously updated : 05/20/2020 Last updated : 08/28/2022 # Memory usage optimization for Apache Spark
iot-dps Concepts X509 Attestation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/concepts-x509-attestation.md
Leaf certificates used with [Individual enrollment](./concepts-service.md#indivi
For enrollment groups, the subject common name (CN) also sets the device ID that is registered with IoT Hub. The device ID will be shown in the **Registration Records** for the authenticated device in the enrollment group. For individual enrollments, the device ID can be set in the enrollment entry. If it's not set in the enrollment entry, then the subject common name (CN) is used.
-To learn more, see [Authenticating devices signed with X.509 CA certificates](../iot-hub/iot-hub-x509ca-overview.md#authenticating-devices-signed-with-x509-ca-certificates).
+To learn more, see [Authenticate devices signed with X.509 CA certificates](../iot-hub/iot-hub-x509ca-overview.md#authenticate-devices-signed-with-x509-ca-certificates).
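The ID-selection rule above — an explicitly set enrollment ID wins, otherwise the certificate subject's common name (CN) becomes the device ID — can be sketched as follows. This is an illustrative Python simplification (naive subject-string parsing, hypothetical function name), not DPS service code:

```python
# Illustrative only: how a device ID could be chosen per the rules above.
# An explicit ID on an individual enrollment wins; otherwise the subject
# common name (CN) is used. Real X.509 subjects need a proper parser.
def resolve_device_id(subject, enrollment_id=None):
    if enrollment_id:  # individual enrollment with an ID set
        return enrollment_id
    fields = dict(part.strip().split("=", 1) for part in subject.split(","))
    return fields["CN"]

print(resolve_device_id("CN=device-01, O=Contoso"))                       # device-01
print(resolve_device_id("CN=device-01, O=Contoso", enrollment_id="dev"))  # dev
```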
## Controlling device access to the provisioning service with X.509 certificates
iot-dps Quick Create Simulated Device X509 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/quick-create-simulated-device-x509.md
In this section, you'll use OpenSSL to create a self-signed X.509 certificate an
> Use certificates created with OpenSSL in this quickstart for development testing only.
> Do not use these certificates in production.
> These certificates expire after 30 days and may contain hard-coded passwords, such as *1234*.
-> To learn about obtaining certificates suitable for use in production, see [How to get an X.509 CA certificate](../iot-hub/iot-hub-x509ca-overview.md#how-to-get-an-x509-ca-certificate) in the Azure IoT Hub documentation.
+> To learn about obtaining certificates suitable for use in production, see [How to get an X.509 CA certificate](../iot-hub/iot-hub-x509ca-overview.md#get-an-x509-ca-certificate) in the Azure IoT Hub documentation.
> Perform the steps in this section in your Git Bash prompt.
This article demonstrates an individual enrollment for a single device to be pro
* Select an IoT hub linked with your provisioning service.
* Update the **Initial device twin state** with the desired initial configuration for the device.
- :::image type="content" source="./media/quick-create-simulated-device-x509/add-individual-enrollment-with-cert.png" alt-text="Screenshot that shows adding an individual enrollment with X.509 attestation to D P S in Azure portal.":::
+ :::image type="content" source="./media/quick-create-simulated-device-x509/add-individual-enrollment-with-cert.png" alt-text="Screenshot that shows adding an individual enrollment with X.509 attestation to DPS in Azure portal.":::
7. Select **Save**. You'll be returned to **Manage enrollments**.
In this section, you update the sample code with your Device Provisioning Servic
1. Copy the **ID Scope** value.
- :::image type="content" source="./media/quick-create-simulated-device-x509/copy-id-scope.png" alt-text="Screenshot of the I D scope on Azure portal.":::
+ :::image type="content" source="./media/quick-create-simulated-device-x509/copy-id-scope.png" alt-text="Screenshot of the ID scope on Azure portal.":::
1. Launch Visual Studio and open the new solution file that was created in the `cmake` directory you created in the root of the azure-iot-sdk-c git repository. The solution file is named `azure_iot_sdks.sln`.
In this section, you'll use your Windows command prompt.
2. Copy the **ID Scope** value.
- :::image type="content" source="./media/quick-create-simulated-device-x509/copy-id-scope.png" alt-text="Screenshot of the I D scope on Azure portal.":::
+ :::image type="content" source="./media/quick-create-simulated-device-x509/copy-id-scope.png" alt-text="Screenshot of the ID scope on Azure portal.":::
-3. In your Windows command prompt, change to the X509Sample directory. This is located in the *.\azure-iot-samples-csharp\provisioning\Samples\device\X509Sample* directory off the directory where you cloned the samples on your computer.
+3. In your Windows command prompt, change to the X509Sample directory. This directory is located in the *.\azure-iot-samples-csharp\provisioning\Samples\device\X509Sample* directory off the directory where you cloned the samples on your computer.
4. Enter the following command to build and run the X.509 device provisioning sample (replace the `<IDScope>` value with the ID Scope that you copied in the previous section). The certificate file will default to *./certificate.pfx* and prompt for the .pfx password.
In this section, you'll use your Windows command prompt.
1. Copy the **ID Scope** and **Global device endpoint** values.
- :::image type="content" source="./media/quick-create-simulated-device-x509/copy-id-scope-and-global-device-endpoint.png" alt-text="Screenshot of the I D scope and global device endpoint on Azure portal.":::
+ :::image type="content" source="./media/quick-create-simulated-device-x509/copy-id-scope-and-global-device-endpoint.png" alt-text="Screenshot of the ID scope and global device endpoint on Azure portal.":::
1. In your Windows command prompt, go to the sample directory, and install the packages needed by the sample. The path shown is relative to the location where you cloned the SDK.
In this section, you'll use your Windows command prompt.
1. Copy the **ID Scope** and **Global device endpoint** values.
- :::image type="content" source="./media/quick-create-simulated-device-x509/copy-id-scope-and-global-device-endpoint.png" alt-text="Screenshot of the I D scope and global device endpoint on Azure portal.":::
+ :::image type="content" source="./media/quick-create-simulated-device-x509/copy-id-scope-and-global-device-endpoint.png" alt-text="Screenshot of the ID scope and global device endpoint on Azure portal.":::
1. In your Windows command prompt, go to the directory of the [provision_x509.py](https://github.com/Azure/azure-iot-sdk-python/blob/main/samples/async-hub-scenarios/provision_x509.py) sample. The path shown is relative to the location where you cloned the SDK.
In this section, you'll use both your Windows command prompt and your Git Bash p
1. Copy the **ID Scope** and **Global device endpoint** values.
- :::image type="content" source="./media/quick-create-simulated-device-x509/copy-id-scope-and-global-device-endpoint.png" alt-text="Screenshot of the I D scope and global device endpoint on Azure portal.":::
+ :::image type="content" source="./media/quick-create-simulated-device-x509/copy-id-scope-and-global-device-endpoint.png" alt-text="Screenshot of the ID scope and global device endpoint on Azure portal.":::
1. In your Windows command prompt, navigate to the sample project folder. The path shown is relative to the location where you cloned the SDK.
iot-hub-device-update Components Enumerator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/components-enumerator.md
Title: 'Register components with Device Update: Contoso Virtual Vacuum component
description: Follow a Contoso Virtual Vacuum example to implement your own component enumerator by using proxy update. Previously updated : 12/3/2021 Last updated : 08/25/2022
-# Register components with Device Update: Contoso Virtual Vacuum component enumerator
+# Register components with Device Update
-This article shows an example implementation of the Contoso Virtual Vacuum component enumerator. You can reference this example to implement a custom component enumerator for your Internet of Things (IoT) devices. A *component* is an identity beneath the device level that has a composition relationship with the host device.
+This article shows an example implementation of a Device Update for IoT Hub component enumerator. You can reference this example to implement a custom component enumerator for your IoT devices. A *component* is an identity beneath the device level that has a composition relationship with the host device.
-## What is Contoso Virtual Vacuum?
+This article demonstrates a component enumerator using a virtual IoT device called Contoso Virtual Vacuum. Component enumerators are used to implement the *proxy update* feature.
-Contoso Virtual Vacuum is a virtual IoT device that we use to demonstrate the *proxy update* feature.
-
-Proxy update enables updating multiple components on the same IoT device or multiple sensors connected to the IoT device with a single over-the-air deployment. Proxy update supports an installation order for updating components. It also supports multiple-step updating with pre-installation, installation, and post-installation capabilities.
+Proxy update enables updating multiple components on the same IoT device or multiple sensors connected to the IoT device with a single over-the-air deployment. Proxy update supports an installation order for updating components. It also supports multiple-step updating with pre-installation, installation, and post-installation capabilities.
Use cases where proxy updates are applicable include:
Use cases where proxy updates are applicable include:
- Targeting specific update files to apps or components on the device. - Targeting specific update files to sensors connected to IoT devices over a network protocol (for example, USB or CAN bus).
-The Device Update Agent runs on the host device. It can send each update to a specific component or to a group of components of the same hardware class (that is, requiring the same software or firmware update).
+For more information, see [Proxy updates and multi-component updating](device-update-proxy-updates.md).
+
+The Device Update agent runs on the host device. It can send each update to a specific component or to a group of components of the same hardware class (that is, requiring the same software or firmware update).
+
+## What is a component enumerator?
+
+A component enumerator is an extension for the Device Update agent that provides information about every component that you need for an over-the-air update via a host device's Azure IoT Hub connection.
+
+The Device Update agent is device and component agnostic. By itself, the agent doesn't know anything about components on (or connected to) a host device at the time of the update.
+
+To enable proxy updates, device builders must identify all the components on the device that can be updated and assign a unique name to each component. Also, a group name can be assigned to components of the same hardware class, so that the same update can be installed onto all components in the same group. Then, the update content handler can install and apply the update to the correct components.
++
+Here are the responsibilities of each part of the proxy update flow:
+
+- **Device builder**
+
+ - Design and build the device.
+
+ - Integrate the Device Update agent and its dependencies.
+
+ - Implement a device-specific component enumerator extension and register with the Device Update agent.
+
+ The component enumerator uses the information from a component inventory or a configuration file to augment static component data (Device Update required) with dynamic data (for example, firmware version, connection status, and hardware identity).
+
+ - Create a proxy update that contains one or more child updates that target one or more components on (or connected to) the device.
+
+ - Send the update to the solution operator.
+
+- **Solution operator**
+
+ - Import the update and manifest to the Device Update service.
+
+ - Deploy the update to a group of devices.
+
+- **Device Update agent**
+
+ - Get update information from IoT Hub via the device twin or module twin.
+
+ - Invoke a *steps handler* to process the proxy update intended for one or more components on the device.
+
+ The example in this article has two updates: `host-fw-1.1` and `motors-fw-1.1`. For each child update, the parent steps handler invokes a child steps handler to enumerate all components that match the `Compatibilities` properties specified in the child update's manifest file. Next, the handler downloads, installs, and applies the child update to all targeted components.
+
+ To get the matching components, the child update calls a `SelectComponents` API provided by the component enumerator. If there are no matching components, the child update is skipped.
+
+ - Collect all update results from parent and child updates, and report those results to IoT Hub.
+
+- **Child steps handler**
+
+ - Iterate through a list of component instances that are compatible with the child update content. For more information, see [Steps handler](https://github.com/Azure/iot-hub-device-update/tree/main/src/content_handlers/steps_handler).
+
+In production, device builders can use [existing handlers](https://github.com/Azure/iot-hub-device-update/tree/main/src/content_handlers) or implement a custom handler that invokes any installer needed for an over-the-air update. For more information, see [Implement a custom update content handler](https://github.com/Azure/iot-hub-device-update/tree/main/docs/agent-reference/how-to-implement-custom-update-handler.md).
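To make the matching step concrete, here's a minimal Python sketch of the kind of filtering a `SelectComponents`-style call performs — component names and properties follow the Contoso Virtual Vacuum example, but the function itself is illustrative, not the actual C API:

```python
# Illustrative sketch (not the actual C SelectComponents API): given the
# agent's component inventory and a child update's Compatibilities entries,
# return the components the child update should target.
def select_components(components, compatibilities):
    selected = []
    for component in components:
        for compat in compatibilities:
            # A component matches when every key/value pair in one
            # compatibility set matches the component's properties.
            if all(component.get(k) == v for k, v in compat.items()):
                selected.append(component)
                break
    return selected

inventory = [
    {"name": "hostfw", "group": "firmware", "model": "virtual-vacuum-v1"},
    {"name": "left-motor", "group": "motors", "model": "virtual-motor"},
    {"name": "right-motor", "group": "motors", "model": "virtual-motor"},
]

# A child update like motors-fw-1.1 could target all virtual-motor components.
targets = select_components(inventory, [{"group": "motors", "model": "virtual-motor"}])
print([c["name"] for c in targets])  # ['left-motor', 'right-motor']
```

If `select_components` returns an empty list, the child update is skipped, matching the behavior described above.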
## Virtual Vacuum components
-For this demonstration, Contoso Virtual Vacuum consists of five logical components:
+For this article, we use a virtual IoT device to demonstrate the key concepts and features. The Contoso Virtual Vacuum device consists of five logical components:
- Host firmware - Host boot file system
For this demonstration, Contoso Virtual Vacuum consists of five logical componen
:::image type="content" source="media/understand-device-update/contoso-virtual-vacuum-components-diagram.svg" alt-text="Diagram that shows the Contoso Virtual Vacuum components." lightbox="media/understand-device-update/contoso-virtual-vacuum-components-diagram.svg":::
-We used the following directory structure to simulate the components:
+The following directory structure simulates the components:
```sh
/usr/local/contoso-devices/vacuum-1/hostfw
```
We used the following directory structure to simulate the components:
Each component's directory contains a JSON file that stores a mock software version number for that component. Example JSON files are *firmware.json* and *diskimage.json*.
-> [!NOTE]
-> For this demo, to update the components' firmware, we'll copy *firmware.json* or *diskimage.json* (update payload) to the targeted components' directory.
+For this demo, to update the components' firmware, we'll copy *firmware.json* or *diskimage.json* (update payload) to the targeted components' directory.
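That copy-based mock update can be sketched in Python — the directory layout mirrors the simulation above, and the JSON content is illustrative:

```python
import json
import pathlib
import shutil
import tempfile

# Sketch of the demo's mock firmware update: copy the update payload
# (firmware.json) into the targeted component's directory, then read the
# version back to confirm the "install". Paths mirror the simulated layout.
root = pathlib.Path(tempfile.mkdtemp())
hostfw_dir = root / "contoso-devices" / "vacuum-1" / "hostfw"
hostfw_dir.mkdir(parents=True)

payload = root / "firmware.json"  # update payload (illustrative content)
payload.write_text(json.dumps({"version": "1.1"}))

shutil.copy(payload, hostfw_dir / "firmware.json")  # the "update"
installed = json.loads((hostfw_dir / "firmware.json").read_text())
print(installed["version"])  # 1.1
```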
Here's an example *firmware.json* file:
Here's an example *firmware.json* file:
```

> [!NOTE]
-> Contoso Virtual Vacuum contains software or firmware versions for the purpose of demonstrating proxy update. It doesn't provide any other functionality.
-
-## What is a component enumerator?
-
-A component enumerator is a Device Update Agent extension that provides information about every component that you need for an over-the-air update via a host device's Azure IoT Hub connection.
-
-The Device Update Agent is device and component agnostic. By itself, the agent doesn't know anything about components on (or connected to) a host device at the time of the update.
-
-To enable proxy updates, device builders must identify all updateable components on the device and assign a unique name to each component. Also, a group name can be assigned to components of the same hardware class, so that the same update can be installed onto all components in the same group. The update content handler can then install and apply the update to the correct components.
--
-Here are the responsibilities of each part of the proxy update flow:
--- **Device builder**
- - Design and build the device.
- - Integrate the Device Update Agent and its dependencies.
- - Implement a device-specific component enumerator extension and register with the Device Update Agent.
-
- The component enumerator uses the information from a component inventory or a configuration file to augment static component data (Device Update required) with dynamic data (for example, firmware version, connection status, and hardware identity).
- - Create a proxy update that contains one or more child updates that target one or more components on (or connected to) the device.
- - Send the update to the solution operator.
-- **Solution operator**
- - Import the update (and manifest) to the Device Update service.
- - Deploy the update to a group of devices.
-- **Device Update Agent**
- - Get update information from Azure IoT Hub (via device twin or module twin).
- - Invoke a *steps handler* to process the proxy update intended for one or more components on the device.
-
- This example has two updates: `host-fw-1.1` and `motors-fw-1.1`. For each child update, the parent steps handler invokes a child steps handler to enumerate all components that match the `Compatibilities` properties specified in the child update's manifest file. Next, the handler downloads, installs, and applies the child update to all targeted components.
-
- To get the matching components, the child update calls a `SelectComponents` API provided by the component enumerator. If there are no matching components, the child update is skipped.
- - Collect all update results from parent and child updates, and report those results to Azure IoT Hub.
-- **Child steps handler**
- - Iterate through a list of component instances that are compatible with the child update content. For more information, see [Steps handler](https://github.com/Azure/iot-hub-device-update/tree/main/src/content_handlers/steps_handler).
--
-In production, device builders can use [existing handlers](https://github.com/Azure/iot-hub-device-update/tree/main/src/content_handlers) or implement a custom handler that invokes any installer needed for an over-the-air update. For more information, see [Implement a custom update content handler](https://github.com/Azure/iot-hub-device-update/tree/main/docs/agent-reference/how-to-implement-custom-update-handler.md).
+> Contoso Virtual Vacuum contains software or firmware versions for the purpose of demonstrating proxy update. It doesn't provide any other functionality.
-## Implement a component enumerator for the Device Update Agent (C language)
+## Implement a component enumerator (C language)
### Requirements
The `ComponentInfo` JSON string must include the following properties:
| Name | Type | Description |
|--|--|--|
|`id`| string | A component's unique identity (device scope). Examples include hardware serial number, disk partition ID, and unique file path of the component.|
-|`name`| string| A component's logical name. This is the name that a device builder assigns to a component that's available in every device of the same `device` class.<br/><br/>For example, every Contoso Virtual Vacuum device contains a motor that drives a left wheel. Contoso assigned *left motor* as a common (logical) name for this motor to easily refer to this component, instead of hardware ID, which can be globally unique.|
+|`name`| string| A component's logical name. This property is the name that a device builder assigns to a component that's available in every device of the same `device` class.<br/><br/>For example, every Contoso Virtual Vacuum device contains a motor that drives a left wheel. Contoso assigned *left motor* as a common (logical) name for this motor to easily refer to this component, instead of hardware ID, which can be globally unique.|
|`group`|string|A group that this component belongs to.<br/><br/>For example, all motors could belong to a *motors* group.|
-|`manufacturer`|string|For a physical hardware component, this is a manufacturer or vendor name.<br/><br/>For a logical component, such as a disk partition or directory, it can be any device builder's defined value.|
-|`model`|string|For a physical hardware component, this is a model name.<br/><br/>For a logical component, such as a disk partition or directory, this can be any device builder's defined value.|
+|`manufacturer`|string|For a physical hardware component, this property is a manufacturer or vendor name.<br/><br/>For a logical component, such as a disk partition or directory, it can be any device builder's defined value.|
+|`model`|string|For a physical hardware component, this property is a model name.<br/><br/>For a logical component, such as a disk partition or directory, this property can be any device builder's defined value.|
|`properties`|object| A JSON object that contains any optional device-specific properties.|
-Here's an example of `ComponentInfo` code:
+Here's an example of `ComponentInfo` code based on the Contoso Virtual Vacuum components:
```json {
Here's an example of `ComponentInfo` code:
### Example return values
-Following is a JSON document returned from the `GetAllComponents` function. It's based on the example implementation of the Contoso component enumerator.
+Following is a JSON document returned from the `GetAllComponents` function. It's based on the example implementation of the Contoso Virtual Vacuum component enumerator.
```json {
Here's the parameter's output for the *hostfw* component:
## Inventory file
-The example implementation shown earlier for the Contoso component enumerator will read the device-specific components' information from the *component-inventory.json* file. Note that this example implementation is only for demonstration purposes.
+The example implementation shown earlier for the Contoso Virtual Vacuum component enumerator will read the device-specific components' information from the *component-inventory.json* file. This example implementation is only for demonstration purposes.
-In a production scenario, some properties should be retrieved directly from the actual components. These properties include `id`, `manufacturer`, and `model`.
+In a production scenario, some properties should be retrieved directly from the actual components. These properties include `id`, `manufacturer`, and `model`.
The device builder defines the `name` and `group` properties. These values should never change after they're defined. The `name` property must be unique within the device.
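As a sketch of that split — builder-defined static data merged with values queried from the actual hardware at enumeration time — consider the following Python; the query function and values are hypothetical stand-ins:

```python
# Hypothetical sketch: keep builder-defined name/group static, and fill
# id/manufacturer/model dynamically from the actual component at runtime.
def query_component(name):
    # Stand-in for querying real hardware (serial number, vendor, model).
    hardware = {
        "hostfw": {"id": "SN-0001", "manufacturer": "contoso", "model": "virtual-vacuum-v1"},
        "left-motor": {"id": "SN-0002", "manufacturer": "contoso", "model": "virtual-motor"},
    }
    return hardware[name]

# Static, builder-defined data: these values should never change once
# defined, and each name must be unique within the device.
static_inventory = [
    {"name": "hostfw", "group": "firmware"},
    {"name": "left-motor", "group": "motors"},
]

components = [{**entry, **query_component(entry["name"])} for entry in static_inventory]
print(components[1]["id"])  # SN-0002
```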
-#### Example component-inventory.json file
+### Example component-inventory.json file
> [!NOTE]
> The content in this file looks almost the same as the returned value from the `GetAllComponents` function. However, `ComponentInfo` in this file doesn't contain `version` and `status` properties. The component enumerator will populate these properties at runtime.
For example, for *hostfw*, the value of the property `properties.version` will b
## Next steps
-This example is written in C++. You can choose to use C if you prefer. To explore example source codes, see:
+The example in this article uses C. To explore the C++ example source code, see:
- [CMakeLists.txt](https://github.com/Azure/iot-hub-device-update/blob/main/src/extensions/component-enumerators/examples/contoso-component-enumerator/CMakeLists.txt) - [contoso-component-enumerator.cpp](https://github.com/Azure/iot-hub-device-update/blob/main/src/extensions/component-enumerators/examples/contoso-component-enumerator/contoso-component-enumerator.cpp)
iot-hub-device-update Device Update Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-diagnostics.md
# Device Update for IoT Hub diagnostics overview
-Device Update for IoT Hub has several features focused on making it easier to diagnose and troubleshoot device-side errors. With the release of the v0.8.0 agent, there are two diagnostic features available:
+Device Update for IoT Hub has several features that help you to diagnose and troubleshoot device-side errors. With the release of the v0.8.0 agent, there are two diagnostic features available:
-* **Deployment error codes** can now be viewed directly in the latest preview version of the Device Update user interface
+* **Deployment error codes** can be viewed directly in the latest preview version of the Device Update user interface
* **Remote log collection** enables the creation of log operations, which instruct targeted devices to upload on-device diagnostic logs to a linked Azure Blob storage account

## Deployment error codes in UI
-When a device reports a deployment failure to the Device Update service, the Device Update user interface now displays the device's reported resultCode and extendedResultCode in the user interface. You can view these codes using the following steps:
+When a device reports a deployment failure to the Device Update service, the Device Update user interface displays the device's reported `resultCode` and `extendedResultCode` in the user interface. Use the following steps to view these codes:
-1. Navigate to the **Groups and Deployments** tab.
+1. In the [Azure portal](https://portal.azure.com), navigate to your IoT hub.
-2. Click on the name of a group with an active deployment to get to the **Group details** page.
+1. Select **Updates** and then navigate to the **Groups and Deployments** tab.
-3. Click on any device name in the **Device list** to open the device details panel. Here you can see the Result code the device has reported.
+1. Select the name of a group with an active deployment to get to the **Group details** page.
-4. The DU reference agent follows standard HTTP status code convention for the Result code field (e.g. "200" indicates success). For more information on how to parse Result codes, see the [Device Update client error codes](device-update-error-codes.md) page.
+1. Select any device name in the **Device list** to open the device details panel. Here you can see the result code that the device has reported.
+
+1. The Device Update reference agent follows standard HTTP status code convention for the result code field (for example, "200" indicates success). For more information on how to parse result codes, see [Device Update client error codes](device-update-error-codes.md).
> [!NOTE]
- > If you have modified your DU Agent to report customized Result codes, the numerical codes will still be passed through to the Device Update user interface. You may then refer to any documentation you have created to parse these numerical codes.
+ > If you have modified your Device Update agent to report customized result codes, the numerical codes will still be passed through to the Device Update user interface. You may then refer to any documentation you have created to parse these numerical codes.
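As a sketch of that parsing, the following helper splits a reported result into readable pieces. The facility-in-the-high-byte layout of the extended result code is an assumption based on the Device Update client error code documentation; verify it against your agent version:

```python
def parse_result(result_code: int, extended_result_code: int) -> dict:
    """Split a Device Update deployment result into readable pieces.

    result_code follows HTTP status conventions (for example, 200 = success).
    extended_result_code is a 32-bit value; the high byte is assumed to
    identify the facility/component and the low bytes the specific error.
    """
    facility = (extended_result_code >> 24) & 0xFF
    error = extended_result_code & 0x00FFFFFF
    succeeded = 200 <= result_code < 300
    return {"succeeded": succeeded, "facility": facility, "error": error}
```

For example, `parse_result(500, 0x30000001)` reports a failure with facility `0x30` and error `1` under this assumed layout.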
## Remote log collection
-When more information from the device is necessary to diagnose and troubleshoot an error, you can use the log collection feature to instruct targeted devices to upload on-device diagnostic logs to a linked Azure Blob storage account. You can start using this feature by following these [instructions](device-update-log-collection.md).
+When more information from the device is necessary to diagnose and troubleshoot an error, you can use the log collection feature to instruct targeted devices to upload on-device diagnostic logs to a linked Azure Blob storage account. You can start using this feature by following the instructions in [Remotely collect diagnostic logs from devices](device-update-log-collection.md).
Device Update's remote log collection is a service-driven, operation-based feature. To take advantage of log collection, a device only needs to implement the Diagnostics interface and configuration file, and be able to upload files to Azure Blob storage via the SDK. At a high level, the log collection feature works as follows:
-1. The user creates a new log operation using the Device Update user interface or APIs, targeting up to 100 devices that have implemented the Diagnostics Interface.
+1. The user creates a new log operation using the Device Update user interface or APIs, targeting up to 100 devices that have implemented the Diagnostics interface.
-2. The DU service sends a log collection start message to the targeted devices using the Diagnostics Interface. This start message includes the log operation ID and a SAS token for uploading to the associated Azure Storage account.
+2. The Device Update service sends a log collection start message to the targeted devices using the Diagnostics interface. This start message includes the log operation ID and a SAS token for uploading to the associated Azure Storage account.
-3. Upon receiving the start message, the DU agent of the targeted device will attempt to collect and upload the files in the pre-defined filepath(s) specified in the on-device agent configuration file. The DU reference agent is configured to upload the DU Agent diagnostic log ("aduc.log"), and the DO Agent diagnostic log ("do-agent.log") by default.
+3. Upon receiving the start message, the Device Update agent of the targeted device attempts to collect and upload the files in the pre-defined filepath(s) specified in the on-device agent configuration file. The Device Update reference agent is configured to upload the Device Update agent diagnostic log (`aduc.log`) and the Delivery Optimization (DO) agent diagnostic log (`do-agent.log`) by default.
-4. The DU agent then reports the state of the operation ("Succeeded" / "Failed") back to the service, including the log operation ID, a ResultCode, and an ExtendedResultCode. If the DU Agent fails a log operation, it will automatically attempt to retry three times, reporting only the final state back to the service.
+4. The Device Update agent then reports the state of the operation (either **Succeeded** or **Failed**) back to the service, including the log operation ID, a ResultCode, and an ExtendedResultCode. If the Device Update agent fails a log operation, it will automatically attempt to retry three times, reporting only the final state back to the service.
-5. Once all targeted devices have reported their terminal state back to the DU service, the DU service marks the log operation as "Succeeded" or "Failed." "Succeeded" indicates that all targeted devices successfully completed the log operation. "Failed" indicates that at least one targeted device failed the log operation.
+5. Once all targeted devices have reported their terminal state back to the Device Update service, the Device Update service marks the log operation as either **Succeeded** or **Failed**. A successful log operation indicates that all targeted devices successfully completed the log operation. A failed log operation indicates that at least one targeted device failed the log operation.
- > [!NOTE]
- > Since the log operation is carried out in parallel by the targeted devices, it is possible that some targeted devices successfully uploaded logs, but the overall log operation is marked as "Failed." You can see which devices succeeded and which failed by viewing the log operation details through the user interface or APIs.
-## Next steps
+ > [!NOTE]
+ > Since the log operation is carried out in parallel by the targeted devices, it is possible that some targeted devices successfully uploaded logs, but the overall log operation is marked as failed. You can see which devices succeeded and which failed by viewing the log operation details through the user interface or APIs.
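For reference, the on-device agent configuration that drives step 3 might look like the following sketch. The field names follow the Device Update diagnostics configuration format, but treat the exact file paths and size limit as illustrative:

```json
{
  "logComponents": [
    {
      "componentName": "adu",
      "logPath": "/var/log/adu/"
    },
    {
      "componentName": "do",
      "logPath": "/var/log/deliveryoptimization-agent/"
    }
  ],
  "maxKilobytesToUploadPerLogPath": 50
}
```

Each entry maps a named component to the on-device path whose files are collected and uploaded when a log operation targets the device.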
-Learn how to use Device Update's remote log collection feature:
-
+## Next steps
+Learn how to use Device Update's remote log collection feature: [Remotely collect diagnostic logs from devices using Device Update for IoT Hub](device-update-log-collection.md)
iot-hub-device-update Device Update Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-limits.md
# Device Update for IoT Hub limits
-This document provides an overview of the various limits that are imposed on the Device Update for IoT Hub resource and its associated operations. It also indicates whether the limits are adjustable by contacting Microsoft Support or not.
+This document provides an overview of the various limits that are imposed on the Device Update for IoT Hub resource and its associated operations. It also indicates whether you can adjust the limits by contacting Microsoft Support.
## Preview limits
-During preview, the Device Update for IoT Hub service is provided at no cost to customers. More restrictive limits are imposed during the service's preview offering. These limits
-are expected to change once the service is Generally Available.
+During preview, the Device Update for IoT Hub service is provided at no cost to customers. More restrictive limits are imposed during the service's preview offering. These limits are expected to change once the service is generally available.
[!INCLUDE [device-update-for-iot-hub-limits](../../includes/device-update-for-iot-hub-limits.md)]

## Next steps
-
+- [Create a Device Update for IoT Hub account](create-device-update-account.md)
+- [Troubleshoot common Device Update for IoT Hub issues](troubleshoot-device-update.md)
iot-hub-device-update Device Update Multi Step Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-multi-step-updates.md
-# Multi-Step Ordered Execution
-Based on customer requests we have added the ability to run pre-install and post-install tasks when deploying an over-the-air update. This capability is called Multi-Step Ordered Execution (MSOE) and is part of the Public Preview Refresh Update Manifest v4 schema.
+# Multi-step ordered execution
-See the [Update Manifest](update-manifest.md) documentation before reviewing the following changes as part of the Public Preview Refresh release.
+Multi-step ordered execution gives you the ability to run pre-install and post-install tasks when deploying an over-the-air update. This capability is part of the Public Preview Refresh Update Manifest v4 schema.
-With MSOE we have introduced are two types of Steps:
+See the [Update Manifest](update-manifest.md) documentation before reviewing the following changes as part of the public preview refresh release.
-- Inline Step (Default)-- Reference Step
+With multi-step ordered execution there are two types of steps:
-Example Update Manifest with one Inline Step:
+- Inline step (default)
+- Reference step
+
+An example update manifest with one inline step:
```json
{
Example Update Manifest with one Inline Step:
}
```
-Example Update Manifest with two Inline Steps:
+An example update manifest with two inline steps:
```json
{
Example Update Manifest with two Inline Steps:
}
```
-Example Update Manifest with one Reference Step:
--- Parent Update-
-```json
-{
- "updateId": {...},
- "isDeployable": true,
- "compatibility": [
- {
- "deviceManufacturer": "du-device",
- "deviceModel": "e2e-test"
- }
- ],
- "instructions": {
- "steps": [
- {
- "type": "reference",
- "description": "Cameras Firmware Update",
- "updateId": {
- "provider": "contoso",
- "name": "virtual-camera",
- "version": "1.2"
- }
- }
- ]
- },
- "manifestVersion": "4.0",
- "importedDateTime": "2021-11-17T07:26:14.7484389Z",
- "createdDateTime": "2021-11-17T07:22:10.6014567Z"
-}
-```
--- Child Update-
-```json
-{
- "updateId": {
- "provider": "contoso",
- "name": "virtual-camera",
- "version": "1.2"
- },
- "isDeployable": false,
- "compatibility": [
- {
- "group": "cameras"
- }
- ],
- "instructions": {
- "steps": [
- {
- "description": "Cameras Update - pre-install step",
- "handler": "microsoft/script:1",
- "files": [
- "contoso-camera-installscript.sh"
- ],
- "handlerProperties": {
- "scriptFileName": "contoso-camera-installscript.sh",
- "arguments": "--pre-install-sim-success --component-name --component-name-val --component-group --component-group-val --component-prop path --component-prop-val path",
- "installedCriteria": "contoso-virtual-camera-1.2-step-0"
- }
- },
- {
- "description": "Cameras Update - firmware installation (failure - missing file)",
- "handler": "microsoft/script:1",
- "files": [
- "contoso-camera-installscript.sh",
- "camera-firmware-1.1.json"
- ],
- "handlerProperties": {
- "scriptFileName": "missing-contoso-camera-installscript.sh",
- "arguments": "--firmware-file camera-firmware-1.1.json --component-name --component-name-val --component-group --component-group-val --component-prop path --component-prop-val path",
- "installedCriteria": "contoso-virtual-camera-1.2-step-1"
- }
- },
- {
- "description": "Cameras Update - post-install step",
- "handler": "microsoft/script:1",
- "files": [
- "contoso-camera-installscript.sh"
- ],
- "handlerProperties": {
- "scriptFileName": "contoso-camera-installscript.sh",
- "arguments": "--post-install-sim-success --component-name --component-name-val --component-group --component-group-val --component-prop path --component-prop-val path",
- "installedCriteria": "contoso-virtual-camera-1.2-stop-2"
- }
- }
- ]
- },
- "referencedBy": [
- {
- "provider": "DU-Client-Eng",
- "name": "MSOE-Update-Demo",
- "version": "3.1"
- }
- ],
- "manifestVersion": "4.0",
- "importedDateTime": "2021-11-17T07:26:14.7376536Z",
- "createdDateTime": "2021-11-17T07:22:09.2232968Z",
- "etag": "\"ad7a553d-24a8-492b-9885-9af424d44d58\""
-}
-```
+An example update manifest with one reference step:
+
+- The parent update that references a child update
+
+ ```json
+ {
+ "updateId": {...},
+ "isDeployable": true,
+ "compatibility": [
+ {
+ "deviceManufacturer": "du-device",
+ "deviceModel": "e2e-test"
+ }
+ ],
+ "instructions": {
+ "steps": [
+ {
+ "type": "reference",
+ "description": "Cameras Firmware Update",
+ "updateId": {
+ "provider": "contoso",
+ "name": "virtual-camera",
+ "version": "1.2"
+ }
+ }
+ ]
+ },
+ "manifestVersion": "4.0",
+ "importedDateTime": "2021-11-17T07:26:14.7484389Z",
+ "createdDateTime": "2021-11-17T07:22:10.6014567Z"
+ }
+ ```
+
+- The child update with inline steps
+
+ ```json
+ {
+ "updateId": {
+ "provider": "contoso",
+ "name": "virtual-camera",
+ "version": "1.2"
+ },
+ "isDeployable": false,
+ "compatibility": [
+ {
+ "group": "cameras"
+ }
+ ],
+ "instructions": {
+ "steps": [
+ {
+ "description": "Cameras Update - pre-install step",
+ "handler": "microsoft/script:1",
+ "files": [
+ "contoso-camera-installscript.sh"
+ ],
+ "handlerProperties": {
+ "scriptFileName": "contoso-camera-installscript.sh",
+ "arguments": "--pre-install-sim-success --component-name --component-name-val --component-group --component-group-val --component-prop path --component-prop-val path",
+ "installedCriteria": "contoso-virtual-camera-1.2-step-0"
+ }
+ },
+ {
+ "description": "Cameras Update - firmware installation (failure - missing file)",
+ "handler": "microsoft/script:1",
+ "files": [
+ "contoso-camera-installscript.sh",
+ "camera-firmware-1.1.json"
+ ],
+ "handlerProperties": {
+ "scriptFileName": "missing-contoso-camera-installscript.sh",
+ "arguments": "--firmware-file camera-firmware-1.1.json --component-name --component-name-val --component-group --component-group-val --component-prop path --component-prop-val path",
+ "installedCriteria": "contoso-virtual-camera-1.2-step-1"
+ }
+ },
+ {
+ "description": "Cameras Update - post-install step",
+ "handler": "microsoft/script:1",
+ "files": [
+ "contoso-camera-installscript.sh"
+ ],
+ "handlerProperties": {
+ "scriptFileName": "contoso-camera-installscript.sh",
+ "arguments": "--post-install-sim-success --component-name --component-name-val --component-group --component-group-val --component-prop path --component-prop-val path",
+ "installedCriteria": "contoso-virtual-camera-1.2-stop-2"
+ }
+ }
+ ]
+ },
+ "referencedBy": [
+ {
+ "provider": "DU-Client-Eng",
+ "name": "MSOE-Update-Demo",
+ "version": "3.1"
+ }
+ ],
+ "manifestVersion": "4.0",
+ "importedDateTime": "2021-11-17T07:26:14.7376536Z",
+ "createdDateTime": "2021-11-17T07:22:09.2232968Z",
+ "etag": "\"ad7a553d-24a8-492b-9885-9af424d44d58\""
+ }
+ ```
> [!NOTE]
-> In the [update manifest](/azure/iot-hub-device-update/update-manifest), each step should have different ΓÇ£installedCriteriaΓÇ¥ string if that string is being used to determine whether the step should be performed or not.
+> In the [update manifest](update-manifest.md), each step should have a different **installedCriteria** string if that string is being used to determine whether the step should be performed or not.
-## Parent Update vs. Child Update
+## Parent updates and child updates
-For Public Preview Refresh, we will refer to the top-level Update Manifest as `Parent Update` and refer to an Update Manifest specified in a Reference Step as `Child Update`.
+When update manifests reference each other, the top-level manifest is called the **parent update** and a manifest specified in a reference step is called a **child update**.
-Currently, a `Child Update` must not contain any reference steps. This restriction is validated at import time and if not followed the import will fail.
+Currently, a child update can't contain any reference steps. This restriction is validated at import time, and if it isn't followed, the import fails.
-### Inline Step In Parent Update
+### Inline steps in a parent update
-Inline step(s) specified in `Parent Update` will be applied to the Host Device. Here the ADUC_WorkflowData object that is passed to a Step Handler (also known as Update Content Handler) and it will not contain the `Selected Components` data. The handler for this type of step should *not* be a `Component-Aware` handler.
+Inline step(s) specified in a parent update are applied to the host device. In this case, the ADUC_WorkflowData object that is passed to a step handler (also known as an update content handler) won't contain the `Selected Components` data. The handler for this type of step should *not* be a component-aware handler.
-> [!NOTE]
-> See [Steps Content Handler](https://github.com/Azure/iot-hub-device-update/tree/main/src/content_handlers/steps_handler/README.md) and [Implementing a custom component-Aware Content Handler](https://github.com/Azure/iot-hub-device-update/tree/main/docs/agent-reference/how-to-implement-custom-update-handler.md) for more details.
+The steps content handler applies **IsInstalled** validation logic for each step. The Device Update agent's step handler checks whether a particular update is already installed by calling IsInstalled(), where a result code of "900" means 'true'. If an update is already installed, the Device Update agent skips the remaining steps to avoid reinstalling an update that is already on the device.
-> [!NOTE]
-> Steps Content Handler:
-> IsInstalled validation logic for each step: The Device Update agentΓÇÖs [step handler](https://github.com/Azure/iot-hub-device-update/blob/main/src/content_handlers/steps_handler/README.md) checks to see if particular update is already installed i.e., checks for IsInstalled() resulted in a result code ΓÇ£900ΓÇ¥ which means ΓÇÿtrueΓÇÖ. If an update is already installed, to avoid re-installing an update that is already on the device, the DU agent will skip future steps because we use it to determine whether to perform the step or not.
-> Reporting an update result: The result of a step handler execution must be written to ADUC_Result struct in a desired result file as specified in --result-file option [learn more](https://github.com/Azure/iot-hub-device-update/blob/main/src/content_handlers/steps_handler/README.md#steps-content-handler). Then based on results of the execution, for success return 0, for any fatal errors return -1 or 0xFF.
+To report an update result, the result of a step handler execution must be written to the ADUC_Result struct in the result file specified by the --result-file option. Then, based on the results of the execution, return 0 for success, or -1 or 0xFF for any fatal errors.
-### Reference Step In Parent Update
+For more information, see [Steps content handler](https://github.com/Azure/iot-hub-device-update/tree/main/src/content_handlers/steps_handler/README.md) and [Implementing a custom component-aware content handler](https://github.com/Azure/iot-hub-device-update/tree/main/docs/agent-reference/how-to-implement-custom-update-handler.md).
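The result file mentioned above is JSON. A minimal sketch might look like the following; the field names reflect the ADUC_Result struct described in the steps handler README, but treat the exact shape as an assumption to verify against your agent version:

```json
{
  "resultCode": 900,
  "extendedResultCode": 0,
  "resultDetails": "Step completed successfully"
}
```

Here `resultCode` 900 corresponds to the 'true' value of the IsInstalled check described above.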
-Reference step(s) specified in `Parent Update` will be applied to the component on or components connected to the Host Device. A **Reference Step** is a step that contains update identifier of another Update, called as a `Child Update`. When processing a Reference Step, the Steps Handler will download a Detached Update Manifest file specified in the Reference Step data, then validate the file integrity.
+### Reference steps in a parent update
-Next, the Steps Handler will parse the Child Update Manifest and create ADUC_Workflow object (also known as Child Workflow Data) by combining the data from Child Update Manifest and File URLs information from the Parent Update Manifest. This Child Workflow Data also has a 'level' property set to '1'.
+Reference step(s) specified in a parent update are applied to components on or connected to the host device. A **reference step** is a step that contains update identifier of another update, called a child update.
-> [!NOTE]
-> For Update Manfiest version v4, the Child Udpate cannot contain any Reference Steps.
+When processing a reference step, the steps handler downloads the detached update manifest file specified in the reference step data, then validates the file integrity. Next, the steps handler parses the child update manifest and creates an **ADUC_Workflow** object (also known as child workflow data) by combining the data from the child update manifest and the file URL information from the parent update manifest. This child workflow data also has a 'level' property set to '1'.
-## Detached Update Manifest
+> [!NOTE]
+> Currently, child updates can't contain any reference steps.
-To avoid deployment failure because of IoT Hub Twin Data Size Limits, any large Update Manifest will be delivered in the form of a JSON data file, also called as a 'Detached Update Manifest'.
+## Detached update manifests
-If an update with large content is imported into Device Update for IoT Hub, the generated Update Manifest will contain another payload file called `Detached Update Manifest`, which contains the full data of the Update Manifest.
+To avoid deployment failure because of IoT Hub twin data size limits, any large update manifest will be delivered in the form of a JSON data file, also called a **detached update manifest**.
-The `UpdateManifest` property in the Device or Module Twin will contain the Detached Update Manifest file information.
+If an update with large content is imported into Device Update for IoT Hub, the generated update manifest will contain another payload file called `Detached Update Manifest`, which contains the full data of the update manifest.
-When processing PnP Property Changed Event, the Device Update Agent will automatically download the Detached Update Manifest file, and create ADUC_WorkflowData object that contains the full Update Manifest data.
+The `UpdateManifest` property in the device or module twin will contain the detached update manifest file information.
-
+When processing a PnP property-changed event, the Device Update agent automatically downloads the detached update manifest file and creates an ADUC_WorkflowData object that contains the full update manifest data.
iot-hub-device-update Device Update Proxy Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-proxy-updates.md
-# Proxy Updates and multi-component updating
+# Proxy updates and multi-component updating
-With Proxy updates, you can (1) target over-the-air updates to multiple components on the IoT device or (2) target over-the-air updates to multiple sensors connected to the IoT device. Use cases where proxy updates is applicable include:
+With proxy updates, you can target over-the-air updates to multiple components on the IoT device or to multiple sensors connected to the IoT device. Use cases where proxy updates are applicable include:
* Targeting specific update files to different partitions on the device.
* Targeting specific update files to different apps/components on the device.
-* Targeting specific update files to sensors connected to an IoT devices. These sensors could be connected to the IoT device over a network protocol (for example, USB, CANbus etc.).
+* Targeting specific update files to sensors connected to an IoT device. These sensors could be connected to the IoT device over a network protocol (for example, USB, CANbus etc.).
-## Prerequisite
-In order to update a component or components that connected to a target IoT Device, the device builder must register a custom **Component Enumerator Extension** that is built specifically for their IoT devices. The Component Enumerator Extension is required so that the Device Update Agent can map a **'child update'** with a specific component, or group of components, which the update is intended for. See [Contoso Component Enumerator](components-enumerator.md) for an example on how to implement and register a custom Component Enumerator extension.
+## Prerequisites
+
+In order to update a component or components that connect to a target IoT device, the device builder must register a custom **component enumerator extension** that is built specifically for their IoT devices. The component enumerator extension is required so that the Device Update agent can map a **child update** with a specific component, or group of components, which the update is intended for. See [Contoso component enumerator](components-enumerator.md) for an example on how to implement and register a custom component enumerator extension.
> [!NOTE]
-> Device Update *service* does not know anything about **component(s)** on the target device. Only the Device Update agent does the above mapping.
+> The Device Update service does not know anything about component(s) on the target device. Only the Device Update agent is aware of the mapping from the component enumerator.
+
+## Multi-step ordered execution
+
+The multi-step ordered execution feature allows for granular update controls, including an install order and pre-install, install, and post-install steps. For example, this feature could enable a required pre-install check that validates the device state before starting an update. Learn more about [multi-step ordered execution](device-update-multi-step-updates.md).
-## Example Proxy update
-In the following example, we will demonstrate how to do a Proxy update and use the multi-step ordered execution feature introduced in the Public Preview Refresh Release. Multi-step ordered execution feature allows for granular update controls including an install order, pre-install, install, and post-install steps. Use cases include, for example, a required preinstall check that is needed to validate the device state before starting an update, etc. Learn more about [multi-step ordered execution](device-update-multi-step-updates.md).
+## Next steps
See this tutorial on how to do a [Proxy update using the Device Update agent](device-update-howto-proxy-updates.md) with sample updates for components connected to a Contoso Virtual Vacuum device.
iot-hub Iot Hub Mqtt Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-mqtt-support.md
IoT Hub enables devices to communicate with the IoT Hub device endpoints using:
* [MQTT v3.1.1](https://mqtt.org/) on port 8883
* MQTT v3.1.1 over WebSocket on port 443.
-IoT Hub is not a full-featured MQTT broker and does not support all the behaviors specified in the MQTT v3.1.1 standard. This article describes how devices can use supported MQTT behaviors to communicate with IoT Hub.
+IoT Hub isn't a full-featured MQTT broker and doesn't support all the behaviors specified in the MQTT v3.1.1 standard. This article describes how devices can use supported MQTT behaviors to communicate with IoT Hub.
[!INCLUDE [iot-hub-basic](../../includes/iot-hub-basic-partial.md)]
The MQTT port (8883) is blocked in many corporate and educational networking env
## Using the device SDKs
-[Device SDKs](https://github.com/Azure/azure-iot-sdks) that support the MQTT protocol are available for Java, Node.js, C, C#, and Python. The device SDKs use the choosen [authentication mechanism](iot-concepts-and-iot-hub.md#device-identity-and-authentication) to establish a connection to an IoT hub. To use the MQTT protocol, the client protocol parameter must be set to **MQTT**. You can also specify MQTT over Web Sockets in the client protocol parameter. By default, the device SDKs connect to an IoT Hub with the **CleanSession** flag set to **0** and use **QoS 1** for message exchange with the IoT hub. While it's possible to configure **QoS 0** for faster message exchange, you should note that the delivery isn't guaranteed nor acknowledged. For this reason, **QoS 0** is often referred as "fire and forget".
+[Device SDKs](https://github.com/Azure/azure-iot-sdks) that support the MQTT protocol are available for Java, Node.js, C, C#, and Python. The device SDKs use the chosen [authentication mechanism](iot-concepts-and-iot-hub.md#device-identity-and-authentication) to establish a connection to an IoT hub. To use the MQTT protocol, the client protocol parameter must be set to **MQTT**. You can also specify MQTT over Web Sockets in the client protocol parameter. By default, the device SDKs connect to an IoT hub with the **CleanSession** flag set to **0** and use **QoS 1** for message exchange with the IoT hub. While it's possible to configure **QoS 0** for faster message exchange, you should note that the delivery isn't guaranteed or acknowledged. For this reason, **QoS 0** is often referred to as "fire and forget".
When a device is connected to an IoT hub, the device SDKs provide methods that enable the device to exchange messages with an IoT hub.
In order to ensure a client/IoT Hub connection stays alive, both the service and
|C# | 300 seconds* | [Yes](/dotnet/api/microsoft.azure.devices.client.transport.mqtt.mqtttransportsettings.keepaliveinseconds) |
|Python | 60 seconds | [Yes](https://github.com/Azure/azure-iot-sdk-python/blob/main/azure-iot-device/azure/iot/device/iothub/abstract_clients.py#L339) |
-> *The C# SDK defines the default value of the MQTT KeepAliveInSeconds property as 300 seconds but in reality the SDK sends a ping request 4 times per keep-alive duration set. This means the SDK sends a keep-alive ping every 75 seconds.
+> *The C# SDK defines the default value of the MQTT KeepAliveInSeconds property as 300 seconds but in reality the SDK sends a ping request four times per keep-alive duration set. This means the SDK sends a keep-alive ping every 75 seconds.
Following the [MQTT spec](http://docs.oasis-open.org/mqtt/mqtt/v3.1.1/os/mqtt-v3.1.1-os.html#_Toc398718081), IoT Hub's keep-alive ping interval is 1.5 times the client keep-alive value. However, IoT Hub limits the maximum server-side timeout to 29.45 minutes (1767 seconds) because all Azure services are bound to the Azure load balancer TCP idle timeout, which is 29.45 minutes.
The maximum client keep-alive value you can set is `1767 / 1.5 = 1177` seconds.
### Migrating a device app from AMQP to MQTT
-If you are using the [device SDKs](https://github.com/Azure/azure-iot-sdks), switching from using AMQP to MQTT requires changing the protocol parameter in the client initialization, as stated previously.
+If you're using the [device SDKs](https://github.com/Azure/azure-iot-sdks), switching from using AMQP to MQTT requires changing the protocol parameter in the client initialization, as stated previously.
When doing so, make sure to check the following items:
* AMQP returns errors for many conditions, while MQTT terminates the connection. As a result, your exception handling logic might require some changes.
-* MQTT does not support the *reject* operations when receiving [cloud-to-device messages](iot-hub-devguide-messaging.md). If your back-end app needs to receive a response from the device app, consider using [direct methods](iot-hub-devguide-direct-methods.md).
+* MQTT doesn't support the *reject* operations when receiving [cloud-to-device messages](iot-hub-devguide-messaging.md). If your back-end app needs to receive a response from the device app, consider using [direct methods](iot-hub-devguide-direct-methods.md).
-* AMQP is not supported in the Python SDK.
+* AMQP isn't supported in the Python SDK.
## Example in C using MQTT without an Azure IoT SDK
This repository contains:
This folder contains two sample commands used with the mosquitto_pub utility tool provided by Mosquitto.org.
-* Mosquitto_sendmessage: to send a simple text message to an Azure IoT hub acting as a device.
+* Mosquitto_sendmessage: to send a text message to an IoT hub acting as a device.
-* Mosquitto_subscribe: to see events occurring in an Azure IoT hub.
+* Mosquitto_subscribe: to see events occurring in an IoT hub.
## Using the MQTT protocol directly (as a device)
-If a device cannot use the device SDKs, it can still connect to the public device endpoints using the MQTT protocol on port 8883. In the **CONNECT** packet, the device should use the following values:
+If a device can't use the device SDKs, it can still connect to the public device endpoints using the MQTT protocol on port 8883. In the **CONNECT** packet, the device should use the following values:
* For the **ClientId** field, use the **deviceId**.
If a device cannot use the device SDKs, it can still connect to the public devic
`contoso.azure-devices.net/MyDevice01/?api-version=2021-04-12`
- It's strongly recommended to include api-version in the field. Otherwise it could cause unexpected behaviors.
+ It's recommended to include api-version in the field. Otherwise, it could cause unexpected behaviors.
* For the **Password** field, use a SAS token. The format of the SAS token is the same as for both the HTTPS and AMQP protocols:
If a device cannot use the device SDKs, it can still connect to the public devic
`SharedAccessSignature sr={iotHub-hostname}%2Fdevices%2FMyDevice01%2Fapi-version%3D2016-11-14&sig=vSgHBMUG.....Ntg%3d&se=1456481802`
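The SAS token format shown above can be produced with a short stand-alone script. The following is a minimal sketch using only the Python standard library; in practice the device SDKs generate tokens for you, and the host name, device ID, and shared access key here are placeholders:

```python
import base64
import hashlib
import hmac
import time
import urllib.parse

def generate_sas_token(resource_uri, shared_access_key, ttl_seconds=3600):
    """Build a SharedAccessSignature string for the MQTT Password field."""
    expiry = int(time.time()) + ttl_seconds
    encoded_uri = urllib.parse.quote_plus(resource_uri)
    # The string to sign is the URL-encoded resource URI and the expiry time
    string_to_sign = f"{encoded_uri}\n{expiry}".encode("utf-8")
    key = base64.b64decode(shared_access_key)
    signature = base64.b64encode(
        hmac.new(key, string_to_sign, hashlib.sha256).digest()
    ).decode("utf-8")
    return (
        f"SharedAccessSignature sr={encoded_uri}"
        f"&sig={urllib.parse.quote_plus(signature)}&se={expiry}"
    )

# Placeholder values for illustration only
token = generate_sas_token(
    "contoso.azure-devices.net/devices/MyDevice01",
    base64.b64encode(b"not-a-real-key").decode("utf-8"),
)
print(token)
```

The resulting string is passed unmodified as the **Password** field of the **CONNECT** packet.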
-The device app can specify a **Will** message in the **CONNECT** packet. The device app should use `devices/{device-id}/messages/events/` or `devices/{device-id}/messages/events/{property-bag}` as the **Will** topic name to define **Will** messages to be forwarded as a telemetry message. In this case, if the network connection is closed, but a **DISCONNECT** packet was not previously received from the device, then IoT Hub sends the **Will** message supplied in the **CONNECT** packet to the telemetry channel. The telemetry channel can be either the default **Events** endpoint or a custom endpoint defined by IoT Hub routing. The message has the **iothub-MessageType** property with a value of **Will** assigned to it.
+The device app can specify a **Will** message in the **CONNECT** packet. The device app should use `devices/{device-id}/messages/events/` or `devices/{device-id}/messages/events/{property-bag}` as the **Will** topic name to define **Will** messages to be forwarded as a telemetry message. In this case, if the network connection is closed, but a **DISCONNECT** packet wasn't previously received from the device, then IoT Hub sends the **Will** message supplied in the **CONNECT** packet to the telemetry channel. The telemetry channel can be either the default **Events** endpoint or a custom endpoint defined by IoT Hub routing. The message has the **iothub-MessageType** property with a value of **Will** assigned to it.
## Using the MQTT protocol directly (as a module)
Connecting to IoT Hub over MQTT using a module identity is similar to the device
* The twin status topic is identical for modules and devices.
-For more information about using MQTT with modules, see [Publish and subscribe with IoT Edge](../iot-edge/how-to-publish-subscribe.md) and learn more about the [Edge Hub MQTT endpoint](https://github.com/Azure/iotedge/blob/main/doc/edgehub-api.md#edge-hub-mqtt-endpoint).
+For more information about using MQTT with modules, see [Publish and subscribe with IoT Edge](../iot-edge/how-to-publish-subscribe.md) and learn more about the [IoT Edge hub MQTT endpoint](https://github.com/Azure/iotedge/blob/main/doc/edgehub-api.md#edge-hub-mqtt-endpoint).
## TLS/SSL configuration
client.publish("devices/" + device_id + "/messages/events/", '{"id":123}', qos=1
client.loop_forever()
```
-To authenticate using a device certificate, update the code snippet above with the following changes (see [How to get an X.509 CA certificate](./iot-hub-x509ca-overview.md#how-to-get-an-x509-ca-certificate) on how to prepare for certificate-based authentication):
+To authenticate using a device certificate, update the code snippet above with the following changes (see [How to get an X.509 CA certificate](./iot-hub-x509ca-overview.md#get-an-x509-ca-certificate) on how to prepare for certificate-based authentication):
```python
# Create the client as before
client.connect(iot_hub_name+".azure-devices.net", port=8883)
## Sending device-to-cloud messages
-After making a successful connection, a device can send messages to IoT Hub using `devices/{device-id}/messages/events/` or `devices/{device-id}/messages/events/{property-bag}` as a **Topic Name**. The `{property-bag}` element enables the device to send messages with additional properties in a url-encoded format. For example:
+After successfully connecting, a device can send messages to IoT Hub using `devices/{device-id}/messages/events/` or `devices/{device-id}/messages/events/{property-bag}` as a **Topic Name**. The `{property-bag}` element enables the device to send messages with additional properties in a URL-encoded format. For example:
```text
RFC 2396-encoded(<PropertyName1>)=RFC 2396-encoded(<PropertyValue1>)&RFC 2396-encoded(<PropertyName2>)=RFC 2396-encoded(<PropertyValue2>)…
RFC 2396-encoded(<PropertyName1>)=RFC 2396-encoded(<PropertyValue1>)&RFC 2396-en
The following is a list of IoT Hub implementation-specific behaviors:
-* IoT Hub does not support QoS 2 messages. If a device app publishes a message with **QoS 2**, IoT Hub closes the network connection.
+* IoT Hub doesn't support QoS 2 messages. If a device app publishes a message with **QoS 2**, IoT Hub closes the network connection.
-* IoT Hub does not persist Retain messages. If a device sends a message with the **RETAIN** flag set to 1, IoT Hub adds the **mqtt-retain** application property to the message. In this case, instead of persisting the retain message, IoT Hub passes it to the backend app.
+* IoT Hub doesn't persist Retain messages. If a device sends a message with the **RETAIN** flag set to 1, IoT Hub adds the **mqtt-retain** application property to the message. In this case, instead of persisting the retain message, IoT Hub passes it to the backend app.
* IoT Hub only supports one active MQTT connection per device. Any new MQTT connection on behalf of the same device ID causes IoT Hub to drop the existing connection, and **400027 ConnectionForcefullyClosedOnNewConnection** will be logged into IoT Hub Logs.
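To illustrate the `{property-bag}` element described earlier, the topic name with URL-encoded application properties can be built with the Python standard library. The device ID and property names here are placeholders:

```python
import urllib.parse

def telemetry_topic(device_id, properties=None):
    """Build a device-to-cloud topic name with an optional URL-encoded property bag."""
    topic = f"devices/{device_id}/messages/events/"
    if properties:
        # Encode each property name and value, joined with '&'
        topic += urllib.parse.urlencode(properties, quote_via=urllib.parse.quote)
    return topic

print(telemetry_topic("MyDevice01", {"temp alert": "false"}))
# -> devices/MyDevice01/messages/events/temp%20alert=false
```

The device would then publish its message payload to this topic name.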
For more information, see [Messaging developer's guide](iot-hub-devguide-messagi
## Receiving cloud-to-device messages
-To receive messages from IoT Hub, a device should subscribe using `devices/{device-id}/messages/devicebound/#` as a **Topic Filter**. The multi-level wildcard `#` in the Topic Filter is used only to allow the device to receive additional properties in the topic name. IoT Hub does not allow the usage of the `#` or `?` wildcards for filtering of subtopics. Since IoT Hub is not a general-purpose pub-sub messaging broker, it only supports the documented topic names and topic filters.
+To receive messages from IoT Hub, a device should subscribe using `devices/{device-id}/messages/devicebound/#` as a **Topic Filter**. The multi-level wildcard `#` in the Topic Filter is used only to allow the device to receive additional properties in the topic name. IoT Hub doesn't allow the usage of the `#` or `?` wildcards for filtering of subtopics. Since IoT Hub isn't a general-purpose pub-sub messaging broker, it only supports the documented topic names and topic filters.
The device does not receive any messages from IoT Hub until it has successfully subscribed to its device-specific endpoint, represented by the `devices/{device-id}/messages/devicebound/#` topic filter. After a subscription has been established, the device receives cloud-to-device messages that were sent to it after the time of the subscription. If the device connects with **CleanSession** flag set to **0**, the subscription is persisted across different sessions. In this case, the next time the device connects with **CleanSession 0** it receives any outstanding messages sent to it while disconnected. If the device uses **CleanSession** flag set to **1** though, it does not receive any messages from IoT Hub until it subscribes to its device-endpoint.
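To recover any properties appended to the topic name of a received cloud-to-device message, a device can split off the property bag and URL-decode it. A minimal sketch, with an illustrative topic name:

```python
import urllib.parse

def parse_c2d_properties(topic):
    """Extract the property bag, if any, from a cloud-to-device topic name."""
    _, _, bag = topic.partition("/messages/devicebound/")
    if not bag:
        return {}  # no property bag appended to the topic
    parsed = urllib.parse.parse_qs(bag)
    return {name: values[0] for name, values in parsed.items()}

props = parse_c2d_properties(
    "devices/MyDevice01/messages/devicebound/key1=value1&key2=value2"
)
print(props)
```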
For more information, see the [Device twins developer's guide](iot-hub-devguide-
## Update device twin's reported properties
-To update reported properties, the device issues a request to IoT Hub via a publication over a designated MQTT topic. After processing the request, IoT Hub responds the success or failure status of the update operation via a publication to another topic. This topic can be subscribed by the device in order to notify it about the result of its twin update request. To implement this type of request/response interaction in MQTT, we leverage the notion of request ID (`$rid`) provided initially by the device in its update request. This request ID is also included in the response from IoT Hub to allow the device to correlate the response to its particular earlier request.
+To update reported properties, the device issues a request to IoT Hub via a publication over a designated MQTT topic. After processing the request, IoT Hub responds with the success or failure status of the update operation via a publication to another topic. The device can subscribe to this topic to be notified about the result of its twin update request. To implement this type of request/response interaction in MQTT, we use the notion of a request ID (`$rid`) provided initially by the device in its update request. This request ID is also included in the response from IoT Hub to allow the device to correlate the response to its particular earlier request.
The following sequence describes how a device updates the reported properties in the device twin in IoT Hub:
The following sequence describes how a device updates the reported properties in
3. The service then sends a response message that contains the new ETag value for the reported properties collection on topic `$iothub/twin/res/{status}/?$rid={request-id}`. This response message uses the same **request ID** as the request.
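The request side of the sequence above can be sketched as a small helper that builds the topic and payload for the publication. The patch content and request ID here are illustrative:

```python
import json
import uuid

def reported_properties_request(patch):
    """Build the topic and payload for a reported-properties update request."""
    request_id = uuid.uuid4().hex  # echoed back by IoT Hub for correlation
    topic = f"$iothub/twin/PATCH/properties/reported/?$rid={request_id}"
    return topic, json.dumps(patch), request_id

topic, payload, rid = reported_properties_request({"telemetryInterval": 30})
print(topic)
```

The device would publish `payload` to `topic`, then match the `$rid` in the `$iothub/twin/res/{status}` response against `rid`.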
-The request message body contains a JSON document, that contains new values for reported properties. Each member in the JSON document updates or add the corresponding member in the device twin's document. A member set to `null` deletes the member from the containing object. For example:
+The request message body contains a JSON document that contains new values for reported properties. Each member in the JSON document updates or adds the corresponding member in the device twin's document. A member set to `null` deletes the member from the containing object. For example:
```json
{
When a device is connected, IoT Hub sends notifications to the topic `$iothub/tw
}
```
-As for property updates, `null` values mean that the JSON object member is being deleted. Also, note that `$version` indicates the new version of the desired properties section of the twin.
+As for property updates, `null` values mean that the JSON object member is being deleted. Also, `$version` indicates the new version of the desired properties section of the twin.
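The merge semantics described above (each member updates or adds the corresponding member, and `null` deletes it) can be sketched as a plain dictionary merge. This helper is illustrative, not part of any SDK:

```python
def apply_twin_patch(document, patch):
    """Merge a twin patch into a properties document in place."""
    for name, value in patch.items():
        if name.startswith("$"):
            continue  # metadata such as $version isn't a property member
        if value is None:
            document.pop(name, None)  # null deletes the member
        elif isinstance(value, dict) and isinstance(document.get(name), dict):
            apply_twin_patch(document[name], value)  # recurse into nested objects
        else:
            document[name] = value
    return document

state = {"telemetryInterval": 15, "route": {"hops": 3}}
apply_twin_patch(state, {"telemetryInterval": 30, "route": None, "$version": 4})
print(state)
# -> {'telemetryInterval': 30}
```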
> [!IMPORTANT] > IoT Hub generates change notifications only when devices are connected. Make sure to implement the [device reconnection flow](iot-hub-devguide-device-twins.md#device-reconnection-flow) to keep the desired properties synchronized between IoT Hub and the device app.
iot-hub Iot Hub X509ca Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-x509ca-concept.md
Previously updated : 09/18/2017 Last updated : 08/26/2022
-# Conceptual understanding of X.509 CA certificates in the IoT industry
-This article describes the value of using X.509 certificate authority (CA) certificates in IoT device manufacturing and authentication to IoT Hub. It includes information about supply chain setup and highlight advantages.
+# Understand how X.509 CA certificates are used in IoT
-This article describes:
+This article describes the value of using X.509 certificate authority (CA) certificates in IoT device manufacturing and authentication.
-* What X.509 CA certificates are and how to get them
-
-* How to register your X.509 CA certificate to IoT Hub
-
-* How to set up a manufacturing supply chain for X.509 CA-based authentication
-
-* How devices signed with X.509 CA connect to IoT Hub
+An X.509 CA certificate is a digital certificate that can sign other certificates. It's considered X.509 because it conforms to the certificate formatting standard prescribed by IETF's RFC 5280, and a certificate authority (CA) certificate because its holder can sign other certificates.
[!INCLUDE [iot-hub-include-x509-ca-signed-support-note](../../includes/iot-hub-include-x509-ca-signed-support-note.md)]
-## Overview
+## Benefits of X.509 CA certificate authentication
-X.509 Certificate Authority (CA) authentication is an approach for authenticating devices to IoT Hub using a method that dramatically simplifies device identity creation and life-cycle management in the supply chain.
+X.509 certificate authority (CA) authentication is an approach for authenticating devices to IoT Hub using a method that dramatically simplifies device identity creation and life-cycle management in the supply chain.
-A distinguishing attribute of the X.509 CA authentication is a one-to-many relationship a CA certificate has with its downstream devices. This relationship enables registration of any number of devices into IoT Hub by registering an X.509 CA certificate once, otherwise device unique certificates must be pre-registered for every device before a device can connect. This one-to-many relationship also simplifies device certificates life-cycle management operations.
+A distinguishing attribute of X.509 CA authentication is the one-to-many relationship that a CA certificate has with its downstream devices. This relationship enables registration of any number of devices into IoT Hub by registering an X.509 CA certificate once. Otherwise, unique certificates would have to be pre-registered for every device before a device can connect. This one-to-many relationship also simplifies device certificate lifecycle management operations.
-Another important attribute of the X.509 CA authentication is simplification of supply chain logistics. Secure authentication of devices requires that each device holds a unique secret like a key as basis for trust. In certificates-based authentication, this secret is a private key. A typical device manufacturing flow involves multiple steps and custodians. Securely managing device private keys across multiple custodians and maintaining trust is difficult and expensive. Using certificate authorities solves this problem by signing each custodian into a cryptographic chain of trust rather than entrusting them with device private keys. Each custodian in turn signs devices at their respective process step of the manufacturing flow. The overall result is an optimal supply chain with built-in accountability through use of the cryptographic chain of trust. It is worth noting that this process yields the most security when devices protect their unique private keys. To this end, we urge the use of Hardware Secure Modules (HSM) capable of internally generating private keys that will never see the light of day.
+Another important attribute of X.509 CA authentication is simplification of supply chain logistics. Secure authentication of devices requires that each device holds a unique secret like a key as the basis for trust. In certificate-based authentication, this secret is a private key. A typical device manufacturing flow involves multiple steps and custodians. Securely managing device private keys across multiple custodians and maintaining trust is difficult and expensive. Using certificate authorities solves this problem by signing each custodian into a cryptographic chain of trust rather than entrusting them with device private keys. Each custodian signs devices at their respective step of the manufacturing flow. The overall result is an optimal supply chain with built-in accountability through use of the cryptographic chain of trust.
-This article offers an end-to-end view of using the X.509 CA authentication, from supply chain setup to device connection, while making use of a real world example to solidify understanding.
+This process yields the most security when devices protect their unique private keys. To this end, we recommend using Hardware Secure Modules (HSM) capable of internally generating private keys that will never see the light of day.
-You can also use enrollment groups with the Azure IoT Hub Device Provisioning Service (DPS) to handle provisioning of devices to hubs. For more information on using DPS to provision X.509 certificate devices, see [Tutorial: Provision multiple X.509 devices using enrollment groups](../iot-dps/tutorial-custom-hsm-enrollment-group-x509.md).
+The Azure IoT Hub Device Provisioning Service (DPS) makes it easy to provision groups of devices to hubs. For more information, see [Tutorial: Provision multiple X.509 devices using enrollment groups](../iot-dps/tutorial-custom-hsm-enrollment-group-x509.md).
-## Introduction
+## Example scenario
-The X.509 CA certificate is a digital certificate whose holder can sign other certificates. This digital certificate is X.509 because it conforms to a certificate formatting standard prescribed by IETF's RFC 5280 standard, and is a certificate authority (CA) because its holder can sign other certificates.
+Company-X makes Smart-X-Widgets that are designed for professional installation. Company-X outsources both manufacturing and installation. Factory-Y manufactures the Smart-X-Widgets and Technician-Z installs them. Company-X wants the Smart-X-Widget shipped directly from Factory-Y to Technician-Z for installation and then for it to connect directly to Company-X's instance of IoT Hub. To make this happen, Company-X needs to complete a few one-time setup operations to prime Smart-X-Widget for automatic connection. This article discusses the steps involved in that end-to-end scenario:
-The use of X.509 CA is best understood in relation to a concrete example. Consider Company-X, a maker of Smart-X-Widgets designed for professional installation. Company-X outsources both manufacturing and installation. It contracts manufacturer Factory-Y to manufacture the Smart-X-Widgets, and service provider Technician-Z to install. Company-X desires that Smart-X-Widget directly ships from Factory-Y to Technician-Z for installation and that it connects directly to Company-X's instance of IoT Hub after installation without further intervention from Company-X. To make this happen, Company-X need to complete a few one-time setup operations to prime Smart-X-Widget for automatic connection. With the end-to-end scenario in mind, the rest of this article is structured as follows:
+1. Acquire the X.509 CA certificate
-* Acquire the X.509 CA certificate
+2. Register the X.509 CA certificate to IoT Hub
-* Register X.509 CA certificate to IoT Hub
+3. Sign devices into a certificate chain of trust
-* Sign devices into a certificate chain of trust
+4. Connect the devices
-* Device connection
+## Acquire the certificate
-## Acquire the X.509 CA certificate
+Company-X can either purchase an X.509 CA certificate from a public root certificate authority or create one through a self-signed process. Either option entails two basic steps: generating a public/private key pair and signing the public key into a certificate.
-Company-X has the option of purchasing an X.509 CA certificate from a public root certificate authority or creating one through a self-signed process. One option would be optimal over the other depending on the application scenario. Regardless of the option, the process entails two fundamental steps, generating a public/private key pair and signing the public key into a certificate.
+Details on how to accomplish these steps differ with various service providers.
![Flow for generating an X.509 CA certificate](./media/iot-hub-x509ca-concept/csr-flow.png)
-Details on how to accomplish these steps differ with various service providers.
+### Purchasing a certificate
-### Purchasing an X.509 CA certificate
+Purchasing a CA certificate has the benefit of having a well-known root CA act as a trusted third party to vouch for the legitimacy of IoT devices when the devices connect. Choose this option if your devices will interact with third-party products or services.
-Purchasing a CA certificate has the benefit of having a well-known root CA act as a trusted third party to vouch for the legitimacy of IoT devices when the devices connect. Company-X would choose this option if they intend Smart-X-Widget to interact with third-party products or services after initial connection to IoT Hub.
+To purchase an X.509 CA certificate, choose a root certificates service provider. An internet search for the phrase 'Root CA' will yield good leads. The root CA provider will guide you on how to create the public/private key pair and how to generate a certificate signing request (CSR) for their services. A CSR is the formal process of applying for a certificate from a certificate authority. The outcome of this purchase is a certificate for use as an authority certificate. Given the ubiquity of X.509 certificates, the certificate is likely to have been properly formatted to IETF's RFC 5280 standard.
-To purchase an X.509 CA certificate, Company-X would choose a root certificates services provider. An internet search for the phrase 'Root CA' will yield good leads. The root CA will guide Company-X on how to create the public/private key pair and how to generate a Certificate Signing Request (CSR) for their services. A CSR is the formal process of applying for a certificate from a certificate authority. The outcome of this purchase is a certificate for use as an authority certificate. Given the ubiquity of X.509 certificates, the certificate is likely to have been properly formatted to IETF's RFC 5280 standard.
+### Creating a self-signed certificate
-### Creating a Self-Signed X.509 CA certificate
+The process to create a self-signed X.509 CA certificate is similar to purchasing one, except that it doesn't involve a third-party signer like the root certificate authority. In our example, Company-X would sign its authority certificate instead of a root certificate authority.
-The process to create a Self-Signed X.509 CA certificate is similar to purchasing with the exception of involving a third-party signer like the root certificate authority. In our example, Company-X will sign its authority certificate instead of a root certificate authority. Company-X may choose this option for testing until they're ready to purchase an authority certificate. Company-X may also use a self-signed X.509 CA certificate in production, if Smart-X-Widget is not intended to connect to any third-party services outside of the IoT Hub.
+You might choose this option for testing until you're ready to purchase an authority certificate. You could also use a self-signed X.509 CA certificate in production if your devices won't connect to any third-party services outside of IoT Hub.
-## Register the X.509 certificate to IoT Hub
+## Register the certificate to IoT Hub
-Company-X needs to register the X.509 CA to IoT Hub where it will serve to authenticate Smart-X-Widgets as they connect. This is a one-time process that enables the ability to authenticate and manage any number of Smart-X-Widget devices. This is a one-time process because of a one-to-many relationship between CA certificate and device certificates that are signed by the CA certificate or an intermediate certificate. This relationship constitutes one of the main advantages of using the X.509 CA authentication method. The alternative is to upload individual certificate thumbprints for each and every Smart-X-Widget device thereby adding to operational costs.
+Company-X needs to register the X.509 CA certificate to IoT Hub, where it will serve to authenticate Smart-X-Widgets as they connect. Registration is a one-time process that enables the ability to authenticate and manage any number of Smart-X-Widget devices, because of the one-to-many relationship between the CA certificate and the device certificates that are signed by it or by an intermediate certificate. This relationship is one of the main advantages of using the X.509 CA authentication method. The alternative would be to upload individual certificate thumbprints for each and every Smart-X-Widget device, thereby adding to operational costs.
-Registering the X.509 CA certificate is a two-step process, the certificate upload and certificate proof-of-possession.
+Registering the X.509 CA certificate is a two-step process: upload the certificate then provide proof-of-possession.
![Registering an X.509 CA certificate](./media/iot-hub-x509ca-concept/pop-flow.png)
-### X.509 CA Certificate Upload
+### Certificate upload
-The X.509 CA certificate upload process is just that, upload the CA certificate to IoT Hub. IoT Hub expects the certificate in a file. Company-X simply uploads the certificate file. The certificate file MUST NOT under any circumstances contain any private keys. Best practices from standards governing Public Key Infrastructure (PKI) mandates that knowledge of Company-X's private in this case resides exclusively within Company-X.
+The X.509 CA certificate upload process is just that: uploading the CA certificate to IoT Hub. IoT Hub expects the certificate in a file.
-### Proof-of-Possession of the Certificate
+The certificate file must not under any circumstances contain any private keys. Best practices from standards governing Public Key Infrastructure (PKI) mandate that knowledge of Company-X's private key resides exclusively within Company-X.
-The X.509 CA certificate, just like any digital certificate, is public information that is susceptible to eavesdropping. As such, an eavesdropper may intercept a certificate and try to upload it as their own. In our example, IoT Hub would like to make sure that the CA certificate Company-X is uploading really belongs to Company-X. It does so by challenging Company-X to prove that they in fact possess the certificate through a [proof-of-possession (PoP) flow](https://tools.ietf.org/html/rfc5280#section-3.1). The proof-of-possession flow entails IoT Hub generating a random number to be signed by Company-X using its private key. If Company-X followed PKI best practices and protected their private key then only they would be in position to correctly respond to the proof-of-possession challenge. IoT Hub proceeds to register the X.509 CA certificate upon a successful response of the proof-of-possession challenge.
+### Proof-of-possession
+
+The X.509 CA certificate, just like any digital certificate, is public information that is susceptible to eavesdropping. As such, an eavesdropper may intercept a certificate and try to upload it as their own. In our example, IoT Hub has to make sure that the CA certificate Company-X uploaded really belongs to Company-X. It does so by challenging Company-X to prove that they possess the certificate through a [proof-of-possession (PoP) flow](https://tools.ietf.org/html/rfc5280#section-3.1).
+
+For the proof-of-possession flow, IoT Hub generates a random number to be signed by Company-X using its private key. If Company-X followed PKI best practices and protected their private key, then only they would be able to correctly respond to the proof-of-possession challenge. IoT Hub proceeds to register the X.509 CA certificate upon a successful response to the proof-of-possession challenge.
A successful response to the proof-of-possession challenge from IoT Hub completes the X.509 CA registration.
-## Sign Devices into a Certificate Chain of Trust
+## Sign devices into a certificate chain of trust
-IoT requires every device to possess a unique identity. These identities are in the form certificates for certificate-based authentication schemes. In our example, this means every Smart-X-Widget must possess a unique device certificate. How does Company-X setup for this in its supply chain?
+IoT requires a unique identity for every device that connects. When using certificate-based authentication, these identities are in the form of certificates. In our example, this means that every Smart-X-Widget must possess a unique device certificate.
-One way to go about this is to pre-generate certificates for Smart-X-Widgets and entrusting knowledge of corresponding unique device private keys with supply chain partners. For Company-X, this means entrusting Factory-Y and Technician-Z. While this is a valid method, it comes with challenges that must be overcome to ensure trust as follows:
+One way to provide unique certificates on each device is to pre-generate certificates for Smart-X-Widgets and to trust supply chain partners with the corresponding private keys. For Company-X, this means entrusting both Factory-Y and Technician-Z. While this is a valid method, it comes with challenges that must be overcome to ensure trust, as follows:
-1. Having to share device private keys with supply chain partners, besides ignoring PKI best practices of never sharing private keys, makes building trust in the supply chain expensive. It means capital systems like secure rooms to house device private keys, and processes like periodic security audits need to be installed. Both add cost to the supply chain.
+* Having to share device private keys with supply chain partners, besides ignoring PKI best practices of never sharing private keys, makes building trust in the supply chain expensive. It requires systems like secure rooms to house device private keys and processes like periodic security audits. Both add cost to the supply chain.
-2. Securely accounting for devices in the supply chain and later managing them in deployment becomes a one-to-one task for every key-to-device pair from the point of device unique certificate (hence private key) generation to device retirement. This precludes group management of devices unless the concept of groups is explicitly built into the process somehow. Secure accounting and device life-cycle management, therefore, becomes a heavy operations burden. In our example, Company-X would bear this burden.
+* Securely accounting for devices in the supply chain, and later managing them in deployment, becomes a one-to-one task for every key-to-device pair from the point of device unique certificate (and private key) generation to device retirement. This precludes group management of devices unless the concept of groups is explicitly built into the process somehow. Secure accounting and device life-cycle management, therefore, becomes a heavy operations burden.
-X.509 CA certificate authentication offers elegant solutions to afore listed challenges through the use of certificate chains. A certificate chain results from a CA signing an intermediate CA that in turn signs another intermediate CA and so goes on until a final intermediate CA signs a device. In our example, Company-X signs Factory-Y, which in turn signs Technician-Z that finally signs Smart-X-Widget.
+X.509 CA certificate authentication offers elegant solutions to these challenges by using certificate chains. A certificate chain results from a CA signing an intermediate CA that in turn signs another intermediate CA, and so on, until a final intermediate CA signs a device. In our example, Company-X signs Factory-Y, which in turn signs Technician-Z that finally signs Smart-X-Widget.
![Certificate chain hierarchy](./media/iot-hub-x509ca-concept/cert-chain-hierarchy.png)
-Above cascade of certificates in the chain presents the logical hand-off of authority. Many supply chains follow this logical hand-off whereby each intermediate CA gets signed into the chain while receiving all upstream CA certificates, and the last intermediate CA finally signs each device and inject all the authority certificates from the chain into the device. This is common when the contract manufacturing company with a hierarchy of factories commissions a particular factory to do the manufacturing. While the hierarchy may be several levels deep (for example, by geography/product type/manufacturing line), only the factory at the end gets to interact with the device but the chain is maintained from the top of the hierarchy.
+This cascade of certificates in the chain represents the logical hand-off of authority. Many supply chains follow this logical hand-off whereby each intermediate CA gets signed into the chain while receiving all upstream CA certificates, and the last intermediate CA finally signs each device and injects all the authority certificates from the chain into the device. This is common when the contracted manufacturing company with a hierarchy of factories commissions a particular factory to do the manufacturing. While the hierarchy may be several levels deep (for example, by geography/product type/manufacturing line), only the factory at the end gets to interact with the device but the chain is maintained from the top of the hierarchy.
-Alternate chains may have different intermediate CA interact with the device in which case the CA interacting with the device injects certificate chain content at that point. Hybrid models are also possible where only some of the CA has physical interaction with the device.
+Alternate chains may have a different intermediate CA interact with the device, in which case the CA interacting with the device injects certificate chain content at that point. Hybrid models are also possible, where only some of the CAs have physical interaction with the device.
-In our example, both Factory-Y and Technician-Z interact with the Smart-X-Widget. While Company-X owns Smart-X-Widget, it actually does not physically interact with it in the entire supply chain. The certificate chain of trust for Smart-X-Widget therefore comprise Company-X signing Factory-Y which in turn signs Technician-Z that will then provide final signature to Smart-X-Widget. The manufacture and installation of Smart-X-Widget comprise Factory-Y and Technician-Z using their respective intermediate CA certificates to sign each and every Smart-X-Widgets. The end result of this entire process is Smart-X-Widgets with unique device certificates and certificate chain of trust going up to Company-X CA certificate.
+The following diagram shows how the certificate chain of trust comes together in our Smart-X-Widget example.
![Chain of trust from the certs of one company to the certs of another company](./media/iot-hub-x509ca-concept/cert-mfr-chain.png)
-This is a good point to review the value of the X.509 CA method. Instead of pre-generating and handing off certificates for every Smart-X-Widget into the supply chain, Company-X only had to sign Factory-Y once. Instead of having to track every device throughout the device's life-cycle, Company-X may now track and manage devices through groups that naturally emerge from the supply chain process, for example, devices installed by Technician-Z after July of some year.
+1. Company-X never physically interacts with any of the Smart-X-Widgets. It initiates the certificate chain of trust by signing Factory-Y's intermediate CA certificate.
+1. Factory-Y now has its own intermediate CA certificate and a signature from Company-X. It passes copies of these items to the device. It also uses its intermediate CA certificate to sign Technician-Z's intermediate CA certificate and the Smart-X-Widget device certificate.
+1. Technician-Z now has its own intermediate CA certificate and a signature from Factory-Y. It passes copies of these items to the device. It also uses its intermediate CA certificate to sign the Smart-X-Widget device certificate.
+1. Every Smart-X-Widget device now has its own unique device certificate and copies of the public keys and signatures from each intermediate CA certificate that it interacted with throughout the supply chain. These certificates and signatures can be traced back to the original Company-X root.
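The chain-of-trust logic in the steps above can be sketched in code. The following minimal Python model uses textbook RSA with tiny hardcoded primes purely for illustration; it is not production cryptography, and real X.509 certificates carry many more fields (validity periods, extensions, serial numbers). The entity names come from the Smart-X-Widget example.

```python
import hashlib

# Toy model of the Smart-X-Widget certificate chain. Textbook RSA with tiny
# hardcoded primes -- for illustrating the chain-of-trust logic only.

def make_keypair(p, q, e=17):
    """Return (public, private) textbook-RSA keys from two small primes."""
    n, phi = p * q, (p - 1) * (q - 1)
    d = pow(e, -1, phi)                 # modular inverse (Python 3.8+)
    return (e, n), (d, n)

def digest(data: bytes, n: int) -> int:
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % n

def sign(priv, message: bytes) -> int:
    d, n = priv
    return pow(digest(message, n), d, n)

def verify(pub, message: bytes, sig: int) -> bool:
    e, n = pub
    return pow(sig, e, n) == digest(message, n)

def issue_cert(subject, subject_pub, issuer, issuer_priv):
    """The issuer CA signs the subject's name and public key."""
    tbs = f"{subject}|{subject_pub}".encode()   # 'to be signed' content
    return {"subject": subject, "pubkey": subject_pub,
            "issuer": issuer, "sig": sign(issuer_priv, tbs)}

def validate_chain(chain, registered_root_pub):
    """Walk the chain from the root-most cert down, checking each signature."""
    issuer_pub = registered_root_pub
    for cert in reversed(chain):                # root-most cert is last
        tbs = f"{cert['subject']}|{cert['pubkey']}".encode()
        if not verify(issuer_pub, tbs, cert["sig"]):
            return False
        issuer_pub = cert["pubkey"]
    return True

# Company-X's public key stands in for the X.509 CA certificate registered
# with IoT Hub; the device presents the rest of the chain.
cx_pub, cx_priv = make_keypair(101, 113)        # Company-X root
fy_pub, fy_priv = make_keypair(127, 131)        # Factory-Y intermediate
tz_pub, tz_priv = make_keypair(149, 151)        # Technician-Z intermediate
dev_pub, dev_priv = make_keypair(157, 163)      # Smart-X-Widget device key

chain = [
    issue_cert("smart-x-widget-001", dev_pub, "Technician-Z", tz_priv),
    issue_cert("Technician-Z", tz_pub, "Factory-Y", fy_priv),
    issue_cert("Factory-Y", fy_pub, "Company-X", cx_priv),
]
print(validate_chain(chain, cx_pub))            # True
```

Note how validation needs only the registered Company-X public key: every intermediate signature in the chain is checked against its parent, so tampering with any certificate breaks the chain.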
-Last but not least, the CA method of authentication infuses secure accountability into the device manufacturing supply chain. Because of the certificate chain process, the actions of every member in the chain is cryptographically recorded and verifiable.
+The CA method of authentication infuses secure accountability into the device manufacturing supply chain. Because of the certificate chain process, the actions of every member in the chain are cryptographically recorded and verifiable.
-This process relies on certain assumptions that must be surfaced for completeness. It requires independent creation of device unique public/private key pair and that the private key be protected within the device. Fortunately, secure silicon chips in the form of Hardware Secure Modules (HSM) capable of internally generating keys and protecting private keys exist. Company-X only need to add one of such chips into Smart-X-Widget's component bill of materials.
+This process relies on the assumption that the unique device public/private key pair is created independently and that the private key is always protected within the device. Fortunately, secure silicon chips exist in the form of Hardware Secure Modules (HSM) that are capable of internally generating keys and protecting private keys. Company-X only needs to add one such secure chip into Smart-X-Widget's component bill of materials.
-## Device Connection
+## Device connection
-Previous sections above have been building up to device connection. By simply registering an X.509 CA certificate to IoT Hub one time, how do potentially millions of devices connect and get authenticated from the first time? Simple; through the same certificate upload and proof-of-possession flow we earlier encountered with registering the X.509 CA certificate.
+Once the top-level CA certificate is registered to IoT Hub and the devices have their unique certificates, how do potentially millions of devices connect and get authenticated, even for the first time? Through the same certificate upload and proof-of-possession flow that we encountered earlier when registering the X.509 CA certificate.
-Devices manufactured for X.509 CA authentication are equipped with device unique certificates and a certificate chain from their respective manufacturing supply chain. Device connection, even for the very first time, happens in a two-step process: certificate chain upload and proof-of-possession.
+Devices manufactured for X.509 CA authentication are equipped with unique device certificates and a certificate chain from their respective manufacturing supply chain. Device connection, even for the first time, happens in a two-step process: certificate chain upload and proof-of-possession.
-During the certificate chain upload, the device uploads its device unique certificate together with the certificate chain installed within it to IoT Hub. Using the pre-registered X.509 CA certificate, IoT Hub can cryptographically validate a couple of things, that the uploaded certificate chain is internally consistent, and that the chain was originated by the valid owner of the X.509 CA certificate. Just was with the X.509 CA registration process, IoT Hub would initiate a proof-of-possession challenge-response process to ascertain that the chain and hence device certificate actually belongs to the device uploading it. It does so by generating a random challenge to be signed by the device using its private key for validation by IoT Hub. A successful response triggers IoT Hub to accept the device as authentic and grant it connection.
+During the certificate chain upload, the device uploads its unique certificate and its certificate chain to IoT Hub. Using the pre-registered X.509 CA certificate, IoT Hub validates that the uploaded certificate chain is internally consistent and that the chain was originated by the valid owner of the X.509 CA certificate. As with the X.509 CA registration process, IoT Hub uses a proof-of-possession challenge-response process to ascertain that the chain, and therefore the device certificate, belongs to the device uploading it. A successful response triggers IoT Hub to accept the device as authentic and grant it connection.
In our example, each Smart-X-Widget would upload its device unique certificate together with Factory-Y and Technician-Z X.509 CA certificates and then respond to the proof-of-possession challenge from IoT Hub.

![Flow from one cert to another, pop challenge from hub](./media/iot-hub-x509ca-concept/device-pop-flow.png)
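The proof-of-possession exchange itself can be sketched as follows. This is the same toy textbook-RSA model as the illustration convention used here (tiny hardcoded primes, for demonstration only); a real device keeps a large asymmetric key inside an HSM and uses standard signature schemes.

```python
import hashlib
import secrets

# Toy textbook-RSA device keypair -- illustration only.
p, q, e = 101, 113, 17
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))    # device private exponent

def digest(data: bytes) -> int:
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % n

# 1. IoT Hub generates a random challenge.
challenge = secrets.token_bytes(16)

# 2. The device signs the challenge with its private key, which never leaves it.
signature = pow(digest(challenge), d, n)

# 3. IoT Hub verifies the response with the device's public key, taken from
#    the certificate chain that the device uploaded.
is_authentic = pow(signature, e, n) == digest(challenge)
print(is_authentic)                  # True -> IoT Hub grants the connection
```

Because only the holder of the private key can produce a valid signature over a fresh random challenge, a successful response proves possession without the key ever being transmitted.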
-Notice that the foundation of trust rests in protecting private keys including device private keys. We therefore cannot stress enough the importance of secure silicon chips in the form of Hardware Secure Modules (HSM) for protecting device private keys, and the overall best practice of never sharing any private keys, like one factory entrusting another with its private key.
+The foundation of trust rests in protecting private keys, including device private keys. We therefore can't stress enough the importance of secure silicon chips in the form of Hardware Secure Modules (HSM) for protecting device private keys, and the overall best practice of never sharing any private keys, like one factory entrusting another with its private key.
+
+## Next steps
+
+Use the Device Provisioning Service to [Provision multiple X.509 devices using enrollment groups](../iot-dps/tutorial-custom-hsm-enrollment-group-x509.md).
iot-hub Iot Hub X509ca Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-x509ca-overview.md
Previously updated : 07/13/2021 Last updated : 07/14/2022
-# Device Authentication using X.509 CA Certificates
-This article describes how to use X.509 Certificate Authority (CA) certificates to authenticate devices connecting IoT Hub. In this article you will learn:
+# Authenticate devices using X.509 CA certificates
+
+This article describes how to use X.509 certificate authority (CA) certificates to authenticate devices connecting to IoT Hub. In this article you will learn:
* How to get an X.509 CA certificate
* How to register the X.509 CA certificate to IoT Hub
This article describes how to use X.509 Certificate Authority (CA) certificates
[!INCLUDE [iot-hub-include-x509-ca-signed-support-note](../../includes/iot-hub-include-x509-ca-signed-support-note.md)]
-## Overview
-
-The X.509 CA feature enables device authentication to IoT Hub using a Certificate Authority (CA). It greatly simplifies initial device enrollment process, and supply chain logistics during device manufacturing. [Learn more in this scenario article about the value of using X.509 CA certificates](iot-hub-x509ca-concept.md) for device authentication. We encourage you to read this scenario article before proceeding as it explains why the steps that follow exist.
-
-## Prerequisite
+The X.509 CA feature enables device authentication to IoT Hub using a certificate authority (CA). It simplifies the initial device enrollment process as well as supply chain logistics during device manufacturing. If you aren't familiar with X.509 CA certificates, see [Understand how X.509 CA certificates are used in the IoT industry](iot-hub-x509ca-concept.md) for more information.
-Using the X.509 CA feature requires that you have an IoT Hub account. [Learn how to create an IoT Hub instance](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-csharp) if you don't already have one.
-
-## How to get an X.509 CA certificate
+## Get an X.509 CA certificate
The X.509 CA certificate is at the top of the chain of certificates for each of your devices. You may purchase or create one depending on how you intend to use it.
-For production environment, we recommend that you purchase an X.509 CA certificate from a public root certificate authority. Purchasing a CA certificate has the benefit of the root CA acting as a trusted third party to vouch for the legitimacy of your devices. Consider this option if you intend your devices to be part of an open IoT network where they are expected to interact with third-party products or services.
+For production environments, we recommend that you purchase an X.509 CA certificate from a public root certificate authority. Purchasing a CA certificate has the benefit of the root CA acting as a trusted third party to vouch for the legitimacy of your devices. Consider this option if your devices are part of an open IoT network where they will interact with third-party products or services.
You may also create a self-signed X.509 CA for experimentation or for use in closed IoT networks.
-Regardless of how you obtain your X.509 CA certificate, make sure to keep its corresponding private key secret and protected at all times. This is necessary for trust building trust in the X.509 CA authentication.
+Regardless of how you obtain your X.509 CA certificate, make sure to keep its corresponding private key secret and protected at all times. This is necessary for building trust in the X.509 CA authentication.
-Learn how to [create a self-signed CA certificate](https://github.com/Azure/azure-iot-sdk-c/blob/master/tools/CACertificates/CACertificateOverview.md), which you can use for experimentation throughout this feature description.
+Learn how to [create a self-signed CA certificate](https://github.com/Azure/azure-iot-sdk-c/blob/master/tools/CACertificates/CACertificateOverview.md), which you can use for testing.
## Sign devices into the certificate chain of trust
-The owner of an X.509 CA certificate can cryptographically sign an intermediate CA who can in turn sign another intermediate CA, and so on, until the last intermediate CA terminates this process by signing a device. The result is a cascaded chain of certificates known as a certificate chain of trust. In real life this plays out as delegation of trust towards signing devices. This delegation is important because it establishes a cryptographically variable chain of custody and avoids sharing of signing keys.
+The owner of an X.509 CA certificate can cryptographically sign an intermediate CA that can in turn sign another intermediate CA, and so on, until the last intermediate CA terminates this process by signing a device certificate. The result is a cascaded chain of certificates known as a *certificate chain of trust*. In real life this plays out as delegation of trust towards signing devices. This delegation is important because it establishes a cryptographically verifiable chain of custody and avoids sharing of signing keys.
![img-generic-cert-chain-of-trust](./media/generic-cert-chain-of-trust.png)
-The device certificate (also called a leaf certificate) must have the *Subject Name* set to the **Device ID** (`CN=deviceId`) that was used when registering the IoT device in the Azure IoT Hub. This setting is required for authentication.
+The device certificate (also called a leaf certificate) must have the *subject name* set to the **device ID** (`CN=deviceId`) that was used when registering the IoT device in Azure IoT Hub. This setting is required for authentication.
-Learn here how to [create a certificate chain](https://github.com/Azure/azure-iot-sdk-c/blob/master/tools/CACertificates/CACertificateOverview.md) as done when signing devices.
+Learn how to [create a certificate chain](https://github.com/Azure/azure-iot-sdk-c/blob/master/tools/CACertificates/CACertificateOverview.md) as done when signing devices.
-## How to register the X.509 CA certificate to IoT Hub
+## Register the X.509 CA certificate to IoT Hub
-Register your X.509 CA certificate to IoT Hub where it will be used to authenticate your devices during registration and connection. Registering the X.509 CA certificate is a two-step process that comprises certificate file upload and proof of possession.
+Register your X.509 CA certificate to IoT Hub where it will be used to authenticate your devices during registration and connection. Registering the X.509 CA certificate is a two-step process that includes uploading the certificate file and then establishing proof of possession.
The upload process entails uploading a file that contains your certificate. This file should never contain any private keys.
-The proof of possession step involves a cryptographic challenge and response process between you and IoT Hub. Given that digital certificate contents are public and therefore susceptible to eavesdropping, IoT Hub would like to ascertain that you really own the CA certificate. It shall do so by generating a random challenge that you must sign with the CA certificate's corresponding private key. If you kept the private key secret and protected as earlier advised, then only you will possess the knowledge to complete this step. Secrecy of private keys is the source of trust in this method. After signing the challenge, complete this step by uploading a file containing the results.
-
-Learn here how to [register your CA certificate](./tutorial-x509-scripts.md)
+The proof of possession step involves a cryptographic challenge and response process between you and IoT Hub. Given that digital certificate contents are public and therefore susceptible to eavesdropping, IoT Hub has to verify that you really own the CA certificate. It does so by generating a random challenge that you sign with the CA certificate's corresponding private key. If you kept the private key secret and protected as recommended, then only you will possess the knowledge to complete this step. Secrecy of private keys is the source of trust in this method. After signing the challenge, you complete this step by uploading a file containing the results.
-## How to create a device on IoT Hub
+Learn how to [register your CA certificate](./tutorial-x509-scripts.md).
-To preclude device impersonation, IoT Hub requires you to let it know what devices to expect. You do this by creating a device entry in the IoT Hub's device registry. This process is automated when using IoT Hub [Device Provisioning Service](../iot-dps/about-iot-dps.md).
+## Create a device on IoT Hub
-Learn here how to [manually create a device in IoT Hub](./tutorial-x509-scripts.md).
+To prevent device impersonation, IoT Hub requires that you let it know what devices to expect. You do this by creating a device entry in the IoT hub's device registry. This process is automated when using [IoT Hub Device Provisioning Service](../iot-dps/about-iot-dps.md).
-Create an X.509 device for your IoT hub
+Learn how to [manually create a device in IoT Hub](./tutorial-x509-scripts.md).
-## Authenticating devices signed with X.509 CA certificates
+## Authenticate devices signed with X.509 CA certificates
-With X.509 CA certificate registered and devices signed into a certificate chain of trust, what remains is device authentication when the device connects, even for the first time. When an X.509 CA signed device connects, it uploads its certificate chain for validation. The chain includes all intermediate CA and device certificates. With this information, IoT Hub authenticates the device in a two-step process. IoT Hub cryptographically validates the certificate chain for internal consistency, and then issues a proof-of-possession challenge to the device. IoT Hub declares the device authentic on a successful proof-of-possession response from the device. This declaration assumes that the device's private key is protected and that only the device can successfully respond to this challenge. We recommend use of secure chips like Hardware Secure Modules (HSM) in devices to protect private keys.
+With your X.509 CA certificate registered and devices signed into a certificate chain of trust, the final step is device authentication when the device connects. When an X.509 CA-signed device connects, it uploads its certificate chain for validation. The chain includes all intermediate CA and device certificates. With this information, IoT Hub authenticates the device in a two-step process. IoT Hub cryptographically validates the certificate chain for internal consistency, and then issues a proof-of-possession challenge to the device. IoT Hub declares the device authentic on a successful proof-of-possession response from the device. This declaration assumes that the device's private key is protected and that only the device can successfully respond to this challenge. We recommend using secure chips like Hardware Secure Modules (HSM) in devices to protect private keys.
-A successful device connection to IoT Hub completes the authentication process and is also indicative of a proper setup. Every time a device connects, IoT Hub renegotiates the TLS session and verifies the device's X.509 certificate.
+A successful device connection to IoT Hub completes the authentication process and is also indicative of a proper setup. Every time a device connects, IoT Hub renegotiates the TLS session and verifies the device's X.509 certificate.
-Learn here how to [complete this device connection step](./tutorial-x509-scripts.md).
+Learn how to [complete this device connection step](./tutorial-x509-scripts.md).
## Next steps

Learn about [the value of X.509 CA authentication](iot-hub-x509ca-concept.md) in IoT.
-Get started with IoT Hub [Device Provisioning Service](../iot-dps/index.yml).
+Get started with [IoT Hub Device Provisioning Service](../iot-dps/index.yml).
machine-learning Concept Vulnerability Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-vulnerability-management.md
By default, dependencies are layered on top of base images provided by Azure ML
Associated with your Azure Machine Learning workspace is an Azure Container Registry instance that's used as a cache for container images. Any image that's materialized is pushed to the container registry and used if experimentation or deployment is triggered for the corresponding environment. Azure Machine Learning doesn't delete any images from your container registry, and it's your responsibility to evaluate the need for an image over time. To monitor and maintain environment hygiene, you can use [Microsoft Defender for Container Registry](../defender-for-cloud/defender-for-container-registries-usage.md) to help scan your images for vulnerabilities. To automate your processes based on triggers from Microsoft Defender, see [Automate responses to Microsoft Defender for Cloud triggers](../defender-for-cloud/workflow-automation.md).
+## Using a private package repository
+
+Azure Machine Learning uses Conda for package installations. By default, packages are downloaded from public repositories. If your organization requires packages to be sourced only from private repositories, you can override the Conda configuration as part of your base image. The following example configuration removes the default channels and adds a private Conda feed.
+
+```dockerfile
+RUN conda config --set offline false \
+&& conda config --remove channels defaults || true \
+&& conda config --add channels https://my.private.conda.feed/conda/feed
+```
+
+See [use your own dockerfile](how-to-use-environments.md#use-your-own-dockerfile) to learn how to specify your own base images in Azure Machine Learning. For more details on configuring Conda environments, see [Conda - Creating an environment file manually](https://docs.conda.io/projects/conda/en/4.6.1/user-guide/tasks/manage-environments.html#creating-an-environment-file-manually).
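If you also manage Conda dependencies through an environment file, you can pin the private feed there too. The following sketch reuses the same placeholder feed URL as the example above; `nodefaults` is the Conda channel keyword that prevents falling back to the public default channels.

```yaml
# Example environment file pinned to a private Conda feed (placeholder URL).
name: project-environment
channels:
  - https://my.private.conda.feed/conda/feed   # private feed only
  - nodefaults                                 # don't fall back to public defaults
dependencies:
  - python=3.8
```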
+
## Vulnerability management on compute hosts

Managed compute nodes in Azure Machine Learning make use of Microsoft-managed OS VM images and pull the latest updated VM image at the time that a node gets provisioned. This applies to compute instance, compute cluster, and managed inference compute SKUs.
machine-learning How To Troubleshoot Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-online-endpoints.md
To run the `score.py` provided as part of the deployment, Azure creates a contai
### ERROR: ResourceNotFound
-This error occurs when Azure Resource Manager can't find a required resource. For example, you will receive this error if a storage account was referred to but cannot be found at the path on which it was specified. Be sure to double check resources that might have been supplied by exact path or the spelling of their names.
+Below is a list of reasons you might run into this error:
+
+* [Azure Resource Manager cannot find a required resource](#resource-manager-cannot-find-a-resource)
+* [Azure Container Registry is private or otherwise inaccessible](#container-registry-authorization-error)
+
+#### Resource Manager cannot find a resource
+
+This error occurs when Azure Resource Manager can't find a required resource. For example, you will receive this error if a storage account was referred to but can't be found at the specified path. Be sure to double-check the spelling of exact paths or resource names.
+
+For more information, see [Resolve Resource Not Found Errors](../azure-resource-manager/troubleshooting/error-not-found.md).
+
+#### Container registry authorization error
+
+This error occurs when an image belonging to a private or otherwise inaccessible container registry was supplied for deployment.
+At this time, our APIs cannot accept private registry credentials.
+
+To mitigate this error, either ensure that the container registry is **not private** or complete the following steps:
+1. Grant the `acrPull` role on your private registry to the system identity of your online endpoint.
+1. In your environment definition, specify the address of your private image as well as the additional instruction to not modify (build) the image.
+
+If the mitigation is successful, the image will not require any building and the final image address will simply be the given image address.
+At deployment time, your online endpoint's system identity will pull the image from the private registry.
-For more information, see [Resolve resource not found errors](../azure-resource-manager/troubleshooting/error-not-found.md).
+For more diagnostic information, see [How To Use the Workspace Diagnostic API](../machine-learning/how-to-workspace-diagnostic-api.md).
### ERROR: OperationCancelled
machine-learning Tutorial Pipeline Python Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-pipeline-python-sdk.md
Before we dive in the code, you'll need to connect to your Azure ML workspace. T
In the next cell, enter your Subscription ID, Resource Group name and Workspace name. To find your Subscription ID:

1. In the upper right Azure Machine Learning studio toolbar, select your workspace name.
-1. At the bottom, select **View all properties in Azure portal**
-1. Copy the value from Azure portal into the code.
+1. You'll see the values you need for **<SUBSCRIPTION_ID>**, **<RESOURCE_GROUP>**, and **<AML_WORKSPACE_NAME>**.
+1. Copy a value, then close the window and paste that into your code. Open the tool again to get the next value.
:::image type="content" source="media/tutorial-pipeline-python-sdk/find-info.png" alt-text="Screenshot shows how to find values needed for your code.":::
mysql Quickstart Create Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/quickstart-create-terraform.md
Previously updated : 8/23/2022 Last updated : 8/28/2022 # Quickstart: Use Terraform to create an Azure Database for MySQL - Flexible Server
Last updated 8/23/2022
Article tested with the following Terraform and Terraform provider versions:

-- [Terraform v1.2.1](https://releases.hashicorp.com/terraform/)
-- [AzureRM Provider v.2.99.0](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs)
+- [Terraform v1.2.7](https://releases.hashicorp.com/terraform/)
+- [AzureRM Provider v.3.20.0](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs)
[!INCLUDE [About Azure Database for MySQL - Flexible Server](../includes/azure-database-for-mysql-flexible-server-abstract.md)]
purview How To Create Import Export Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-create-import-export-glossary.md
To create a new glossary term, follow these steps:
3. Give your new term a name, which must be unique in the catalog. The term name is case-sensitive, meaning you could have a term called **Sample** and **sample** in the catalog.

4. Add a **Definition**.
+
+### Adding rich text to a definition
+
+Microsoft Purview enables users to add rich formatting to term definitions such as adding bolding, underlining, or italicizing text. Users can also create tables, bulleted lists, or hyperlinks to external resources.
+Below are the rich text formatting options:
+
+| Name | Description | Shortcut key |
+| --- | --- | --- |
+| Bold | Make your text bold. Adding the '*' character around text will also bold it. | Ctrl+B |
+| Italic | Italicize your text. Adding the '_' character around text will also italicize it. | Ctrl+I |
+| Underline | Underline your text. | Ctrl+U |
+| Bullets | Create a bulleted list. Adding the '-' character before text will also create a bulleted list. | |
+| Numbering | Create a numbered list. Adding the '1' character before text will also create a numbered list. | |
+| Heading | Add a formatted heading. | |
+| Font size | Change the size of your text. The default size is 12. | |
+| Decrease indent | Move your paragraph closer to the margin. | |
+| Increase indent | Move your paragraph farther away from the margin. | |
+| Add hyperlink | Create a link in your document for quick access to web pages and files. | |
+| Remove hyperlink | Change a link to plain text. | |
+| Quote | Add quoted text. | |
+| Add table | Add a table to your content. | |
+| Edit table | Insert or delete a column or row from a table. | |
+| Clear formatting | Remove all formatting from a selection of text, leaving only the normal, unformatted text. | |
+| Undo | Undo changes you made to the content. | Ctrl+Z |
+| Redo | Redo changes you made to the content. | Ctrl+Y |
+
+> [!NOTE]
+> Updating a definition with the rich text editor adds an additional attribute `"microsoft_isDescriptionRichText": "true"` to the term payload. This attribute isn't visible on the UX and is automatically populated when any rich text action is taken by the user. See the following snippet of a term JSON message with a rich text definition populated.
+
+>```json
+> {
+> "additionalAttributes": {
+> "microsoft_isDescriptionRichText": "true"
+> }
+> }
+>```
+
5. Set the **Status** for the term. New terms default to **Draft** status.

   :::image type="content" source="media/how-to-create-import-export-glossary/overview-tab.png" alt-text="Screenshot of the status choices.":::
To create a new glossary term, follow these steps:
 - **Draft**: This term isn't yet officially implemented.
 - **Approved**: This term is official/standard/approved.
 - **Expired**: This term should no longer be used.
- - **Alert**: This term needs attention.
+ - **Alert**: This term needs attention.
+ > [!Important]
+ > If an approval workflow is enabled on the term hierarchy, a new term goes through the approval process when it's created, and it's stored in the catalog only after it's approved. To learn how to manage approval workflows for the business glossary, see [Approval workflows for business glossary](how-to-workflow-business-terms-approval.md).
+
6. Add **Resources** and **Acronym**. If the term is part of a hierarchy, you can add parent terms at **Parent** in the overview tab.
7. Add **Synonyms** and **Related terms** in the related tab.
9. Select **Create** to create your term.
+ > [!Important]
+ > If an approval workflow is enabled on the term hierarchy path, you'll see **Submit for approval** instead of the **Create** button. Selecting **Submit for approval** triggers the approval workflow for this term.
+
+ :::image type="content" source="media/how-to-create-import-export-glossary/submit-for-approval.png" alt-text="Screenshot of submit for approval." border="true":::
## Import terms into the glossary

The Microsoft Purview Data Catalog provides a template .csv file for you to import your terms into your Glossary.
Notice that term names are case-sensitive. For example, `Sample` and `saMple` co
> [!Important]
> The system only supports importing columns that are available in the template. The "System Default" template will have all the default attributes.
> However, custom term templates will have out-of-the-box attributes and additional custom attributes defined in the template. Therefore, the .csv file differs in both the total number of columns and the column names, depending on the term template selected. You can also review the file for issues after upload.
+ > If you want to upload a file with a rich text definition, make sure to enter the definition with markup tags and set the **IsDefinitionRichText** column to true in the .csv file.
:::image type="content" source="media/how-to-create-import-export-glossary/select-file-for-import.png" alt-text="Screenshot of the Glossary terms page, select file for Import.":::
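As a hedged illustration of the note above, the following Python sketch builds a .csv row with the **IsDefinitionRichText** column set to true. The other column names here are assumptions for illustration only; the real template contains more columns and varies with the term template you select.

```python
import csv
import io

# Illustrative column set; the actual template depends on the selected term template.
fieldnames = ["Name", "Definition", "IsDefinitionRichText"]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=fieldnames)
writer.writeheader()
writer.writerow({
    "Name": "Revenue",
    "Definition": "<b>Total</b> income from sales",  # definition entered with markup tags
    "IsDefinitionRichText": "true",
})

csv_content = buf.getvalue()
print(csv_content)
```

Rows written this way carry the markup in the definition cell, so the importer can treat the definition as rich text.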
purview How To Workflow Self Service Data Access Hybrid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-workflow-self-service-data-access-hybrid.md
This guide will show you how to create and manage self-service access workflows
The template has the following steps:

1. Trigger when a data access request is made.
1. Approval connector that specifies a user or group that will be contacted to approve the request.
+
+ ### Assign Data owners as approvers
+ Using the dynamic variable **Asset.Owner** as the approver in the approval connector sends approval requests to the data owners of the entity.
+
+ >[!Note]
+ > Since entities may not have the data owner field populated, using this variable might result in errors if no data owner is found.
+
1. Condition to check approval status
   - If approved:
     1. Condition to check if data source is registered for [data use management](how-to-enable-data-use-governance.md) (policy)
sentinel Sap Deploy Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/sap-deploy-troubleshoot.md
The following steps reset the connector and reingest SAP logs from the last 30 m
1. Delete the **metadata.db** file from the **sapcon/[SID]** directory. Run:

   ```bash
- cd ~/sapcon/<SID>
- ls
- mv metadata.db metadata.old
+ cd /opt/sapcon/<SID>
+ rm metadata.db
   ```

   > [!NOTE]
spring-apps How To Config Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-config-server.md
Azure Spring Apps supports Azure DevOps, GitHub, GitLab, and Bitbucket for stori
Additionally, some configurable properties are available only for certain types. The following subsections list the properties for each repository type.
+> [!NOTE]
+> Config Server takes `master` (on Git) as the default label if you don't specify one. However, GitHub has recently changed the default branch from `master` to `main`. To avoid Azure Spring Apps Config Server failure, be sure to pay attention to the default label when setting up Config Server with GitHub, especially for newly-created repositories.
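For example, when configuring the Git backend in Spring Cloud Config Server–style YAML, you can set the label explicitly rather than relying on the default. This is a sketch only; the repository URI is a placeholder:

```yaml
spring:
  cloud:
    config:
      server:
        git:
          uri: https://github.com/contoso/config-repo   # placeholder repository
          default-label: main   # explicitly target 'main' instead of the historical default 'master'
```

Setting the label explicitly avoids startup failures against repositories whose default branch is `main`.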
### Public repository

When you use a public repository, your configurable properties are more limited.
All configurable properties used to set up private Git repository with SSH are l
| `strict-host-key-checking` | No | Indicates whether the Config Server instance will fail to start when using the private `host-key`. Should be *true* (default value) or *false*. |

> [!NOTE]
-> Config Server takes `master` (om Git itself) as the default label if you don't specify one. But GitHub has changed the default branch from `master` to `main` recently. To avoid Azure Spring Apps Config Server failure, be sure to pay attention to the default label when setting up Config Server with GitHub, especially for newly-created repositories.
+> Config Server doesn't support SHA-2 signatures yet. We're actively working to support them in a future release. Until then, use SHA-1 signatures or basic authentication instead.
### Private repository with basic authentication
spring-apps How To Enterprise Application Configuration Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-enterprise-application-configuration-service.md
The following image shows the three types of repository authentication supported
| Host key algorithm | No | The algorithm for `hostKey`: one of `ssh-dss`, `ssh-rsa`, `ecdsa-sha2-nistp256`, `ecdsa-sha2-nistp384`, and `ecdsa-sha2-nistp521`. (Required if supplying `Host key`.) |
| Strict host key checking | No | Optional value that indicates whether the backend should be ignored if it encounters an error when using the provided `Host key`. Valid values are `true` and `false`. The default value is `true`. |
+> [!NOTE]
+> Application Configuration Service for Tanzu doesn't support SHA-2 signatures yet. We're actively working to support them in a future release. Until then, use SHA-1 signatures or basic authentication instead.
+ To validate access to the target URI, select **Validate**. After validation completes successfully, select **Apply** to update the configuration settings. ## Refresh strategies
spring-apps How To Use Tls Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-use-tls-certificate.md
After you grant access to your key vault, you can import your certificate using
1. Select your Key Vault in **Key vault** and the certificate in **Certificate**, then **Select** and **Apply**.
1. When you have successfully imported your certificate, you'll see it in the list of Public Key Certificates.
+> [!NOTE]
+> The Azure Key Vault and Azure Spring Apps instances should be in the same tenant.
### Import a local certificate file

You can import a certificate file stored locally using these steps:
virtual-desktop Multimedia Redirection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/multimedia-redirection.md
Title: Multimedia redirection on Azure Virtual Desktop - Azure
description: How to use multimedia redirection for Azure Virtual Desktop (preview). Previously updated : 04/29/2022 Last updated : 08/27/2022
to do these things:
To learn more about the Insiders program, see [Windows Desktop client for admins](/windows-server/remote/remote-desktop-services/clients/windowsdesktop-admin#configure-user-groups).
-4. Use [the MSI installer (MsMmrHostMri)](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RE4QWrF) to install both the host native component and the multimedia redirection extensions for your internet browser on your Azure VM.
+4. Use [the MSI installer (MsMmrHostMri)](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RE55eRq) to install both the host native component and the multimedia redirection extensions for your internet browser on your Azure VM.
## Managing group policies for the multimedia redirection browser extension
virtual-network Kubernetes Network Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/kubernetes-network-policies.md
Network Policies provides micro-segmentation for pods just like Network Security
![Kubernetes network policies overview](./media/kubernetes-network-policies/kubernetes-network-policies-overview.png)
-Azure NPM implementation works in conjunction with the Azure CNI that provides VNet integration for containers. NPM is supported only on Linux and Windows Server 2022 today. The implementation enforces traffic filtering by configuring allow and deny IP rules in Linux IPTables or Windows HNS ACLPolicies based on the defined policies. These rules are grouped together using Linux IPSets or Windows HNS SetPolicies.
+Azure NPM implementation works in conjunction with the Azure CNI that provides VNet integration for containers. NPM is supported only on Linux today. The implementation enforces traffic filtering by configuring allow and deny IP rules in Linux IPTables based on the defined policies. These rules are grouped together using Linux IPSets.
## Planning security for your Kubernetes cluster When implementing security for your cluster, use network security groups (NSGs) to filter traffic entering and leaving your cluster subnet (North-South traffic). Use Azure NPM for traffic between pods in your cluster (East-West traffic).
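As a hedged example of the East-West traffic filtering that Azure NPM enforces, here's a standard Kubernetes NetworkPolicy (the namespace, names, labels, and port are illustrative) that allows only frontend pods to reach backend pods:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend   # illustrative name
  namespace: demo                   # illustrative namespace
spec:
  podSelector:
    matchLabels:
      app: backend                  # policy applies to backend pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend         # only frontend pods may connect
      ports:
        - protocol: TCP
          port: 8080
```

When this policy is applied, NPM translates it into the corresponding IPTables rules and IPSets on each Linux node.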
web-application-firewall Resource Manager Template Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/resource-manager-template-samples.md
+
+ Title: Azure Resource Manager templates for Azure Front Door and Web Application Firewall
+description: Azure Resource Manager templates for Azure Front Door Web Application Firewall
++++ Last updated : 08/16/2022+
+zone_pivot_groups: front-door-tiers
+
+# Azure Resource Manager templates for Azure Front Door and Web Application Firewall
+
+The following table includes links to Azure Resource Manager templates for Azure Front Door and Web Application Firewall.
++
+| Template | Description |
+| -- | -- |
+| [Front Door with Web Application Firewall and managed rule set](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.cdn/front-door-premium-waf-managed/) | Creates a Front Door profile and WAF with managed rule set. |
+| [Front Door with Web Application Firewall and custom rule](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.cdn/front-door-standard-premium-waf-custom/) | Creates a Front Door profile and WAF with custom rule. |
+| [Front Door with Web Application Firewall and rate limit](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.cdn/front-door-standard-premium-rate-limit/) | Creates a Front Door profile and WAF with a custom rule to perform rate limiting. |
+| [Front Door with Web Application Firewall and geo-filtering](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.cdn/front-door-standard-premium-geo-filtering/) | Creates a Front Door profile and WAF with a custom rule to perform geo-filtering. |
+++
+| Template | Description |
+| | |
+| [Create Front Door with geo filtering](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/front-door-geo-filtering)| Create a Front Door that allows/blocks traffic from certain countries/regions. |
+| [Configure Front Door for client IP allowlisting or blocklisting](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/front-door-waf-clientip)| Configures a Front Door to restrict traffic from certain client IPs by using custom access control. |
+| [Configure Front Door to take action with specific http parameters](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/front-door-waf-http-params)| Configures a Front Door to allow or block certain traffic based on the HTTP parameters in the incoming request, by using custom rules for access control. |
+| [Configure Front Door rate limiting](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/front-door-rate-limiting)| Configures a Front Door to rate limit incoming traffic for a given frontend host. |
+
web-application-firewall Waf Front Door Drs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-drs.md
Previously updated : 06/15/2022 Last updated : 08/28/2022 # Web Application Firewall DRS rule groups and rules Azure Front Door web application firewall (WAF) protects web applications from common vulnerabilities and exploits. Azure-managed rule sets provide an easy way to deploy protection against a common set of security threats. Since such rule sets are managed by Azure, the rules are updated as needed to protect against new attack signatures. Default rule set also includes the Microsoft Threat Intelligence Collection rules that are written in partnership with the Microsoft Intelligence team to provide increased coverage, patches for specific vulnerabilities, and better false positive reduction. - ## Default rule sets
-Azure-managed Default Rule Set includes rules against the following threat categories:
+The Azure-managed Default Rule Set (DRS) includes rules against the following threat categories:
- Cross-site scripting - Java attacks
Azure-managed Default Rule Set includes rules against the following threat categ
- SQL injection protection - Protocol attackers
-The version number of the Default Rule Set increments when new attack signatures are added to the rule set.
-Default Rule Set is enabled by default in Detection mode in your WAF policies. You can disable or enable individual rules within the Default Rule Set to meet your application requirements. You can also set specific actions (ALLOW/BLOCK/REDIRECT/LOG) per rule.
+The version number of the DRS increments when new attack signatures are added to the rule set.
+
+DRS is enabled by default in Detection mode in your WAF policies. You can disable or enable individual rules within the Default Rule Set to meet your application requirements. You can also set specific actions per rule. The available actions are: [Allow, Block, Log, and Redirect](afds-overview.md#waf-actions).
-Sometimes you may need to omit certain request attributes from a WAF evaluation. A common example is Active Directory-inserted tokens that are used for authentication. You may configure an exclusion list for a managed rule, rule group, or for the entire rule set.
+Sometimes you might need to omit certain request attributes from a WAF evaluation. A common example is Active Directory-inserted tokens that are used for authentication. You may configure an exclusion list for a managed rule, rule group, or for the entire rule set. For more information, see [Web Application Firewall (WAF) with Front Door exclusion lists](./waf-front-door-exclusion.md).
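As an illustration, an exclusion for a managed rule set in a Front Door WAF policy resource can look like the following JSON sketch. The selector value is a placeholder; see the linked article for the supported match variables and operators.

```json
{
  "exclusions": [
    {
      "matchVariable": "RequestHeaderNames",
      "selectorMatchOperator": "Equals",
      "selector": "Authorization"
    }
  ]
}
```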
-The Default action is to BLOCK. Additionally, custom rules can be configured in the same WAF policy if you wish to bypass any of the pre-configured rules in the Default Rule Set.
+By default, DRS blocks requests that trigger the rules. Additionally, custom rules can be configured in the same WAF policy if you wish to bypass any of the pre-configured rules in the Default Rule Set.
Custom rules are always applied before rules in the Default Rule Set are evaluated. If a request matches a custom rule, the corresponding rule action is applied. The request is either blocked or passed through to the back-end. No other custom rules or the rules in the Default Rule Set are processed. You can also remove the Default Rule Set from your WAF policies. ### Microsoft Threat Intelligence Collection rules
-The Microsoft Threat Intelligence Collection rules are written in partnership with the Microsoft Intelligence team to provide increased coverage, patches for specific vulnerabilities, and better false positive reduction.
+The Microsoft Threat Intelligence Collection rules are written in partnership with the Microsoft Threat Intelligence team to provide increased coverage, patches for specific vulnerabilities, and better false positive reduction.
-### Anomaly Scoring mode
+### <a name="anomaly-scoring-mode"></a>Anomaly scoring
-OWASP has two modes for deciding whether to block traffic: Traditional mode and Anomaly Scoring mode.
+When you use DRS 2.0 or later, your WAF uses *anomaly scoring*. Traffic that matches any rule isn't immediately blocked, even when your WAF is in prevention mode. Instead, the OWASP rule sets define a severity for each rule: *Critical*, *Error*, *Warning*, or *Notice*. The severity affects a numeric value for the request, which is called the *anomaly score*:
-In Traditional mode, traffic that matches any rule is considered independently of any other rule matches. This mode is easy to understand. But the lack of information about how many rules match a specific request is a limitation. So, Anomaly Scoring mode was introduced. It's the default for OWASP 3.*x*.
+| Rule severity | Value contributed to anomaly score |
+|-|-|
+| Critical | 5 |
+| Error | 4 |
+| Warning | 3 |
+| Notice | 2 |
-In Anomaly Scoring mode, traffic that matches any rule isn't immediately blocked when the firewall is in Prevention mode. Rules have a certain severity: *Critical*, *Error*, *Warning*, or *Notice*. That severity affects a numeric value for the request, which is called the Anomaly Score. For example, one *Warning* rule match contributes 3 to the score. One *Critical* rule match contributes 5.
+If the anomaly score is 5 or greater, WAF blocks the request.
-|Severity |Value |
-|||
-|Critical |5|
-|Error |4|
-|Warning |3|
-|Notice |2|
+For example, a single *Critical* rule match is enough for the WAF to block a request, because the overall anomaly score is 5. However, one *Warning* rule match only increases the anomaly score by 3, which isn't enough by itself to block the traffic.
-There's a threshold of 5 for the Anomaly Score to block traffic. So, a single *Critical* rule match is enough for the WAF to block a request, even in Prevention mode. But one *Warning* rule match only increases the Anomaly Score by 3, which isn't enough by itself to block the traffic. For more information, see [What content types does WAF support?](waf-faq.yml#what-content-types-does-waf-support-) in the FAQ to learn what content types are supported for body inspection with different DRS versions.
+When your WAF uses an older version of the default rule set (before DRS 2.0), your WAF runs in traditional mode. Traffic that matches any rule is considered independently of any other rule matches. In traditional mode, you don't have visibility into the complete set of rules that a specific request matched.
+The version of the DRS that you use also determines which content types are supported for request body inspection. For more information, see [What content types does WAF support?](waf-faq.yml#what-content-types-does-waf-support-) in the FAQ.
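The anomaly scoring calculation described above can be sketched in Python. This is a simplified illustration of the scoring rule, not the WAF's actual implementation:

```python
# Severity values and the blocking threshold, as described in the table above.
SEVERITY_SCORES = {"Critical": 5, "Error": 4, "Warning": 3, "Notice": 2}
BLOCK_THRESHOLD = 5

def anomaly_score(matched_rule_severities):
    """Sum the contribution of every matched rule's severity."""
    return sum(SEVERITY_SCORES[s] for s in matched_rule_severities)

def is_blocked(matched_rule_severities):
    """The WAF blocks the request when the total score reaches 5."""
    return anomaly_score(matched_rule_severities) >= BLOCK_THRESHOLD

print(is_blocked(["Critical"]))           # True: 5 >= 5
print(is_blocked(["Warning"]))            # False: 3 < 5
print(is_blocked(["Warning", "Notice"]))  # True: 3 + 2 = 5
```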
### DRS 2.0
-DRS 2.0 includes 17 rule groups, as shown in the following table. Each group contains multiple rules, which can be disabled.
+DRS 2.0 includes 17 rule groups, as shown in the following table. Each group contains multiple rules, and you can disable individual rules as well as entire rule groups.
> [!NOTE] > DRS 2.0 is only available on Azure Front Door Premium.
DRS 2.0 includes 17 rule groups, as shown in the following table. Each group con
|**[MS-ThreatIntel-WebShells](#drs9905-10)**|Protect against Web shell attacks|
|**[MS-ThreatIntel-CVEs](#drs99001-10)**|Protect against CVE attacks|

---

### Bot rules

|Rule group|Description|
DRS 2.0 includes 17 rule groups, as shown in the following table. Each group con
|**[GoodBots](#bot200)**|Identify good bots|
|**[UnknownBots](#bot300)**|Identify unknown bots|

---
-The following rule groups and rules are available when using Web Application Firewall on Azure
-Front Door.
+The following rule groups and rules are available when using Web Application Firewall on Azure Front Door.
# [DRS 2.0](#tab/drs20)
Front Door.
>[!NOTE]
> This article contains references to the term *blacklist*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.

### <a name="drs942-20"></a> SQLI - SQL Injection

|RuleId|Description|
|||
Front Door.
|942500|MySQL in-line comment detected.|
|942510|SQLi bypass attempt by ticks or backticks detected.|

### <a name="drs943-20"></a> SESSION-FIXATION

|RuleId|Description|
|||
Front Door.
|99001015|Attempted Spring Framework unsafe class object exploitation [CVE-2022-22965](https://www.cve.org/CVERecord?id=CVE-2022-22965)| |99001016|Attempted Spring Cloud Gateway Actuator injection [CVE-2022-22947](https://www.cve.org/CVERecord?id=CVE-2022-22947)
+> [!NOTE]
+> When reviewing your WAF's logs, you might see rule ID 949110. The description of the rule might include *Inbound Anomaly Score Exceeded*.
+>
+> This rule indicates that the total anomaly score for the request exceeded the maximum allowable score. For more information, see [Anomaly scoring](#anomaly-scoring-mode).
+>
+> When you tune your WAF policies, you need to investigate the other rules that were triggered by the request so that you can adjust your WAF's configuration. For more information, see [Tuning Web Application Firewall (WAF) for Azure Front Door](waf-front-door-tuning.md).
+ # [DRS 1.1](#tab/drs11) ## <a name="drs11"></a> 1.1 rule sets
web-application-firewall Waf Front Door Exclusion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-exclusion.md
-# Web Application Firewall (WAF) with Front Door Service exclusion lists
+# Web Application Firewall (WAF) with Front Door exclusion lists
Sometimes Web Application Firewall (WAF) might block a request that you want to allow for your application. WAF exclusion lists allow you to omit certain request attributes from a WAF evaluation. The rest of the request is evaluated as normal.
web-application-firewall Waf Front Door Tuning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-tuning.md
Previously updated : 12/11/2020 Last updated : 08/21/2022
Another way to view request and response headers is to look inside the developer
If the request contains cookies, the Cookies tab can be selected to view them in Fiddler. Cookie information can also be used to create exclusions or custom rules in WAF.
+## Anomaly scoring rule
+
+If you see rule ID 949110 while you're tuning your WAF, it indicates that the request was blocked by the [anomaly scoring](waf-front-door-drs.md#anomaly-scoring-mode) process.
+
+Review the other WAF log entries for the same request by searching for log entries with the same tracking reference. Look at each of the rules that were triggered, and tune each one by following the guidance throughout this article.
+ ## Next steps - Learn about [Azure web application firewall](../overview.md).
web-application-firewall Create Custom Waf Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/create-custom-waf-rules.md
Previously updated : 11/20/2020 Last updated : 08/22/2022
$rule = New-AzApplicationGatewayFirewallCustomRule `
And here is the corresponding JSON: ```json
- {
- "customRules": [
- {
- "name": "blockEvilBot",
- "ruleType": "MatchRule",
- "priority": 2,
- "action": "Block",
- "matchConditions": [
- {
- "matchVariable": "RequestHeaders",
- "operator": "User-Agent",
- "matchValues": [
- "evilbot"
- ]
- }
- ]
- }
- ]
- }
+{
+ "customRules": [
+ {
+ "name": "blockEvilBot",
+ "priority": 2,
+ "ruleType": "MatchRule",
+ "action": "Block",
+ "matchConditions": [
+ {
+ "matchVariables": [
+ {
+ "variableName": "RequestHeaders",
+ "selector": "User-Agent"
+ }
+ ],
+ "operator": "Contains",
+ "negationConditon": false,
+ "matchValues": [
+ "evilbot"
+ ],
+ "transforms": [
+ "Lowercase"
+ ]
+ }
+ ]
+ }
+ ]
+}
```

To see a WAF deployed using this custom rule, see [Configure a Web Application Firewall custom rule using Azure PowerShell](configure-waf-custom-rules.md).
$rule = New-AzApplicationGatewayFirewallCustomRule `
And the corresponding JSON: ```json
- {
- "customRules": [
- {
- "name": "blockEvilBot",
- "ruleType": "MatchRule",
- "priority": 2,
- "action": "Block",
- "matchConditions": [
- {
- "matchVariable": "RequestHeaders",
- "operator": "User-Agent",
- "matchValues": [
- "evilbot"
- ]
- }
- ]
- }
- ]
- }
+{
+ "customRules": [
+ {
+ "name": "blockEvilBot",
+ "priority": 2,
+ "ruleType": "MatchRule",
+ "action": "Block",
+ "matchConditions": [
+ {
+ "matchVariables": [
+ {
+ "variableName": "RequestHeaders",
+ "selector": "User-Agent"
+ }
+ ],
+ "operator": "Regex",
+ "negationConditon": false,
+ "matchValues": [
+ "evilbot"
+ ],
+ "transforms": [
+ "Lowercase"
+ ]
+ }
+ ]
+ }
+ ]
+}
``` ## Example 2
$rule = New-AzApplicationGatewayFirewallCustomRule `
And the corresponding JSON: ```json
- {
- "customRules": [
- {
- "name": "allowUS",
- "ruleType": "MatchRule",
- "priority": 2,
- "action": "Block",
- "matchConditions": [
- {
- "matchVariable": "RemoteAddr",
- "operator": "GeoMatch",
- "NegationConditon": false,
- "matchValues": [
- "US"
- ]
- }
- ]
- }
- ]
- }
+{
+ "customRules": [
+ {
+ "name": "allowUS",
+ "priority": 2,
+ "ruleType": "MatchRule",
+ "action": "Block",
+ "matchConditions": [
+ {
+ "matchVariables": [
+ {
+ "variableName": "RemoteAddr"
+ }
+ ],
+ "operator": "GeoMatch",
+ "negationConditon": true,
+ "matchValues": [
+ "US"
+ ],
+ "transforms": [
+ "Lowercase"
+ ]
+ }
+ ]
+ }
+ ]
+}
```

## Example 3

You want to block all requests from IP addresses in the range 192.168.5.0/24.
$rule = New-AzApplicationGatewayFirewallCustomRule `
Here's the corresponding JSON: ```json
- {
- "customRules": [
- {
- "name": "myrule1",
- "ruleType": "MatchRule",
- "priority": 10,
- "action": "Block",
- "matchConditions": [
- {
- "matchVariable": "RemoteAddr",
- "operator": "IPMatch",
- "matchValues": [
- "192.168.5.0/24"
- ]
- }
- ]
- }
- ]
- }
+{
+ "customRules": [
+ {
+ "name": "myrule1",
+ "priority": 10,
+ "ruleType": "MatchRule",
+ "action": "Block",
+ "matchConditions": [
+ {
+ "matchVariables": [
+ {
+ "variableName": "RemoteAddr"
+ }
+ ],
+ "operator": "IPMatch",
+ "negationConditon": false,
+ "matchValues": [
+ "192.168.5.0/24"
+ ],
+ "transforms": []
+ }
+ ]
+ }
+ ]
+}
``` Corresponding CRS rule:
$condition2 = New-AzApplicationGatewayFirewallCondition `
Here's the corresponding JSON: ```json
-{
-
- "customRules": [
- {
- "name": "myrule",
- "ruleType": "MatchRule",
- "priority": 10,
- "action": "block",
- "matchConditions": [
- {
- "matchVariable": "RemoteAddr",
- "operator": "IPMatch",
- "negateCondition": false,
- "matchValues": [
- "192.168.5.0/24"
- ]
- },
- {
- "matchVariable": "RequestHeaders",
- "selector": "User-Agent",
- "operator": "Contains",
- "transforms": [
- "Lowercase"
- ],
- "matchValues": [
- "evilbot"
- ]
- }
- ]
- }
- ]
- }
+{
+ "customRules": [
+ {
+ "name": "myrule",
+ "priority": 10,
+ "ruleType": "MatchRule",
+ "action": "Block",
+ "matchConditions": [
+ {
+ "matchVariables": [
+ {
+ "variableName": "RemoteAddr"
+ }
+ ],
+ "operator": "IPMatch",
+ "negationConditon": false,
+ "matchValues": [
+ "192.168.5.0/24"
+ ],
+ "transforms": []
+ },
+ {
+ "matchVariables": [
+ {
+ "variableName": "RequestHeaders",
+ "selector": "User-Agent"
+ }
+ ],
+ "operator": "Contains",
+ "negationConditon": false,
+ "matchValues": [
+ "evilbot"
+ ],
+ "transforms": [
+ "Lowercase"
+ ]
+ }
+ ]
+ }
+ ]
+}
``` ## Example 5
And the corresponding JSON:
```json {
- "customRules": [
- {
- "name": "myrule1",
- "ruleType": "MatchRule",
- "priority": 10,
- "action": "block",
- "matchConditions": [
- {
- "matchVariable": "RequestHeaders",
- "operator": "IPMatch",
- "negateCondition": true,
- "matchValues": [
- "192.168.5.0/24"
- ]
- }
- ]
- },
- {
- "name": "myrule2",
- "ruleType": "MatchRule",
- "priority": 20,
- "action": "block",
- "matchConditions": [
- {
- "matchVariable": "RequestHeaders",
- "selector": "User-Agent",
- "operator": "Contains",
- "negateCondition": true,
- "transforms": [
- "Lowercase"
- ],
- "matchValues": [
- "chrome"
- ]
- }
- ]
- }
- ]
- }
+ "customRules": [
+ {
+ "name": "myrule1",
+ "priority": 10,
+ "ruleType": "MatchRule",
+ "action": "Block",
+ "matchConditions": [
+ {
+ "matchVariables": [
+ {
+ "variableName": "RemoteAddr"
+ }
+ ],
+ "operator": "IPMatch",
+ "negationConditon": true,
+ "matchValues": [
+ "192.168.5.0/24"
+ ],
+ "transforms": []
+ }
+ ]
+ },
+ {
+ "name": "myrule2",
+ "priority": 20,
+ "ruleType": "MatchRule",
+ "action": "Block",
+ "matchConditions": [
+ {
+ "matchVariables": [
+ {
+ "variableName": "RequestHeaders",
+ "selector": "User-Agent"
+ }
+ ],
+ "operator": "Contains",
+ "negationConditon": true,
+ "matchValues": [
+ "chrome"
+ ],
+ "transforms": [
+ "Lowercase"
+ ]
+ }
+ ]
+ }
+ ]
+}
``` ## Example 6
-You want to block custom SQLI. Since the logic used here is **or**, and all the values are in the *RequestUri*, all of the *MatchValues* can be in a comma-separated list.
-
-Logic: p **or** q **or** r
+You want to only allow requests from specific known user agents.
-```azurepowershell
-$variable1 = New-AzApplicationGatewayFirewallMatchVariable `
- -VariableName RequestUri
-$condition1 = New-AzApplicationGatewayFirewallCondition `
- -MatchVariable $variable1 `
- -Operator Contains `
- -MatchValue "1=1", "drop tables", "'--" `
- -NegationCondition $False
+Because the logic used here is **or**, and all the values are in the *User-Agent* header, all of the *MatchValues* can be in a comma-separated list.
-$rule1 = New-AzApplicationGatewayFirewallCustomRule `
- -Name myrule4 `
- -Priority 10 `
- -RuleType MatchRule `
- -MatchCondition $condition1 `
- -Action Block
-```
-
-Corresponding JSON:
-
-```json
- {
- "customRules": [
- {
- "name": "myrule4",
- "ruleType": "MatchRule",
- "priority": 10,
- "action": "block",
- "matchConditions": [
- {
- "matchVariable": "RequestUri",
- "operator": "Contains",
- "matchValues": [
- "1=1",
- "drop tables",
- "'--"
- ]
- }
- ]
- }
- ]
- }
-```
-
-Alternative Azure PowerShell:
+Logic: p **or** q **or** r
```azurepowershell
-$variable1 = New-AzApplicationGatewayFirewallMatchVariable `
- -VariableName RequestUri
-$condition1 = New-AzApplicationGatewayFirewallCondition `
- -MatchVariable $variable1 `
- -Operator Contains `
- -MatchValue "1=1" `
- -NegationCondition $False
-
-$rule1 = New-AzApplicationGatewayFirewallCustomRule `
- -Name myrule1 `
- -Priority 10 `
- -RuleType MatchRule `
- -MatchCondition $condition1 `
- -Action Block
-
-$variable2 = New-AzApplicationGatewayFirewallMatchVariable `
- -VariableName RequestUri
-
-$condition2 = New-AzApplicationGatewayFirewallCondition `
- -MatchVariable $variable2 `
- -Operator Contains `
- -MatchValue "drop tables" `
- -NegationCondition $False
-
-$rule2 = New-AzApplicationGatewayFirewallCustomRule `
- -Name myrule2 `
- -Priority 20 `
- -RuleType MatchRule `
- -MatchCondition $condition2 `
- -Action Block
-
-$variable3 = New-AzApplicationGatewayFirewallMatchVariable `
- -VariableName RequestUri
-
-$condition3 = New-AzApplicationGatewayFirewallCondition `
- -MatchVariable $variable3 `
- -Operator Contains `
- -MatchValue "'--" `
- -NegationCondition $False
+$variable = New-AzApplicationGatewayFirewallMatchVariable `
+ -VariableName RequestHeaders `
+ -Selector User-Agent
+$condition = New-AzApplicationGatewayFirewallCondition `
+ -MatchVariable $variable `
+ -Operator Equal `
+ -MatchValue @('user1', 'user2') `
+ -NegationCondition $True
-$rule3 = New-AzApplicationGatewayFirewallCustomRule `
- -Name myrule3 `
- -Priority 30 `
+$rule = New-AzApplicationGatewayFirewallCustomRule `
+ -Name BlockUnknownUserAgents `
+ -Priority 2 `
-RuleType MatchRule `
- -MatchCondition $condition3 `
+ -MatchCondition $condition `
-Action Block
```

Corresponding JSON:

```json
- {
- "customRules": [
- {
- "name": "myrule1",
- "ruleType": "MatchRule",
- "priority": 10,
- "action": "block",
- "matchConditions": [
- {
- "matchVariable": "RequestUri",
- "operator": "Contains",
- "matchValues": [
- "1=1"
- ]
- }
- ]
- },
- {
- "name": "myrule2",
- "ruleType": "MatchRule",
- "priority": 20,
- "action": "block",
- "matchConditions": [
- {
- "matchVariable": "RequestUri",
- "operator": "Contains",
- "transforms": [
- "Lowercase"
- ],
- "matchValues": [
- "drop tables"
- ]
- }
- ]
- },
- {
- "name": "myrule3",
- "ruleType": "MatchRule",
- "priority": 30,
- "action": "block",
- "matchConditions": [
- {
- "matchVariable": "RequestUri",
- "operator": "Contains",
- "matchValues": [
- "'--"
- ]
- }
- ]
- }
- ]
- }
+{
+ "customRules": [
+ {
+ "name": "BlockUnknownUserAgents",
+ "priority": 2,
+ "ruleType": "MatchRule",
+ "action": "Block",
+ "matchConditions": [
+ {
+ "matchVariables": [
+ {
+ "variableName": "RequestHeaders",
+ "selector": "User-Agent"
+ }
+ ],
+ "operator": "Equal",
+ "negationConditon": true,
+ "matchValues": [
+ "user1",
+ "user2"
+ ],
+ "transforms": []
+ }
+ ]
+ }
+ ]
+}
```

## Example 7
-It is not uncommon to see Azure Front Door deployed in front of Application Gateway. In order to make sure the traffic received by Application Gateway comes from the Front Door deployment, the best practice is to check if the `X-Azure-FDID` header contains the expected unique value. For more information on this, please see [How to lock down the access to my backend to only Azure Front Door](../../frontdoor/front-door-faq.yml#how-do-i-lock-down-the-access-to-my-backend-to-only-azure-front-door-)
+It is not uncommon to see Azure Front Door deployed in front of Application Gateway. In order to make sure the traffic received by Application Gateway comes from the Front Door deployment, the best practice is to check if the `X-Azure-FDID` header contains the expected unique value. For more information on this, please see [How to lock down the access to my backend to only Azure Front Door](../../frontdoor/front-door-faq.yml#how-do-i-lock-down-the-access-to-my-backend-to-only-azure-front-door-)
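Based on the JSON shown later in this example, the full rule can be sketched in Azure PowerShell as follows. This mirrors the cmdlet pattern used in the earlier examples; the `xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx` value is a placeholder for your own Front Door ID, and the `Lowercase` transform from the JSON is omitted here for brevity:

```azurepowershell
# Match on the X-Azure-FDID request header
$variable = New-AzApplicationGatewayFirewallMatchVariable `
   -VariableName RequestHeaders `
   -Selector X-Azure-FDID

# Negated condition: matches when the header does NOT equal the expected ID
$condition = New-AzApplicationGatewayFirewallCondition `
   -MatchVariable $variable `
   -Operator Equal `
   -MatchValue "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" `
   -NegationCondition $True

# Block any traffic that didn't arrive through the Front Door deployment
$rule = New-AzApplicationGatewayFirewallCustomRule `
   -Name blockNonAFDTraffic `
   -Priority 2 `
   -RuleType MatchRule `
   -MatchCondition $condition `
   -Action Block
```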
Logic: **not** p

```azurepowershell
$rule = New-AzApplicationGatewayFirewallCustomRule `
```

And here is the corresponding JSON:

```json
- {
- "customRules": [
- {
- "name": "blockNonAFDTraffic",
- "priority": 2,
- "ruleType": "MatchRule",
- "action": "Block",
- "matchConditions": [
- {
- "matchVariables": [
- {
- "variableName": "RequestHeaders",
- "selector": "X-Azure-FDID"
- }
- ],
- "operator": "Equal",
- "negationConditon": true,
- "matchValues": [
- "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
- ],
- "transforms": [
- "Lowercase"
- ]
- }
- ]
- }
- ]
- }
+{
+ "customRules": [
+ {
+ "name": "blockNonAFDTraffic",
+ "priority": 2,
+ "ruleType": "MatchRule",
+ "action": "Block",
+ "matchConditions": [
+ {
+ "matchVariables": [
+ {
+ "variableName": "RequestHeaders",
+ "selector": "X-Azure-FDID"
+ }
+ ],
+ "operator": "Equal",
+ "negationConditon": true,
+ "matchValues": [
+ "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
+ ],
+ "transforms": [
+ "Lowercase"
+ ]
+ }
+ ]
+ }
+ ]
+}
```

## Next steps
web-application-firewall Resource Manager Template Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/resource-manager-template-samples.md
Last updated 09/28/2019
-# Azure Resource Manager templates for Azure Web Application Firewall
+# Azure Resource Manager templates for Azure Application Gateway and Web Application Firewall
-The following table includes links to Azure Resource Manager templates for Azure Web Application Firewall.
+The following table includes links to Azure Resource Manager templates for Azure Application Gateway and Web Application Firewall.
| Template | Description |
| -- | -- |