Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
active-directory | Concept Certificate Based Authentication Certificateuserids | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-certificate-based-authentication-certificateuserids.md | GET https://graph.microsoft.com/v1.0/users?$filter=startswith(certificateUserIds, 'bob@contoso.com') GET https://graph.microsoft.com/v1.0/users?$filter=certificateUserIds eq 'bob@contoso.com' ``` +## Update certificate user IDs using Microsoft Graph queries +PATCH the user object certificateUserIds value for a given userId. ++#### Request body: ++```http +PATCH https://graph.microsoft.com/v1.0/users/{id} +Content-Type: application/json +{ ++ "@odata.context": "https://graph.microsoft.com/v1.0/$metadata#users(authorizationInfo,department)/$entity", + "department": "Accounting", + "authorizationInfo": { + "certificateUserIds": [ + "X509:<PN>123456789098765@mil" + ] + } +} +``` ++ ## Next steps - [Overview of Azure AD CBA](concept-certificate-based-authentication.md) |
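A hedged complement to the excerpt above: after the PATCH, the value can be read back by projecting the `authorizationInfo` property on the user object. This is a minimal sketch using the standard Microsoft Graph `$select` query option; the `{id}` placeholder is the same illustrative user ID used in the excerpt.

```http
GET https://graph.microsoft.com/v1.0/users/{id}?$select=id,userPrincipalName,authorizationInfo
```

The response echoes the `certificateUserIds` array inside `authorizationInfo`, which is a quick way to confirm the update took effect.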
active-directory | Howto Mfa App Passwords | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-app-passwords.md | Modern authentication is supported for the Microsoft Office 2013 clients and later. This article shows you how to use app passwords for legacy applications that don't support multi-factor authentication prompts. >[!NOTE]-> App passwords don't work with Conditional Access based multi-factor authentication policies and modern authentication. +> App passwords don't work with Conditional Access based multi-factor authentication policies and modern authentication. App passwords only work with legacy authentication protocols such as IMAP and SMTP. ## Overview and considerations |
active-directory | Howto Mfaserver Dir Ldap | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfaserver-dir-ldap.md | Title: LDAP Authentication and Azure MFA Server - Azure Active Directory + Title: LDAP Authentication and Azure Multi-Factor Authentication Server - Azure Active Directory description: Deploying LDAP Authentication and Azure Multi-Factor Authentication Server. Previously updated : 07/11/2018 Last updated : 10/31/2022 -By default, the Azure Multi-Factor Authentication Server is configured to import or synchronize users from Active Directory. However, it can be configured to bind to different LDAP directories, such as an ADAM directory, or specific Active Directory domain controller. When connected to a directory via LDAP, the Azure Multi-Factor Authentication Server can act as an LDAP proxy to perform authentications. It also allows for the use of LDAP bind as a RADIUS target, for pre-authentication of users with IIS Authentication, or for primary authentication in the Azure MFA user portal. +By default, the Azure Multi-Factor Authentication Server is configured to import or synchronize users from Active Directory. However, it can be configured to bind to different LDAP directories, such as an ADAM directory, or a specific Active Directory domain controller. When connected to a directory via LDAP, the Azure Multi-Factor Authentication Server can act as an LDAP proxy to perform authentications. Azure Multi-Factor Authentication Server can also use LDAP bind as a RADIUS target to pre-authenticate IIS users, or for primary authentication in the Azure Multi-Factor Authentication user portal. To use Azure Multi-Factor Authentication as an LDAP proxy, insert the Azure Multi-Factor Authentication Server between the LDAP client (for example, VPN appliance, application) and the LDAP directory server. The Azure Multi-Factor Authentication Server must be configured to communicate with both the client servers and the LDAP directory. In this configuration, the Azure Multi-Factor Authentication Server accepts LDAP requests from client servers and applications and forwards them to the target LDAP directory server to validate the primary credentials. If the LDAP directory validates the primary credentials, Azure Multi-Factor Authentication performs a second identity verification and sends a response back to the LDAP client. The entire authentication succeeds only if both the LDAP server authentication and the second-step verification succeed. > [!IMPORTANT]-> As of July 1, 2019, Microsoft no longer offers MFA Server for new deployments. New customers that want to require multi-factor authentication (MFA) during sign-in events should use cloud-based Azure AD Multi-Factor Authentication. +> In September 2022, Microsoft announced deprecation of Azure Multi-Factor Authentication Server. Beginning September 30, 2024, Azure Multi-Factor Authentication Server deployments will no longer service multifactor authentication (MFA) requests, which could cause authentications to fail for your organization. To ensure uninterrupted authentication services and to remain in a supported state, organizations should [migrate their users' authentication data](how-to-migrate-mfa-server-to-azure-mfa-user-authentication.md) to the cloud-based Azure Multi-Factor Authentication service by using the latest Migration Utility included in the most recent [Azure Multi-Factor Authentication Server update](https://www.microsoft.com/download/details.aspx?id=55849).
For more information, see [Azure Multi-Factor Authentication Server Migration](how-to-migrate-mfa-server-to-azure-mfa.md). >-> To get started with cloud-based MFA, see [Tutorial: Secure user sign-in events with Azure AD Multi-Factor Authentication](tutorial-enable-azure-mfa.md). -> -> Existing customers that activated MFA Server before July 1, 2019 can download the latest version, future updates, and generate activation credentials as usual. +> To get started with cloud-based MFA, see [Tutorial: Secure user sign-in events with Azure Multi-Factor Authentication](tutorial-enable-azure-mfa.md). ## Configure LDAP authentication To configure LDAP authentication, install the Azure Multi-Factor Authentication 4. If you plan to use LDAPS from the client to the Azure Multi-Factor Authentication Server, a TLS/SSL certificate must be installed on the same server as MFA Server. Click **Browse** next to the SSL (TLS) certificate box, and select a certificate to use for the secure connection. 5. Click **Add**. 6. In the Add LDAP Client dialog box, enter the IP address of the appliance, server, or application that authenticates to the Server and an Application name (optional). The Application name appears in Azure Multi-Factor Authentication reports and may be displayed within SMS or Mobile App authentication messages.-7. Check the **Require Azure Multi-Factor Authentication user match** box if all users have been or will be imported into the Server and subject to two-step verification. If a significant number of users have not yet been imported into the Server and/or are exempt from two-step verification, leave the box unchecked. See the MFA Server help file for additional information on this feature. +7. Check the **Require Azure Multi-Factor Authentication user match** box if all users have been or will be imported into the Server and subject to two-step verification. If a significant number of users haven't yet been imported into the Server and/or are exempt from two-step verification, leave the box unchecked. See the MFA Server help file for additional information on this feature. -Repeat these steps to add additional LDAP clients. +Repeat these steps to add more LDAP clients. ### Configure the LDAP directory connection When the Azure Multi-Factor Authentication is configured to receive LDAP authentications 12. Click the **Company Settings** icon and select the **Username Resolution** tab. 13. If you're connecting to Active Directory from a domain-joined server, leave the **Use Windows security identifiers (SIDs) for matching usernames** radio button selected. Otherwise, select the **Use LDAP unique identifier attribute for matching usernames** radio button. -When the **Use LDAP unique identifier attribute for matching usernames** radio button is selected, the Azure Multi-Factor Authentication Server attempts to resolve each username to a unique identifier in the LDAP directory. An LDAP search is performed on the Username attributes defined in the Directory Integration -> Attributes tab. When a user authenticates, the username is resolved to the unique identifier in the LDAP directory. The unique identifier is used for matching the user in the Azure Multi-Factor Authentication data file. This allows for case-insensitive comparisons, and long and short username formats. +When the **Use LDAP unique identifier attribute for matching usernames** radio button is selected, the Azure Multi-Factor Authentication Server attempts to resolve each username to a unique identifier in the LDAP directory.
An LDAP search is performed on the Username attributes defined in the Directory Integration > Attributes tab. When a user authenticates, the username is resolved to the unique identifier in the LDAP directory. The unique identifier is used for matching the user in the Azure Multi-Factor Authentication data file. This allows for case-insensitive comparisons, and long and short username formats. After you complete these steps, the MFA Server listens on the configured ports for LDAP access requests from the configured clients, and acts as a proxy for those requests to the LDAP directory for authentication. After you complete these steps, the MFA Server listens on the configured ports f To configure the LDAP client, use the following guidelines: -* Configure your appliance, server, or application to authenticate via LDAP to the Azure Multi-Factor Authentication Server as though it were your LDAP directory. Use the same settings that you would normally use to connect directly to your LDAP directory, except for the server name or IP address, which will be that of the Azure Multi-Factor Authentication Server. -* Configure the LDAP timeout to 30-60 seconds so that there is time to validate the user's credentials with the LDAP directory, perform the second-step verification, receive their response, and respond to the LDAP access request. +* Configure your appliance, server, or application to authenticate via LDAP to the Azure Multi-Factor Authentication Server as though it were your LDAP directory. Use the same settings that you normally use to connect directly to your LDAP directory, but use the Azure Multi-Factor Authentication Server for the server name or IP address. +* Configure the LDAP timeout to 30-60 seconds to provide enough time to validate the user's credentials with the LDAP directory, perform the second-step verification, receive their response, and respond to the LDAP access request. * If using LDAPS, the appliance or server making the LDAP queries must trust the TLS/SSL certificate installed on the Azure Multi-Factor Authentication Server. |
active-directory | Howto Mfaserver Iis | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfaserver-iis.md | Title: IIS Authentication and Azure MFA Server - Azure Active Directory + Title: IIS Authentication and Azure Multi-Factor Authentication Server - Azure Active Directory description: Deploying IIS Authentication and Azure Multi-Factor Authentication Server. Previously updated : 07/11/2018 Last updated : 10/31/2022 -Use the IIS Authentication section of the Azure Multi-Factor Authentication (MFA) Server to enable and configure IIS authentication for integration with Microsoft IIS web applications. The Azure MFA Server installs a plug-in that can filter requests being made to the IIS web server to add Azure Multi-Factor Authentication. The IIS plug-in provides support for Form-Based Authentication and Integrated Windows HTTP Authentication. Trusted IPs can also be configured to exempt internal IP addresses from two-factor authentication. +Use the IIS Authentication section of the Azure Multi-Factor Authentication (MFA) Server to enable and configure IIS authentication for integration with Microsoft IIS web applications. The Azure Multi-Factor Authentication Server installs a plug-in that can filter requests being made to the IIS web server to add Azure Multi-Factor Authentication. The IIS plug-in provides support for Form-Based Authentication and Integrated Windows HTTP Authentication. Trusted IPs can also be configured to exempt internal IP addresses from two-factor authentication. > [!IMPORTANT]-> As of July 1, 2019, Microsoft no longer offers MFA Server for new deployments. New customers that want to require multi-factor authentication (MFA) during sign-in events should use cloud-based Azure AD Multi-Factor Authentication. -> -> To get started with cloud-based MFA, see [Tutorial: Secure user sign-in events with Azure AD Multi-Factor Authentication](tutorial-enable-azure-mfa.md). -> -> Existing customers that activated MFA Server before July 1, 2019 can download the latest version, future updates, and generate activation credentials as usual. +> In September 2022, Microsoft announced deprecation of Azure Multi-Factor Authentication Server. Beginning September 30, 2024, Azure Multi-Factor Authentication Server deployments will no longer service multifactor authentication (MFA) requests, which could cause authentications to fail for your organization. To ensure uninterrupted authentication services and to remain in a supported state, organizations should [migrate their users' authentication data](how-to-migrate-mfa-server-to-azure-mfa-user-authentication.md) to the cloud-based Azure Multi-Factor Authentication service by using the latest Migration Utility included in the most recent [Azure Multi-Factor Authentication Server update](https://www.microsoft.com/download/details.aspx?id=55849). For more information, see [Azure Multi-Factor Authentication Server Migration](how-to-migrate-mfa-server-to-azure-mfa.md). >+> To get started with cloud-based MFA, see [Tutorial: Secure user sign-in events with Azure Multi-Factor Authentication](tutorial-enable-azure-mfa.md). +>> > When you use cloud-based Azure Multi-Factor Authentication, there is no alternative to the IIS plugin provided by Azure Multi-Factor Authentication (MFA) Server. Instead, use Web Application Proxy (WAP) with Active Directory Federation Services (AD FS) or Azure Active Directory's Application Proxy.
To secure an IIS web application that uses form-based authentication, install th 2. Click the **Form-Based** tab. 3. Click **Add**. 4. To detect username, password and domain variables automatically, enter the Login URL (like `https://localhost/contoso/auth/login.aspx`) within the Auto-Configure Form-Based Website dialog box and click **OK**.-5. Check the **Require Multi-Factor Authentication user match** box if all users have been or will be imported into the Server and subject to multi-factor authentication. If a significant number of users have not yet been imported into the Server and/or will be exempt from multi-factor authentication, leave the box unchecked. -6. If the page variables cannot be detected automatically, click **Specify Manually** in the Auto-Configure Form-Based Website dialog box. +5. Check the **Require Multi-Factor Authentication user match** box if all users have been or will be imported into the Server and subject to multi-factor authentication. If a significant number of users haven't yet been imported into the Server and/or will be exempt from multi-factor authentication, leave the box unchecked. +6. If the page variables can't be detected automatically, click **Specify Manually** in the Auto-Configure Form-Based Website dialog box. 7. In the Add Form-Based Website dialog box, enter the URL to the login page in the Submit URL field and enter an Application name (optional). The Application name appears in Azure Multi-Factor Authentication reports and may be displayed within SMS or Mobile App authentication messages. 8. Select the correct Request format. This is set to **POST or GET** for most web applications. 9. Enter the Username variable, Password variable, and Domain variable (if it appears on the login page). To find the names of the input boxes, navigate to the login page in a web browser, right-click on the page, and select **View Source**.-10. Check the **Require Azure Multi-Factor Authentication user match** box if all users have been or will be imported into the Server and subject to multi-factor authentication. If a significant number of users have not yet been imported into the Server and/or will be exempt from multi-factor authentication, leave the box unchecked. +10. Check the **Require Azure Multi-Factor Authentication user match** box if all users have been or will be imported into the Server and subject to multi-factor authentication. If a significant number of users haven't yet been imported into the Server and/or will be exempt from multi-factor authentication, leave the box unchecked. 11. Click **Advanced** to review advanced settings, including: - Select a custom denial page file To secure an IIS web application that uses form-based authentication, install th ## Using integrated Windows authentication with Azure Multi-Factor Authentication Server -To secure an IIS web application that uses Integrated Windows HTTP authentication, install the Azure MFA Server on the IIS web server, then configure the Server with the following steps: +To secure an IIS web application that uses Integrated Windows HTTP authentication, install the Azure Multi-Factor Authentication Server on the IIS web server, then configure the Server with the following steps: 1. In the Azure Multi-Factor Authentication Server, click the IIS Authentication icon in the left menu. 2. Click the **HTTP** tab. 3. Click **Add**. 4.
In the Add Base URL dialog box, enter the URL for the website where HTTP authentication is performed (like `http://localhost/owa`) and provide an Application name (optional). The Application name appears in Azure Multi-Factor Authentication reports and may be displayed within SMS or Mobile App authentication messages.-5. Adjust the Idle timeout and Maximum session times if the default is not sufficient. -6. Check the **Require Multi-Factor Authentication user match** box if all users have been or will be imported into the Server and subject to multi-factor authentication. If a significant number of users have not yet been imported into the Server and/or will be exempt from multi-factor authentication, leave the box unchecked. +5. Adjust the Idle timeout and Maximum session times if the default isn't sufficient. +6. Check the **Require Multi-Factor Authentication user match** box if all users have been or will be imported into the Server and subject to multi-factor authentication. If a significant number of users haven't yet been imported into the Server and/or will be exempt from multi-factor authentication, leave the box unchecked. 7. Check the **Cookie cache** box if desired. 8. Click **OK**. To secure an IIS web application that uses Integrated Windows HTTP authenticatio After configuring the Form-Based or HTTP authentication URLs and settings, select the locations where the Azure Multi-Factor Authentication IIS plug-ins should be loaded and enabled in IIS. Use the following procedure: -1. If running on IIS 6, click the **ISAPI** tab. Select the website that the web application is running under (e.g. Default Web Site) to enable the Azure Multi-Factor Authentication ISAPI filter plug-in for that site. +1. If running on IIS 6, click the **ISAPI** tab. Select the website that the web application is running under (for example, Default Web Site) to enable the Azure Multi-Factor Authentication ISAPI filter plug-in for that site. 2. If running on IIS 7 or higher, click the **Native Module** tab. Select the server, websites, or applications to enable the IIS plug-in at the desired levels. 3. Click the **Enable IIS authentication** box at the top of the screen. Azure Multi-Factor Authentication is now securing the selected IIS application. Ensure that users have been imported into the Server. ## Trusted IPs -The Trusted IPs allows users to bypass Azure Multi-Factor Authentication for website requests originating from specific IP addresses or subnets. For example, you may want to exempt users from Azure Multi-Factor Authentication while logging in from the office. For this, you would specify the office subnet as a Trusted IPs entry. To configure Trusted IPs, use the following procedure: +The Trusted IPs feature allows users to bypass Azure Multi-Factor Authentication for website requests originating from specific IP addresses or subnets. For example, you may want to exempt users from Azure Multi-Factor Authentication while logging in from the office. In that case, you can specify the office subnet as a Trusted IPs entry. To configure Trusted IPs, use the following procedure: 1. In the IIS Authentication section, click the **Trusted IPs** tab. 2. Click **Add**. |
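As a hedged illustration of the form-based steps in this excerpt: the IIS plug-in intercepts the login form submission and triggers the second-factor check before the request reaches the application. The sketch below assumes a POST request format; the `username`, `password`, and `domain` field names are hypothetical placeholders that must match the variables configured on the login page, and the login URL is the sample one from the excerpt.

```http
POST https://localhost/contoso/auth/login.aspx HTTP/1.1
Content-Type: application/x-www-form-urlencoded

username=alice&password=P%40ssw0rd&domain=CONTOSO
```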
active-directory | Concept Conditional Access Conditions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-conditions.md | On Windows 7, iOS, Android, and macOS Azure AD identifies the device using a client certificate. #### Chrome support -For Chrome support in **Windows 10 Creators Update (version 1703)** or later, install the [Windows 10 Accounts](https://chrome.google.com/webstore/detail/windows-10-accounts/ppnbnpeolgkicgegkbkbjmhlideopiji) or [Office Online](https://chrome.google.com/webstore/detail/office/ndjpnladcallmjemlbaebfadecfhkepb) extensions. These extensions are required when a Conditional Access policy requires device-specific details. +For Chrome support in **Windows 10 Creators Update (version 1703)** or later, install the [Windows Accounts](https://chrome.google.com/webstore/detail/windows-accounts/ppnbnpeolgkicgegkbkbjmhlideopiji) or [Office](https://chrome.google.com/webstore/detail/office/ndjpnladcallmjemlbaebfadecfhkepb) extensions. These extensions are required when a Conditional Access policy requires device-specific details. To automatically deploy this extension to Chrome browsers, create the following registry key: |
active-directory | Access Tokens | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/access-tokens.md | Check out [Primary Refresh Tokens](../devices/concept-primary-refresh-token.md) ## Next steps - Learn about [`id_tokens` in Azure AD](id-tokens.md).-- Learn about permission and consent ( [v1.0](../azuread-dev/v1-permissions-consent.md), [v2.0](v2-permissions-and-consent.md)).+- Learn about permission and consent ( [v1.0](../azuread-dev/v1-permissions-consent.md), [v2.0](permissions-consent-overview.md)). |
active-directory | Active Directory V2 Protocols | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-v2-protocols.md | Your client app needs a way to trust the security tokens issued to it by the Microsoft identity platform. When you register your app in Azure AD, the Microsoft identity platform automatically assigns it some values, while others you configure based on the application's type. -Two the most commonly referenced app registration settings are: +Two of the most commonly referenced app registration settings are: * **Application (client) ID** - Also called _application ID_ and _client ID_, this value is assigned to your app by the Microsoft identity platform. The client ID uniquely identifies your app in the identity platform and is included in the security tokens the platform issues. * **Redirect URI** - The authorization server uses a redirect URI to direct the resource owner's *user-agent* (web browser, mobile app) to another destination after completing their interaction. For example, after the end-user authenticates with the authorization server. Not all client types use redirect URIs. |
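To make these two settings concrete, here's a minimal sketch of where they appear in an OAuth 2.0 authorization request to the Microsoft identity platform. The client ID and redirect URI are illustrative placeholders (the same sample values used elsewhere in these excerpts); line breaks are added for readability.

```http
GET https://login.microsoftonline.com/common/oauth2/v2.0/authorize?
client_id=6731de76-14a6-49ae-97bc-6eba6914391e
&response_type=code
&redirect_uri=http%3A%2F%2Flocalhost%2Fmyapp%2F
&scope=openid
&state=12345
```

After the user authenticates, the authorization server sends the response to the registered redirect URI, which is why that value must match the app registration exactly.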
active-directory | Application Consent Experience | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/application-consent-experience.md | Title: Azure AD app consent experiences description: Learn more about the Azure AD consent experiences to see how you can use it when managing and developing applications on Azure AD -+ - Previously updated : 04/18/2022-- Last updated : 11/01/2022++ -# Understanding Azure AD application consent experiences --Learn more about the Azure Active Directory (Azure AD) application consent user experience. So you can intelligently manage applications for your organization and/or develop applications with a more seamless consent experience. +# Consent experience for applications in Azure Active Directory -## Consent and permissions +In this article, you'll learn about the Azure Active Directory (Azure AD) application consent user experience. You'll then be able to intelligently manage applications for your organization and/or develop applications with a more seamless consent experience. Consent is the process of a user granting authorization to an application to access protected resources on their behalf. An admin or user can be asked for consent to allow access to their organization/individual data. The following diagram and table provide information about the building blocks of | 2 | Title | The title changes based on whether the users are going through the user or admin consent flow. In user consent flow, the title will be "Permissions requested" while in the admin consent flow the title will have an additional line "Accept for your organization". | | 3 | App logo | This image should help users have a visual cue of whether this app is the app they intended to access. This image is provided by application developers and the ownership of this image isn't validated. | | 4 | App name | This value should inform users which application is requesting access to their data. Note this name is provided by the developers and the ownership of this app name isn't validated.| | 5 | Publisher name and verification | The blue "verified" badge means that the app publisher has verified their identity using a Microsoft Partner Network account and has completed the verification process. If the app is publisher verified, the publisher name is displayed. If the app is not publisher verified, "Unverified" is displayed instead of a publisher name. For more information, read about [Publisher Verification](publisher-verification-overview.md). Selecting the publisher name displays more app info as available, such as the publisher name, publisher domain, date created, certification details, and reply URLs. | +| 5 | Publisher name and verification | The blue "verified" badge means that the app publisher has verified their identity using a Microsoft Partner Network account and has completed the verification process. If the app is publisher verified, the publisher name is displayed. If the app isn't publisher verified, "Unverified" is displayed instead of a publisher name. For more information, read about [Publisher Verification](publisher-verification-overview.md). Selecting the publisher name displays more app info as available, such as the publisher name, publisher domain, date created, certification details, and reply URLs.
| | 6 | Microsoft 365 Certification | The Microsoft 365 Certification logo means that an app has been vetted against controls derived from leading industry standard frameworks, and that strong security and compliance practices are in place to protect customer data. For more information, read about [Microsoft 365 Certification](/microsoft-365-app-certification/docs/enterprise-app-certification-guide).| | 7 | Publisher information | Displays whether the application is published by Microsoft. |-| 8 | Permissions | This list contains the permissions being requested by the client application. Users should always evaluate the types of permissions being requested to understand what data the client application will be authorized to access on their behalf if they accept. As an application developer it is best to request access, to the permissions with the least privilege. | +| 8 | Permissions | This list contains the permissions being requested by the client application. Users should always evaluate the types of permissions being requested to understand what data the client application will be authorized to access on their behalf if they accept. As an application developer it's best to request access to the permissions with the least privilege. | | 9 | Permission description | This value is provided by the service exposing the permissions. To see the permission descriptions, you must toggle the chevron next to the permission. | | 10 | https://myapps.microsoft.com | This is the link where users can review and remove any non-Microsoft applications that currently have access to their data. | | 11 | Report it here | This link is used to report a suspicious app if you don't trust the app, if you believe the app is impersonating another app, if you believe the app will misuse your data, or for some other reason. | -## App requires a permission within the user's scope of authority +## Common scenarios and consent experiences -A common consent scenario is that the user accesses an app which requires a permission set that is within the user's scope of authority. The user is directed to the user consent flow. +The following section describes the common scenarios and the expected consent experience for each of them. +### App requires a permission that the user has the right to grant -Admins will see an additional control on the traditional consent prompt that will allow them consent on behalf of the entire tenant. The control will be defaulted to off, so only when admins explicitly check the box will consent be granted on behalf of the entire tenant. As of today, this check box will only show for the Global Admin role, so Cloud Admin and App Admin will not see this checkbox. +In this consent scenario, the user accesses an app that requires a permission set that is within the user's scope of authority. The user is directed to the user consent flow. ++Admins will see an additional control on the traditional consent prompt that will allow them to give consent on behalf of the entire tenant. The control will be defaulted to off, so only when admins explicitly check the box will consent be granted on behalf of the entire tenant. The check box will only show for the Global Admin role, so Cloud Admin and App Admin won't see this checkbox. :::image type="content" source="./media/application-consent-experience/consent_prompt_1a.png" alt-text="Consent prompt for scenario 1a"::: Users will see the traditional consent prompt.
:::image type="content" source="./media/application-consent-experience/consent_prompt_1b.png" alt-text="Screenshot that shows the traditional consent prompt."::: -## App requires a permission outside of the user's scope of authority +### App requires a permission that the user has no right to grant -Another common consent scenario is that the user accesses an app which requires at least one permission that is outside the user's scope of authority. +In this consent scenario, the user accesses an app that requires at least one permission that is outside the user's scope of authority. Admins will see an additional control on the traditional consent prompt that will allow them consent on behalf of the entire tenant. :::image type="content" source="./media/application-consent-experience/consent_prompt_1a.png" alt-text="Consent prompt for scenario 1a"::: -Non-admin users will be blocked from granting consent to the application, and they will be told to ask their admin for access to the app. +Non-admin users will be blocked from granting consent to the application, and they'll be told to ask their admin for access to the app. If admin consent workflow is enabled in the user's tenant, non-admin users are able to submit a request for admin approval from the consent prompt. For more information on admin consent workflow, see [Admin consent workflow](../manage-apps/admin-consent-workflow-overview.md). :::image type="content" source="./media/application-consent-experience/consent_prompt_2b.png" alt-text="Screenshot of the consent prompt telling the user to ask an admin for access to the app."::: -## User is directed to the admin consent flow +### User is directed to the admin consent flow -Another common scenario is when the user navigates to or is directed to the admin consent flow. +In this consent scenario, the user navigates to or is directed to the admin consent flow. Admin users will see the admin consent prompt. The title and the permission descriptions changed on this prompt, the changes highlight the fact that accepting this prompt will grant the app access to the requested data on behalf of the entire tenant. :::image type="content" source="./media/application-consent-experience/consent_prompt_3a.png" alt-text="Consent prompt for scenario 3a"::: -Non-admin users will be blocked from granting consent to the application, and they will be told to ask their admin for access to the app. +Non-admin users will be blocked from granting consent to the application, and they'll be told to ask their admin for access to the app. :::image type="content" source="./media/application-consent-experience/consent_prompt_2b.png" alt-text="Screenshot of the consent prompt telling the user to ask an admin for access to the app."::: +### Admin consent through the Azure portal ++In this scenario, an administrator consents to all of the permissions that an application requests, which can include delegated permissions on behalf of all users in the tenant. The Administrator grants consent through the **API permissions** page of the application registration in the [Azure portal](https://portal.azure.com). ++ :::image type="content" source="./media/consent-framework/grant-consent.png" alt-text="Screenshot of explicit admin consent through the Azure portal." lightbox="./media/consent-framework/grant-consent.png"::: ++All users in that tenant won't see the consent dialog unless the application requires new permissions. 
To learn which administrator roles can consent to delegated permissions, see [Administrator role permissions in Azure AD](../roles/permissions-reference.md). ++ > [!IMPORTANT] + > Granting explicit consent using the **Grant permissions** button is currently required for single-page applications (SPA) that use MSAL.js. Otherwise, the application fails when the access token is requested. ++## Common issues +This section outlines the common issues with the consent experience and possible troubleshooting tips. ++- 403 error ++ - Is this a [delegated scenario](permissions-consent-overview.md)? What permissions does the user have? + - Are necessary permissions added to use the endpoint? + - Check the [token](https://jwt.ms/) to see if it has necessary claims to call the endpoint. + - What permissions have been consented to? Who consented? ++- User is unable to consent ++ - Check if the tenant admin has disabled user consent for your organization. + - Confirm if the permissions you're requesting are admin-restricted permissions. ++- User is still blocked even after admin has consented ++ - Check if [static permissions](consent-types-developer.md) are configured to be a superset of permissions requested dynamically. + - Check if user assignment is required for the app. ++## Troubleshoot known errors ++For troubleshooting steps, see [Unexpected error when performing consent to an application](../manage-apps/application-sign-in-unexpected-user-consent-error.md). ## Next steps - Get a step-by-step overview of [how the Azure AD consent framework implements consent](./quickstart-register-app.md). |
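As a sketch of the 403 troubleshooting steps above: a delegated call to Microsoft Graph succeeds only if the `scp` claim inside the bearer token contains a permission the endpoint accepts. The token below is a truncated placeholder; paste a real one into https://jwt.ms to inspect its claims.

```http
GET https://graph.microsoft.com/v1.0/me/messages
Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiJ9...
```

If the decoded token's `scp` claim lacks a permission such as `Mail.Read`, Graph returns 403 even though sign-in succeeded; that points to a missing or unconsented permission rather than an authentication problem.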
active-directory | Consent Framework | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/consent-framework.md | - Title: Microsoft identity platform consent framework -description: Learn about the consent framework in the Microsoft identity platform and how it applies to multi-tenant applications. -------- Previously updated : 03/29/2022------# Microsoft identity platform consent framework --Multi-tenant applications allow sign-ins by user accounts from Azure AD tenants other than the tenant in which the app was initially registered. The Microsoft identity platform consent framework enables a tenant administrator or user in these other tenants to consent to (or deny) an application's request for permission to access their resources. --For example, perhaps a web application requires read-only access to a user's calendar in Microsoft 365. It's the identity platform's consent framework that enables the prompt asking the user to consent to the app's request for permission to read their calendar. If the user consents, the application is able to call the Microsoft Graph API on their behalf and get their calendar data. --## Consent experience - an example --The following steps show you how the consent experience works for both the application developer and the user. --1. Assume you have a web client application that needs to request specific permissions to access a resource/API. You'll learn how to do this configuration in the next section, but essentially the Azure portal is used to declare permission requests at configuration time. Like other configuration settings, they become part of the application's Azure AD registration: -- :::image type="content" source="./media/consent-framework/permissions.png" alt-text="Permissions to other applications" lightbox="./media/consent-framework/permissions.png"::: --1. Consider that your application's permissions have been updated, the application is running, and a user is about to use it for the first time. First, the application needs to obtain an authorization code from Azure AD's `/authorize` endpoint. The authorization code can then be used to acquire a new access and refresh token. --1. If the user is not already authenticated, Azure AD's `/authorize` endpoint prompts the user to sign in. -- :::image type="content" source="./media/consent-framework/usersignin.png" alt-text="User or administrator sign in to Azure AD"::: --1. After the user has signed in, Azure AD will determine if the user needs to be shown a consent page. This determination is based on whether the user (or their organization's administrator) has already granted the application consent. If consent has not already been granted, Azure AD prompts the user for consent and displays the required permissions it needs to function. The set of permissions that are displayed in the consent dialog match the ones selected in the **Delegated permissions** in the Azure portal. -- :::image type="content" source="./media/consent-framework/consent.png" alt-text="Shows an example of permissions displayed in the consent dialog"::: --1. After the user grants consent, an authorization code is returned to your application, which is redeemed to acquire an access token and refresh token. For more information about this flow, see [OAuth 2.0 authorization code flow](v2-oauth2-auth-code-flow.md). --1. As an administrator, you can also consent to an application's delegated permissions on behalf of all the users in your tenant.
Administrative consent prevents the consent dialog from appearing for every user in the tenant, and can be done in the [Azure portal](https://portal.azure.com) by users with the administrator role. To learn which administrator roles can consent to delegated permissions, see [Administrator role permissions in Azure AD](../roles/permissions-reference.md). -- **To consent to an app's delegated permissions** -- 1. Go to the **API permissions** page for your application - 1. Click on the **Grant admin consent** button. -- :::image type="content" source="./media/consent-framework/grant-consent.png" alt-text="Grant permissions for explicit admin consent" lightbox="./media/consent-framework/grant-consent.png"::: -- > [!IMPORTANT] - > Granting explicit consent using the **Grant permissions** button is currently required for single-page applications (SPA) that use MSAL.js. Otherwise, the application fails when the access token is requested. --## Next steps --See [how to convert an app to multi-tenant](howto-convert-app-to-be-multi-tenant.md) |
active-directory | Consent Types Developer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/consent-types-developer.md | + + Title: Microsoft identity platform developers' guide to requesting permissions through consent +description: Learn how developers can request permissions through consent in the Microsoft identity platform endpoint. ++++++++ Last updated : 11/01/2022++++# Requesting permissions through consent +++Applications in the Microsoft identity platform rely on consent in order to gain access to necessary resources or APIs. Different types of consent are better for different application scenarios. Choosing the best approach to consent for your app will help it be more successful with users and organizations. ++In this article, you'll learn about the different types of consent and how to request permissions for your application through consent. ++## Static user consent ++In the static user consent scenario, you must specify all the permissions the app needs in its configuration in the Azure portal. If the user (or administrator, as appropriate) hasn't granted consent for this app, then the Microsoft identity platform will prompt the user to provide consent at this time. ++Static permissions also enable administrators to consent on behalf of all users in the organization. ++While relying on static consent and a single permissions list keeps the code nice and simple, it also means that your app will request all of the permissions it might ever need up front. This can discourage users and admins from approving your app's access request. ++## Incremental and dynamic user consent ++With the Microsoft identity platform endpoint, you can ignore the static permissions defined in the application registration information in the Azure portal. Instead, you can request permissions incrementally. You can ask for a bare minimum set of permissions upfront and request more over time as the customer uses additional application features. To do so, you can specify the scopes your application needs at any time by including the new scopes in the `scope` parameter when [requesting an access token](#requesting-individual-user-consent) - without the need to pre-define them in the application registration information. If the user hasn't yet consented to new scopes added to the request, they'll be prompted to consent only to the new permissions. Incremental, or dynamic consent, only applies to delegated permissions and not to application permissions. ++Allowing an application to request permissions dynamically through the `scope` parameter gives developers full control over the user's experience. You can also front load your consent experience and ask for all permissions in one initial authorization request. If your application requires a large number of permissions, you can gather those permissions from the user incrementally as they try to use certain features of the application over time. ++> [!IMPORTANT] +> Dynamic consent can be convenient, but presents a big challenge for permissions that require admin consent. The admin consent experience in the **App registrations** and **Enterprise applications** blades in the portal doesn't know about those dynamic permissions at consent time. We recommend that a developer list all the admin privileged permissions that are needed by the application in the portal. This enables tenant admins to consent on behalf of all their users in the portal, once.
Users won't need to go through the consent experience for those permissions on sign in. The alternative is to use dynamic consent for those permissions. To grant admin consent, an individual admin signs in to the app, triggers a consent prompt for the appropriate permissions, and selects **consent for my entire org** in the consent dialog. ++## Requesting individual user consent ++In an [OpenID Connect or OAuth 2.0](active-directory-v2-protocols.md) authorization request, an application can request the permissions it needs by using the `scope` query parameter. For example, when a user signs in to an app, the application sends a request like the following example. (Line breaks are added for legibility). ++```HTTP +GET https://login.microsoftonline.com/common/oauth2/v2.0/authorize? +client_id=6731de76-14a6-49ae-97bc-6eba6914391e +&response_type=code +&redirect_uri=http%3A%2F%2Flocalhost%2Fmyapp%2F +&response_mode=query +&scope= +https%3A%2F%2Fgraph.microsoft.com%2Fcalendars.read%20 +https%3A%2F%2Fgraph.microsoft.com%2Fmail.send +&state=12345 +``` ++The `scope` parameter is a space-separated list of delegated permissions that the application is requesting. Each permission is indicated by appending the permission value to the resource's identifier (the application ID URI). In the request example, the application needs permission to read the user's calendar and send mail as the user. ++After the user enters their credentials, the Microsoft identity platform checks for a matching record of *user consent*. If the user hasn't consented to any of the requested permissions in the past, and if the administrator hasn't consented to these permissions on behalf of the entire organization, the Microsoft identity platform asks the user to grant the requested permissions. +++In the following example, the `offline_access` ("Maintain access to data you have given it access to") permission and `User.Read` ("Sign you in and read your profile") permission are automatically included in the initial consent to an application. These permissions are required for proper application functionality. The `offline_access` permission gives the application access to refresh tokens that are critical for native apps and web apps. The `User.Read` permission gives access to the `sub` claim. It allows the client or application to correctly identify the user over time and access rudimentary user information. +++When the user approves the permission request, consent is recorded. The user doesn't have to consent again when they later sign in to the application. ++## Requesting consent for an entire tenant through admin consent ++Requesting consent for an entire tenant requires admin consent. Admin consent done on behalf of an organization requires the static permissions registered for the app. Set those permissions in the app registration portal if you need an admin to give consent on behalf of the entire organization. ++### Admin consent for delegated permissions ++When your application requests [delegated permissions that require admin consent](scopes-oidc.md#admin-restricted-permissions), the user receives an error message that says they're unauthorized to consent to your app's permissions. The user is required to ask their admin for access to the app. If the admin grants consent for the entire tenant, the organization's users don't see a consent page for the application unless the previously granted permissions are revoked or the application requests a new permission incrementally.
Administrators using the same application will see the admin consent prompt. The admin consent prompt provides a checkbox that allows them to grant the application access to the requested data on behalf of the users for the entire tenant. For more information on the user and admin consent experience, see [Application consent experience](application-consent-experience.md). ++Examples of delegated permissions for Microsoft Graph that require admin consent are: ++- Read all users' full profiles by using User.Read.All +- Write data to an organization's directory by using Directory.ReadWrite.All +- Read all groups in an organization's directory by using Groups.Read.All ++To view the full list of Microsoft Graph permissions, see [Microsoft Graph permissions reference](/graph/permissions-reference). ++You can also configure permissions on your own resources to require admin consent. For more information on how to add scopes that require admin consent, see [Add a scope that requires admin consent](quickstart-configure-app-expose-web-apis.md#add-a-scope-requiring-admin-consent). ++Some organizations may change the default user consent policy for the tenant. When your application requests access to permissions, they're evaluated against these policies. The user may need to request admin consent even when not required by default. To learn how administrators manage consent policies for applications, see [Manage app consent policies](../manage-apps/manage-app-consent-policies.md). ++>[!NOTE] +>In requests to the authorization, token or consent endpoints for the Microsoft identity platform, if the resource identifier is omitted in the scope parameter, the resource is assumed to be Microsoft Graph. For example, scope=User.Read is equivalent to `https://graph.microsoft.com/User.Read`. ++### Admin consent for application permissions ++Application permissions always require admin consent. Application permissions don't have a user context and the consent grant isn't done on behalf of any specific user. Instead, the client application is granted permissions directly; these types of permissions are used only by daemon services and other non-interactive applications that run in the background. Administrators need to configure the permissions upfront and [grant admin consent](../manage-apps/grant-admin-consent.md) through the Azure portal. ++### Admin consent for multi-tenant applications ++If the application requesting the permission is a multi-tenant application, its application registration exists only in the tenant where it was created, so permissions can't be configured in the local tenant. If the application requests permissions that require admin consent, the administrator needs to consent on behalf of the users. To consent to these permissions, the administrators need to log in to the application themselves, so the admin consent sign-in experience is triggered. To learn how to set up the admin consent experience for multi-tenant applications, see [Enable multi-tenant log-ins](howto-convert-app-to-be-multi-tenant.md#understand-user-and-admin-consent-and-make-appropriate-code-changes). ++An administrator can grant consent for an application with the following options. ++### Recommended: Sign the user into your app ++Typically, when you build an application that requires admin consent, the application needs a page or view in which the admin can approve the app's permissions. This page can be: ++- Part of the app's sign-up flow. +- Part of the app's settings. +- A dedicated "connect" flow.
In many cases, it makes sense for the application to show the "connect" view only after a user has signed in with a work Microsoft account or school Microsoft account. ++When you sign the user into your app, you can identify the organization to which the admin belongs before you ask them to approve the necessary permissions. Although this step isn't strictly necessary, it can help you create a more intuitive experience for your organizational users. ++To sign the user in, follow the [Microsoft identity platform protocol tutorials](active-directory-v2-protocols.md). ++### Request the permissions in the app registration portal ++In the app registration portal, applications can list the permissions they require, including both delegated permissions and application permissions. This setup allows the use of the `.default` scope and the Azure portal's **Grant admin consent** option. ++In general, the permissions should be statically defined for a given application. They should be a superset of the permissions that the application will request dynamically or incrementally. ++> [!NOTE] +>Application permissions can be requested only through the use of [`.default`](scopes-oidc.md#the-default-scope). So if your application needs application permissions, make sure they're listed in the app registration portal. ++To configure the list of statically requested permissions for an application: ++1. Go to your application in the <a href="https://go.microsoft.com/fwlink/?linkid=2083908" target="_blank">Azure portal - App registrations</a> quickstart experience. +1. Select an application, or [create an app](quickstart-register-app.md) if you haven't already. +1. On the application's **Overview** page, under **Manage**, select **API Permissions** > **Add a permission**. +1. Select **Microsoft Graph** from the list of available APIs. Then add the permissions that your application requires. +1. Select **Add Permissions**. ++### Successful response ++If the admin approves the permissions for your app, the successful response looks like this: ++```HTTP +GET http://localhost/myapp/permissions?tenant=a8990e1f-ff32-408a-9f8e-78d3b9139b95&state=12345&admin_consent=True +``` ++| Parameter | Description | +| | | +| `tenant` | The directory tenant that granted your application the permissions it requested, in GUID format. | +| `state` | A value included in the request that also will be returned in the token response. It can be a string of any content you want. The state is used to encode information about the user's state in the application before the authentication request occurred, such as the page or view they were on. | +| `admin_consent` | Will be set to `True`. | ++After you've received a successful response from the admin consent endpoint, your application has gained the permissions it requested. Next, you can request a token for the resource you want. +### Error response ++If the admin doesn't approve the permissions for your app, the failed response looks like this: ++```HTTP +GET http://localhost/myapp/permissions?error=permission_denied&error_description=The+admin+canceled+the+request +``` ++| Parameter | Description | +| | | +| `error` | An error code string that can be used to classify types of errors that occur. It can also be used to react to errors. | +| `error_description` | A specific error message that can help a developer identify the root cause of an error.
| ++## Using permissions after consent ++After the user consents to permissions for your app, your application can acquire access tokens that represent the app's permission to access a resource in some capacity. An access token can be used only for a single resource. But encoded inside the access token is every permission that your application has been granted for that resource. To acquire an access token, your application can make a request to the Microsoft identity platform token endpoint, like this: ++```HTTP +POST /common/oauth2/v2.0/token HTTP/1.1 +Host: login.microsoftonline.com +Content-Type: application/x-www-form-urlencoded ++grant_type=authorization_code +&client_id=6731de76-14a6-49ae-97bc-6eba6914391e +&scope=https%3A%2F%2Fgraph.microsoft.com%2FMail.Read%20https%3A%2F%2Fgraph.microsoft.com%2Fmail.send +&code=AwABAAAAvPM1KaPlrEqdFSBzjqfTGBCmLdgfSTLEMPGYuNHSUYBrq... +&redirect_uri=https%3A%2F%2Flocalhost%2Fmyapp +&client_secret=zc53fwe80980293klaj9823 +``` ++The `client_secret` parameter is required only for web apps. You can use the resulting access token in HTTP requests to the resource. It reliably indicates to the resource that your application has the proper permission to do a specific task. ++For more information about the OAuth 2.0 protocol and how to get access tokens, see the [Microsoft identity platform endpoint protocol reference](active-directory-v2-protocols.md). ++## Next steps ++- [Consent experience](application-consent-experience.md) +- [ID tokens](id-tokens.md) +- [Access tokens](access-tokens.md) |
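For context, the success and error responses shown in this excerpt are redirects from the admin consent endpoint. This is a sketch of the request that produces them, following the documented v2.0 admin consent URL shape, with the same placeholder client ID and redirect URI used above (line breaks added for readability):

```http
GET https://login.microsoftonline.com/{tenant}/v2.0/adminconsent?
client_id=6731de76-14a6-49ae-97bc-6eba6914391e
&state=12345
&redirect_uri=http%3A%2F%2Flocalhost%2Fmyapp%2Fpermissions
&scope=https%3A%2F%2Fgraph.microsoft.com%2Fcalendars.read%20https%3A%2F%2Fgraph.microsoft.com%2Fmail.send
```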
active-directory | Delegated Access Primer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/delegated-access-primer.md | + + Title: Microsoft identity platform delegated access scenario +description: Learn about delegated access in the Microsoft identity platform endpoint. ++++++++ Last updated : 11/01/2022+++++# Understanding delegated access ++When a user signs into an app and uses it to access some other resource, like Microsoft Graph, the app will first need to ask for permission to access this resource on the user's behalf. This common scenario is called delegated access. ++> [!VIDEO https://learn-video.azurefd.net/vod/player?show=one-dev-minute&ep=how-do-delegated-permissions-work] ++## Why should I use delegated access? ++People frequently use different applications to access their data from cloud services. For example, someone might want to use a favorite PDF reader application to view files stored in their OneDrive. Another example can be a company's line-of-business application that might retrieve shared information about their coworkers so they can easily choose reviewers for a request. In such cases, the client application, the PDF reader, or the company's request approval tool needs to be authorized to access this data on behalf of the user who signed into the application. ++Use delegated access whenever you want to let a signed-in user work with their own resources or resources they can access. Whether it's an admin setting up policies for their entire organization or a user deleting an email in their inbox, all scenarios involving user actions should use delegated access. ++In contrast, delegated access is usually a poor choice for scenarios that must run without a signed-in user, like automation. It may also be a poor choice for scenarios that involve accessing many users' resources, like data loss prevention or backups. Consider using [application-only access](permissions-consent-overview.md) for these types of operations. ++## Requesting scopes as a client app ++Your app will need to ask the user to grant a specific scope, or set of scopes, for the resource app you want to access. Scopes may also be referred to as delegated permissions. These scopes describe which resources and operations your app wants to perform on the user's behalf. For example, if you want your app to show the user a list of recently received mail messages and chat messages, you might ask the user to consent to the Microsoft Graph `Mail.Read` and `Chat.Read` scopes. ++Once your app has requested a scope, a user or admin will need to grant the requested access. Consumer users with Microsoft Accounts, like Outlook.com or Xbox Live accounts, can always grant scopes for themselves. Organizational users with Azure AD accounts may or may not be able to grant scopes, depending on their organization's settings. If an organizational user can't consent to scopes directly, they'll need to ask their organization's administrator to consent for them. ++Always follow the principle of least privilege: you should never request scopes that your app doesn't need. This principle helps limit the security risk if your app is compromised and makes it easier for administrators to grant your app access. For example, if your app only needs to list the chats a user belongs to but doesn't need to show the chat messages themselves, you should request the more limited Microsoft Graph `Chat.ReadBasic` scope instead of `Chat.Read`.
For more information about OpenID scopes, see [OpenID scopes](scopes-oidc.md). ++## Designing and publishing scopes for a resource service ++If you're building an API and want to allow delegated access on behalf of users, you'll need to create scopes that other apps can request. These scopes should describe the actions or resources available to the client. You should consider developer scenarios when designing your scopes. +++## How does delegated access work? ++The most important thing to remember about delegated access is that both your client app and the signed-in user need to be properly authorized. Granting a scope isn't enough. If either the client app doesn't have the right scope or the user doesn't have sufficient rights to read or modify the resource, then the call will fail. ++- **Client app authorization** - Client apps are authorized by granting scopes. When a client app is granted a scope by a user or admin to access some resource, that grant will be recorded in Azure AD. All delegated access tokens that are requested by the client to access the resource on behalf of the relevant user will then contain those scopes' claim values in the `scp` claim. The resource app checks this claim to determine whether the client app has been granted the correct scope for the call. +- **User authorization** - Users are authorized by the resource you're calling. Resource apps may use one or more systems for user authorization, such as [role-based access control](custom-rbac-for-developers.md), ownership/membership relationships, access control lists, or other checks. For example, Azure AD checks that a user has been assigned to an app management or general admin role before allowing them to delete an organization's applications, but also allows all users to delete applications that they own. Similarly, the SharePoint Online service checks that a user has appropriate owner or reader rights over a file before allowing that user to open it. ++## Delegated access example - OneDrive via Microsoft Graph ++Consider the following example: ++Alice wants to use a client app to open a file protected by a resource API, Microsoft Graph. For user authorization, the OneDrive service will check whether the file is stored in Alice's drive. If it's stored in another user's drive, then OneDrive will deny Alice's request as unauthorized, since Alice doesn't have the right to read other users' drives. ++For client app authorization, OneDrive will check whether the client making the call has been granted the `Files.Read` scope on behalf of the signed-in user. In this case, the signed-in user is Alice. If `Files.Read` hasn't been granted to the app for Alice, then OneDrive will also fail the request. ++| GET /drives/{id}/files/{id} | Client app granted `Files.Read` scope for Alice | Client app not granted `Files.Read` scope for Alice | +| -- | -- | -- | +| The document is in Alice's OneDrive. | 200 - Access granted. | 403 - Unauthorized. Alice (or her admin) hasn't allowed this client to read her files. | +| The document is in another user's OneDrive*. | 403 - Unauthorized. Alice doesn't have rights to read this file. Even though the client has been granted `Files.Read`, it should be denied when acting on Alice's behalf. | 403 - Unauthorized. Alice doesn't have rights to read this file, and the client isn't allowed to read files she has access to either. | ++The example given is simplified to illustrate delegated authorization. 
The production OneDrive service supports many other access scenarios, such as shared files. ++## Next steps ++- [OpenID Connect scopes](scopes-oidc.md) +- [RBAC roles](custom-rbac-for-developers.md) +- [Microsoft Graph permissions reference](/graph/permissions-reference) |
active-directory | Howto Add App Roles In Azure Ad Apps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-add-app-roles-in-azure-ad-apps.md | Role-based access control (RBAC) is a popular mechanism to enforce authorization. By using RBAC with application role and role claims, developers can securely enforce authorization in their apps with less effort. -Another approach is to use Azure Active Directory (Azure AD) groups and group claims as shown in the [active-directory-aspnetcore-webapp-openidconnect-v2](https://aka.ms/groupssample) code sample on GitHub. Azure AD groups and application roles aren't mutually exclusive; they can be used in tandem to provide even finer-grained access control. +Another approach is to use Azure Active Directory (Azure AD) groups and group claims as shown in the [active-directory-aspnetcore-webapp-openidconnect-v2](https://aka.ms/groupssample) code sample on GitHub. Azure AD groups and application roles aren't mutually exclusive; they can be used together to provide even finer-grained access control. ## Declare roles for an application -You define app roles by using the [Azure portal](https://portal.azure.com) during the [app registration process](quickstart-register-app.md). App roles are defined on an application registration representing a service, app or API. When a user signs in to the application, Azure AD emits a `roles` claim for each role that the user or service principal has been granted individually to the user and the user's group memberships. This can be used to implement claim-based authorization. App roles can be assigned [to a user or a group of users](../manage-apps/add-application-portal-assign-users.md). App roles can also be assigned to the service principal for another application, or [to the service principal for a managed identity](../managed-identities-azure-resources/how-to-assign-app-role-managed-identity-powershell.md). +You define app roles by using the [Azure portal](https://portal.azure.com) during the [app registration process](quickstart-register-app.md). App roles are defined on an application registration representing a service, app, or API. When a user signs in to the application, Azure AD emits a `roles` claim for each role that the user or service principal has been granted. This can be used to implement claim-based authorization. App roles can be assigned [to a user or a group of users](../manage-apps/add-application-portal-assign-users.md). App roles can also be assigned to the service principal for another application, or [to the service principal for a managed identity](../managed-identities-azure-resources/how-to-assign-app-role-managed-identity-powershell.md). Currently, if you add a service principal to a group, and then assign an app role to that group, Azure AD doesn't add the `roles` claim to tokens it issues. -App roles are declared using the app roles by using [App roles UI](#app-roles-ui) in the Azure portal: +App roles are declared using the App roles UI in the Azure portal: The number of roles you add counts toward application manifest limits enforced by Azure AD. For information about these limits, see the [Manifest limits](./reference-app-manifest.md#manifest-limits) section of [Azure Active Directory app manifest reference](reference-app-manifest.md). 
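Beyond the portal UI, app roles can also be managed programmatically through the `appRoles` collection on the Microsoft Graph application object. The following request is an illustrative sketch: the application object ID and role GUID are placeholders, and a PATCH replaces the whole `appRoles` collection, so include any existing roles you want to keep:

```http
PATCH https://graph.microsoft.com/v1.0/applications/{id}
Content-Type: application/json

{
  "appRoles": [
    {
      "id": "aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb",
      "allowedMemberTypes": [ "User" ],
      "displayName": "Writer",
      "description": "Writers can create and edit content.",
      "value": "Writer",
      "isEnabled": true
    }
  ]
}
```

The `value` property is what appears in the `roles` claim of tokens issued for the app.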
The **Status** column should reflect that consent has been **Granted for \<tenan If you're implementing app role business logic that signs in the users in your application scenario, first define the app roles in **App registrations**. Then, an admin assigns them to users and groups in the **Enterprise applications** pane. These assigned app roles are included with any token that's issued for your application, either access tokens when your app is the API being called by an app or ID tokens when your app is signing in a user. -If you're implementing app role business logic in an app-calling-API scenario, you have two app registrations. One app registration is for the app, and a second app registration is for the API. In this case, define the app roles and assign them to the user or group in the app registration of the API. When the user authenticates with the app and requests an access token to call the API, a roles claim is included in the access token. Your next step is to add code to your web API to check for those roles when the API is called. +If you're implementing app role business logic in an app-calling-API scenario, you have two app registrations. One app registration is for the app, and a second app registration is for the API. In this case, define the app roles and assign them to the user or group in the app registration of the API. When the user authenticates with the app and requests an ID token to call the API, a roles claim is included in the ID token. Your next step is to add code to your web API to check for those roles when the API is called. To learn how to add authorization to your web API, see [Protected web API: Verify scopes and app roles](scenario-protected-web-api-verification-scope-app-roles.md). |
active-directory | Msal Logging Dotnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-logging-dotnet.md | +- `IIdentityLogger` is the logging implementation used by MSAL.NET to produce logs for debugging or health check purposes. Logs are only sent if logging is enabled. - `Level` enables you to decide which level of logging you want. Setting it to Errors will log only errors.-- `PiiLoggingEnabled` enables you to log personal and organizational data (PII) if set to true. By default, this is set to false, so that your application doesn't log personal data.+- `PiiLoggingEnabled` enables you to log personal and organizational data (PII) if set to true. By default, this parameter is set to false, so that your application doesn't log personal data. - `LogCallback` is set to a delegate that does the logging. If `PiiLoggingEnabled` is true, this method will receive messages that can have PII, in which case the `containsPii` flag will be set to true. - `DefaultLoggingEnabled` enables the default logging for the platform. By default it's false. If you set it to true, it uses Event Tracing in Desktop/UWP applications, NSLog on iOS, and logcat on Android. -```csharp -class Program +### IIdentityLogger Interface +```CSharp +namespace Microsoft.IdentityModel.Abstractions {- private static void Log(LogLevel level, string message, bool containsPii) - { - if (containsPii) - { - Console.ForegroundColor = ConsoleColor.Red; - } - Console.WriteLine($"{level} {message}"); - Console.ResetColor(); - } -- static void Main(string[] args) - { - var scopes = new string[] { "User.Read" }; -- var application = PublicClientApplicationBuilder.Create("<clientID>") - .WithLogging(Log, LogLevel.Info, true) - .Build(); -- AuthenticationResult result = application.AcquireTokenInteractive(scopes) - .ExecuteAsync().Result; - } + public interface IIdentityLogger + { + // + // Summary: + // Checks to see if logging is enabled at given eventLogLevel. + // + // Parameters: + // eventLogLevel: + // Log level of a message. + bool IsEnabled(EventLogLevel eventLogLevel); ++ // + // Summary: + // Writes a log entry. + // + // Parameters: + // entry: + // Defines a structured message to be logged at the provided Microsoft.IdentityModel.Abstractions.LogEntry.EventLogLevel. + void Log(LogEntry entry); + } } ``` +> [!NOTE] +> Partner libraries (`Microsoft.Identity.Web`, `Microsoft.IdentityModel`) provide implementations of this interface already for various environments (in particular ASP.NET Core). ++### IIdentityLogger Implementation ++The following code snippets are examples of such an implementation. If you use .NET Core configuration, environment-variable-driven log levels are available in addition to the configuration-file-based log levels. ++#### Log level from configuration file ++It's highly recommended to configure your code to use a configuration file in your environment to set the log level, because this enables your code to change the MSAL logging level without rebuilding or restarting the application. This is critical for diagnostic purposes, enabling you to quickly gather the required logs from an application that is deployed and in production. Verbose logging can be costly, so it's best to use the *Information* level by default and enable verbose logging when an issue is encountered. 
[See JSON configuration provider](https://docs.microsoft.com/aspnet/core/fundamentals/configuration#json-configuration-provider) for an example of how to load data from a configuration file without restarting the application. ++#### Log Level as Environment Variable ++Another option we recommend is to configure your code to use an environment variable on the machine to set the log level, because it enables your code to change the MSAL logging level without rebuilding the application. This is critical for diagnostic purposes, enabling you to quickly gather the required logs from an application that is deployed and in production. ++See [EventLogLevel](https://github.com/AzureAD/azure-activedirectory-identitymodel-extensions-for-dotnet/blob/dev/src/Microsoft.IdentityModel.Abstractions/EventLogLevel.cs) for details on the available log levels. ++Example: ++```CSharp + class MyIdentityLogger : IIdentityLogger + { + public EventLogLevel MinLogLevel { get; } ++ public MyIdentityLogger() + { + //Try to pull the log level from an environment variable + var msalEnvLogLevel = Environment.GetEnvironmentVariable("MSAL_LOG_LEVEL"); ++ if (Enum.TryParse(msalEnvLogLevel, out EventLogLevel msalLogLevel)) + { + MinLogLevel = msalLogLevel; + } + else + { + //Recommended default log level + MinLogLevel = EventLogLevel.Informational; + } + } ++ public bool IsEnabled(EventLogLevel eventLogLevel) + { + return eventLogLevel <= MinLogLevel; + } ++ public void Log(LogEntry entry) + { + //Log Message here: + Console.WriteLine(entry.Message); + } + } +``` ++Using `MyIdentityLogger`: +```CSharp + MyIdentityLogger myLogger = new MyIdentityLogger(); + bool piiLogging = false; // Set to true only if logging personal data (PII) is acceptable ++ var app = ConfidentialClientApplicationBuilder + .Create(TestConstants.ClientId) + .WithClientSecret("secret") + .WithExperimentalFeatures() //Currently an experimental feature, will be removed soon + .WithLogging(myLogger, piiLogging) + .Build(); +``` + > [!TIP] > See the [MSAL.NET wiki](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/wiki) for samples of MSAL.NET logging and more. |
active-directory | Permissions Consent Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/permissions-consent-overview.md | Title: Overview of permissions and consent in the Microsoft identity platform -description: Learn about the foundational concepts and scenarios around consent and permissions in the Microsoft identity platform +description: Learn the foundational concepts and scenarios around consent and permissions in the Microsoft identity platform To _access_ a protected resource like email or calendar data, your application n ## Access scenarios -As an application developer, you must identify how your application will access data. The application can use delegated access, acting on behalf of a signed-in user, or direct access, acting only as the application's own identity. +As an application developer, you must identify how your application will access data. The application can use delegated access, acting on behalf of a signed-in user, or app-only access, acting only as the application's own identity.  ### Delegated access (access on behalf of a user) -In this access scenario, a user has signed into a client application. The client application accesses the resource on behalf of the user. Delegated access requires delegated permissions. Both the client and the user must be authorized separately to make the request. +In this access scenario, a user has signed into a client application. The client application accesses the resource on behalf of the user. Delegated access requires delegated permissions. Both the client and the user must be authorized separately to make the request. For more information about the delegated access scenario, see [delegated access scenario](delegated-access-primer.md). -For the client app, the correct delegated permissions must be granted. Delegated permissions can also be referred to as scopes. Scopes are permissions of a given resource that the client application exercises on behalf of a user. They're strings that represent what the application wants to do on behalf of the user. For more information about scopes, see [scopes and permissions](v2-permissions-and-consent.md#scopes-and-permissions). +For the client app, the correct delegated permissions must be granted. Delegated permissions can also be referred to as scopes. Scopes are permissions for a given resource that represent what a client application can access on behalf of the user. For more information about scopes, see [scopes and permissions](v2-permissions-and-consent.md#scopes-and-permissions). -For the user, the authorization relies on the privileges that the user has been granted for them to access the resource. For example, the user could be authorized to access directory resources by [Azure Active Directory (Azure AD) role-based access control (RBAC)](../roles/custom-overview.md) or to access mail and calendar resources by [Exchange Online RBAC](/exchange/permissions-exo/permissions-exo). +For the user, the authorization relies on the privileges that the user has been granted to access the resource. For example, the user could be authorized to access directory resources by [Azure Active Directory (Azure AD) role-based access control (RBAC)](../roles/custom-overview.md) or to access mail and calendar resources by Exchange Online RBAC. For more information on RBAC for applications, see [RBAC for applications](custom-rbac-for-developers.md). 
-### Direct access (App-only access) +### App-only access (Access without a user) In this access scenario, the application acts on its own with no user signed in. Application access is used in scenarios such as automation and backup. This scenario includes apps that run as background services or daemons. It's appropriate when it's undesirable to have a specific user signed in, or when the data required can't be scoped to a single user. -Direct access may require application permissions but this isn't the only way for granting an application direct access. Application permissions can be referred to as app roles. When app roles are granted to other applications, they can be called applications permissions. The appropriate application permissions or app roles must be granted to the application for it to access the resource. For more information about assigning app roles to applications, see [App roles for applications](howto-add-app-roles-in-azure-ad-apps.md). +App-only access uses app roles instead of delegated scopes. When granted through consent, app roles may also be called application permissions. For app-only access, the client app must be granted appropriate app roles of the resource app it's calling in order to access the requested data. For more information about assigning app roles to client applications, see [Assigning app roles to applications](howto-add-app-roles-in-azure-ad-apps.md#assign-app-roles-to-applications). ## Types of permissions -**Delegated permissions** are used in the delegated access scenario. They're permissions that allow the application to act on a user's behalf. The application will never be able to access anything users themselves couldn't access. +**Delegated permissions** are used in the delegated access scenario. They're permissions that allow the application to act on a user's behalf. The application will never be able to access anything the signed-in user themselves couldn't access. For example, imagine an application that has been granted the Files.Read.All delegated permission on behalf of Tom, the user. The application will only be able to read files that Tom can personally access. -**Application permissions** are used in the direct access scenario, without a signed-in user present. The application will be able to access any data that the permission is associated with. For example, an application granted the Files.Read.All application permission will be able to read any file in the tenant. Only an administrator or owner of the service principal can consent to application permissions. +**Application permissions**, sometimes called app roles, are used in the app-only access scenario, without a signed-in user present. The application will be able to access any data that the permission is associated with. For example, an application granted the Files.Read.All application permission will be able to read any file in the tenant. Only an administrator or owner of the service principal can consent to application permissions. ++There are other ways in which applications can be granted authorization for app-only access. For example, an application can be assigned an Azure AD RBAC role. -There are other ways in which applications can be granted authorization for direct access. For example, an application can be assigned an Azure AD RBAC role. 
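To show what app-only access looks like in practice, here's a minimal sketch of a client credentials token request. The tenant, client ID, and secret are placeholder values; requesting the resource's `.default` scope returns a token that carries the app roles granted to the application in its `roles` claim:

```http
POST /{tenant}/oauth2/v2.0/token HTTP/1.1
Host: login.microsoftonline.com
Content-Type: application/x-www-form-urlencoded

grant_type=client_credentials
&client_id=11112222-bbbb-3333-cccc-4444dddd5555
&client_secret=placeholder-secret
&scope=https%3A%2F%2Fgraph.microsoft.com%2F.default
```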
+### Comparison of delegated and application permissions ++| <!-- No header--> | Delegated permissions | Application permissions | +|--|--|--| +| Types of apps | Web / Mobile / single-page app (SPA) | Web / Daemon | +| Access context | Get access on behalf of a user | Get access without a user | +| Who can consent | - Users can consent for their data <br> - Admins can consent for all users | Only admins can consent | +| Other names | - Scopes <br> - OAuth2 permission scopes | - App roles <br> - App-only permissions | +| Result of consent (specific to Microsoft Graph) | [oAuth2PermissionGrant](/graph/api/resources/oauth2permissiongrant) | [appRoleAssignment](/graph/api/resources/approleassignment) | ## Consent-One way that applications are granted permissions is through consent. Consent is a process where users or admins authorize an application to access a protected resource. For example, when a user attempts to sign into an application for the first time, the application can request permission to see the user's profile and read the contents of the user's mailbox. The user sees the list of permissions the app is requesting through a consent prompt. +One way that applications are granted permissions is through consent. Consent is a process where users or admins authorize an application to access a protected resource. For example, when a user attempts to sign into an application for the first time, the application can request permission to see the user's profile and read the contents of the user's mailbox. The user sees the list of permissions the app is requesting through a consent prompt. Other scenarios where users may see a consent prompt include: ++- When previously granted consent is revoked. +- When the application is coded to specifically prompt for consent during every sign-in. +- When the application uses incremental or dynamic consent to ask for some permissions upfront and more permissions later as needed. The key details of a consent prompt are the list of permissions the application requires and the publisher information. For more information about the consent prompt and the consent experience for both admins and end-users, see [application consent experience](application-consent-experience.md). ### User consent -User consent happens when a user attempts to sign into an application. The user provides their sign-in credentials. These credentials are checked to determine whether consent has already been granted. If no previous record of user or admin consent for the required permissions exists, the user is shown a consent prompt and asked to grant the application the requested permissions. In many cases, an admin may be required to grant consent on behalf of the user. +User consent happens when a user attempts to sign into an application. The user provides their sign-in credentials. These credentials are checked to determine whether consent has already been granted. If no previous record of user or admin consent for the required permissions exists, the user is shown a consent prompt and asked to grant the application the requested permissions. In many cases, an admin may be required to grant consent on behalf of the user. ### Administrator consent -Depending on the permissions they require, some applications might require an administrator to be the one who grants consent. For example, application permissions can only be consented to by an administrator. Administrators can grant consent for themselves or for the entire organization. 
For more information about user and admin consent, see [user and admin consent overview](../manage-apps/consent-and-permissions-overview.md) +Depending on the permissions they require, some applications might require an administrator to be the one who grants consent. For example, application permissions and many high-privilege delegated permissions can only be consented to by an administrator. Administrators can grant consent for themselves or for the entire organization. For more information about user and admin consent, see [user and admin consent overview](../manage-apps/consent-and-permissions-overview.md). ### Preauthorization Preauthorization allows a resource application owner to grant permissions without requiring users to see a consent prompt for the same set of permissions that have been preauthorized. This way, an application that has been preauthorized won't ask users to consent to permissions. Resource owners can preauthorize client apps in the Azure portal or by using PowerShell and APIs, like Microsoft Graph. ## Next steps+- [Delegated access scenario](delegated-access-primer.md) - [User and admin consent overview](../manage-apps/consent-and-permissions-overview.md)-- [Scopes and permissions](v2-permissions-and-consent.md)+- [OpenID Connect scopes](scopes-oidc.md) |
active-directory | Scenario Spa Acquire Token | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-spa-acquire-token.md | -The pattern for acquiring tokens for APIs with [MSAL.js](https://github.com/AzureAD/microsoft-authentication-library-for-js) is to first attempt a silent token request by using the `acquireTokenSilent` method. When this method is called, the library first checks the cache in browser storage to see if a valid token exists and returns it. When no valid token is in the cache, it attempts to use its refresh token to get the token. If the refresh token's 24-hour lifetime has expired, MSAL.js will open a hidden iframe to silently request a new authorization code, which it will exchange for a new, valid refresh token. For more information about single sign-on (SSO) session and token lifetime values in Azure Active Directory (Azure AD), see [Token lifetimes](active-directory-configurable-token-lifetimes.md). +The pattern for acquiring tokens for APIs with [MSAL.js](https://github.com/AzureAD/microsoft-authentication-library-for-js) is to first attempt a silent token request by using the `acquireTokenSilent` method. When this method is called, the library first checks the cache in browser storage to see if a non-expired access token exists and returns it. If no access token is found for the given parameters, it will throw an `InteractionRequiredAuthError`, which should be handled with an interactive token request method (`acquireTokenPopup` or `acquireTokenRedirect`). If an access token is found but it's expired, it attempts to use its refresh token to get a fresh access token. If the refresh token's 24-hour lifetime has also expired, MSAL.js will open a hidden iframe to silently request a new authorization code by leveraging the existing active session with Azure AD (if any), which will then be exchanged for a fresh set of tokens (access _and_ refresh tokens). For more information about single sign-on (SSO) session and token lifetime values in Azure AD, see [Token lifetimes](active-directory-configurable-token-lifetimes.md). For more information on MSAL.js cache lookup policy, see: [Acquiring an Access Token](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/acquire-token.md#acquiring-an-access-token). The silent token requests to Azure AD might fail for reasons like a password change or updated conditional access policies. More often, failures are due to the refresh token's 24-hour lifetime expiring and [the browser blocking third party cookies](reference-third-party-cookies-spas.md), which prevents the use of hidden iframes to continue authenticating the user. In these cases, you should invoke one of the interactive methods (which may prompt the user) to acquire tokens: |
active-directory | Scopes Oidc | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scopes-oidc.md | + + Title: Microsoft identity platform scopes and permissions +description: Learn about OpenID Connect scopes and permissions in the Microsoft identity platform endpoint. ++++++++ Last updated : 11/01/2022++++# Scopes and permissions in the Microsoft identity platform ++The Microsoft identity platform implements the [OAuth 2.0](active-directory-v2-protocols.md) authorization protocol. OAuth 2.0 is a method through which a third-party app can access web-hosted resources on behalf of a user. Any web-hosted resource that integrates with the Microsoft identity platform has a resource identifier, or *application ID URI*. ++In this article, you'll learn about scopes and permissions in the identity platform. ++The following list shows some examples of Microsoft web-hosted resources: ++- Microsoft Graph: `https://graph.microsoft.com` +- Microsoft 365 Mail API: `https://outlook.office.com` +- Azure Key Vault: `https://vault.azure.net` ++The same is true for any third-party resources that have integrated with the Microsoft identity platform. Any of these resources also can define a set of permissions that can be used to divide the functionality of that resource into smaller chunks. As an example, [Microsoft Graph](https://graph.microsoft.com) has defined permissions to do the following tasks, among others: ++- Read a user's calendar +- Write to a user's calendar +- Send mail as a user ++Because of these types of permission definitions, the resource has fine-grained control over its data and how API functionality is exposed. A third-party app can request these permissions from users and administrators, who must approve the request before the app can access data or act on a user's behalf. ++When a resource's functionality is chunked into small permission sets, third-party apps can be built to request only the permissions that they need to perform their function. Users and administrators can know what data the app can access. And they can be more confident that the app isn't behaving with malicious intent. Developers should always abide by the principle of least privilege, asking for only the permissions they need for their applications to function. ++In OAuth 2.0, these types of permission sets are called *scopes*. They're also often referred to as *permissions*. In the Microsoft identity platform, a permission is represented as a string value. An app requests the permissions it needs by specifying the permission in the `scope` query parameter. The identity platform supports several well-defined [OpenID Connect scopes](#openid-connect-scopes) and resource-based permissions (each permission is indicated by appending the permission value to the resource's identifier or application ID URI). For example, the permission string `https://graph.microsoft.com/Calendars.Read` is used to request permission to read users' calendars in Microsoft Graph. ++In requests to the Microsoft identity platform authorization server, if the resource identifier is omitted in the scope parameter, the resource is assumed to be Microsoft Graph. For example, `scope=User.Read` is equivalent to `https://graph.microsoft.com/User.Read`. ++## Admin-restricted permissions ++Permissions in the Microsoft identity platform can be set as admin-restricted. For example, many higher-privilege Microsoft Graph permissions require admin approval. 
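For instance, a client that needs to update the organization's directory might include an admin-restricted scope in its authorization request, as in this illustrative sketch (the client ID and redirect URI are placeholders; line breaks are for legibility):

```http
GET https://login.microsoftonline.com/{tenant}/oauth2/v2.0/authorize
    ?client_id=00001111-aaaa-2222-bbbb-3333cccc4444
    &response_type=code
    &redirect_uri=https%3A%2F%2Flocalhost%2Fmyapp
    &scope=https%3A%2F%2Fgraph.microsoft.com%2FDirectory.ReadWrite.All
```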
If your app requires admin-restricted permissions, an organization's administrator must consent to those scopes on behalf of the organization's users. The following list gives examples of these kinds of permissions: ++- Read all users' full profiles by using `User.Read.All` +- Write data to an organization's directory by using `Directory.ReadWrite.All` +- Read all groups in an organization's directory by using `Group.Read.All` ++> [!NOTE] +> In requests to the authorization, token, or consent endpoints for the Microsoft identity platform, if the resource identifier is omitted in the scope parameter, the resource is assumed to be Microsoft Graph. For example, `scope=User.Read` is equivalent to `https://graph.microsoft.com/User.Read`. ++Although a consumer user might grant an application access to this kind of data, organizational users can't grant access to the same set of sensitive company data. If your application requests access to one of these permissions from an organizational user, the user receives an error message that says they're not authorized to consent to your app's permissions. ++If the application requests application permissions and an administrator grants these permissions, this grant isn't done on behalf of any specific user. Instead, the client application is granted permissions *directly*. These types of permissions should only be used by daemon services and other non-interactive applications that run in the background. For more information on the direct access scenario, see [Access scenarios in the Microsoft identity platform](permissions-consent-overview.md). ++For a step-by-step guide on how to expose scopes in a web API, see [Configure an application to expose a web API](quickstart-configure-app-expose-web-apis.md). ++## OpenID Connect scopes ++The Microsoft identity platform implementation of OpenID Connect has a few well-defined scopes that are also hosted on Microsoft Graph: `openid`, `email`, `profile`, and `offline_access`. The `address` and `phone` OpenID Connect scopes aren't supported. ++If you request the OpenID Connect scopes and a token, you'll get a token to call the [UserInfo endpoint](userinfo.md). ++### openid ++If an app signs in by using [OpenID Connect](active-directory-v2-protocols.md), it must request the `openid` scope. The `openid` scope appears on the work account consent page as the **Sign you in** permission. ++By using this permission, an app can receive a unique identifier for the user in the form of the `sub` claim. The permission also gives the app access to the UserInfo endpoint. The `openid` scope can be used at the Microsoft identity platform token endpoint to acquire ID tokens. The app can use these tokens for authentication. ++### email ++The `email` scope can be used with the `openid` scope and any other scopes. It gives the app access to the user's primary email address in the form of the `email` claim. ++The `email` claim is included in a token only if an email address is associated with the user account, which isn't always the case. If your app uses the `email` scope, the app needs to be able to handle a case in which no `email` claim exists in the token. ++### profile ++The `profile` scope can be used with the `openid` scope and any other scope. It gives the app access to a large amount of information about the user. The information it can access includes, but isn't limited to, the user's given name, surname, preferred username, and object ID. 
++For a complete list of the `profile` claims available in the `id_tokens` parameter for a specific user, see the [`id_tokens` reference](id-tokens.md). ++### offline_access ++The [`offline_access` scope](https://openid.net/specs/openid-connect-core-1_0.html#OfflineAccess) gives your app access to resources on behalf of the user for an extended time. On the consent page, this scope appears as the **Maintain access to data you have given it access to** permission. ++When a user approves the `offline_access` scope, your app can receive refresh tokens from the Microsoft identity platform token endpoint. Refresh tokens are long-lived. Your app can get new access tokens as older ones expire. ++> [!NOTE] +> This permission currently appears on all consent pages, even for flows that don't provide a refresh token (such as the [implicit flow](v2-oauth2-implicit-grant-flow.md)). This setup addresses scenarios where a client can begin with the implicit flow and then move to the code flow where a refresh token is expected. ++On the Microsoft identity platform (requests made to the v2.0 endpoint), your app must explicitly request the `offline_access` scope to receive refresh tokens. So when you redeem an authorization code in the [OAuth 2.0 authorization code flow](active-directory-v2-protocols.md), you'll receive only an access token from the `/token` endpoint. ++The access token is valid for a short time. It usually expires in one hour. At that point, your app needs to redirect the user back to the `/authorize` endpoint to get a new authorization code. During this redirect, depending on the type of app, the user might need to enter their credentials again or consent again to permissions. ++For more information about how to get and use refresh tokens, see the [Microsoft identity platform protocol reference](active-directory-v2-protocols.md). ++## The .default scope ++The `.default` scope is used to refer generically to a resource service (API) in a request, without identifying specific permissions. If consent is necessary, using `.default` signals that consent should be prompted for all required permissions listed in the application registration (for all APIs in the list). ++The scope parameter value is constructed by using the identifier URI for the resource and `.default`, separated by a forward slash (`/`). For example, if the resource's identifier URI is `https://contoso.com`, the scope to request is `https://contoso.com/.default`. For cases where you must include a second slash to correctly request the token, see the [section about trailing slashes](#trailing-slash-and-default). ++Using `scope={resource-identifier}/.default` is functionally the same as `resource={resource-identifier}` on the v1.0 endpoint (where `{resource-identifier}` is the identifier URI for the API, for example `https://graph.microsoft.com` for Microsoft Graph). ++The `.default` scope can be used in any OAuth 2.0 flow and to initiate [admin consent](v2-admin-consent.md). Its use is required in the [On-Behalf-Of flow](v2-oauth2-on-behalf-of-flow.md) and [client credentials flow](v2-oauth2-client-creds-grant-flow.md). ++Clients can't combine static (`.default`) consent and dynamic consent in a single request. So `scope=https://graph.microsoft.com/.default Mail.Read` results in an error because it combines scope types. 
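As an illustrative contrast (only the `scope` parameter is shown; all other request parameters are omitted), the first value below is a valid dynamic request for two named scopes, while the second invalidly mixes `.default` with a named scope:

```http
// Valid: dynamic consent for two named Microsoft Graph scopes
&scope=https%3A%2F%2Fgraph.microsoft.com%2FMail.Read%20https%3A%2F%2Fgraph.microsoft.com%2FUser.Read

// Invalid: combines .default with a named scope and returns an error
&scope=https%3A%2F%2Fgraph.microsoft.com%2F.default%20Mail.Read
```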
++### .default when the user has already given consent ++The `.default` scope parameter only triggers a consent prompt if consent hasn't been granted for any delegated permission between the client and the resource, on behalf of the signed-in user. ++If consent exists, the returned token contains all scopes granted for that resource for the signed-in user. However, if no permission has been granted for the requested resource (or if the `prompt=consent` parameter has been provided), a consent prompt is shown for all required permissions configured on the client application registration, for all APIs in the list. ++For example, if the scope `https://graph.microsoft.com/.default` is requested, your application is requesting an access token for the Microsoft Graph API. If at least one delegated permission has been granted for Microsoft Graph on behalf of the signed-in user, the sign-in will continue and all Microsoft Graph delegated permissions that have been granted for that user will be included in the access token. If no permissions have been granted for the requested resource (Microsoft Graph, in this example), then a consent prompt will be presented for all required permissions configured on the application, for all APIs in the list. ++#### Example 1: The user, or tenant admin, has granted permissions ++In this example, the user or a tenant administrator has granted the `Mail.Read` and `User.Read` Microsoft Graph permissions to the client. ++If the client requests `scope=https://graph.microsoft.com/.default`, no consent prompt is shown, regardless of the contents of the client application's registered permissions for Microsoft Graph. The returned token contains the scopes `Mail.Read` and `User.Read`. ++#### Example 2: The user hasn't granted permissions between the client and the resource ++In this example, the user hasn't granted consent between the client and Microsoft Graph, nor has an administrator. The client has registered for the permissions `User.Read` and `Contacts.Read`. It has also registered for the Azure Key Vault scope `https://vault.azure.net/user_impersonation`. ++When the client requests a token for `scope=https://graph.microsoft.com/.default`, the user sees a consent page for the Microsoft Graph `User.Read` and `Contacts.Read` scopes, and for the Azure Key Vault `user_impersonation` scope. The returned token contains only the `User.Read` and `Contacts.Read` scopes, and it can be used only against Microsoft Graph. ++#### Example 3: The user has consented, and the client requests more scopes ++In this example, the user has already consented to `Mail.Read` for the client. The client has registered for the `Contacts.Read` scope. ++The client first performs a sign-in with `scope=https://graph.microsoft.com/.default`. Based on the `scopes` parameter of the response, the application's code detects that only `Mail.Read` has been granted. The client then initiates a second sign-in using `scope=https://graph.microsoft.com/.default`, and this time forces consent using `prompt=consent`. If the user is allowed to consent for all the permissions that the application registered, they'll be shown the consent prompt. (If not, they'll be shown an error message or the [admin consent request](../manage-apps/configure-admin-consent-workflow.md) form.) Both `Contacts.Read` and `Mail.Read` will be in the consent prompt. If consent is granted and the sign-in continues, the token returned is for Microsoft Graph, and contains `Mail.Read` and `Contacts.Read`. 
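A sketch of what the second, consent-forcing request in Example 3 might look like follows; the client ID and redirect URI are placeholders, and line breaks are for legibility:

```http
GET https://login.microsoftonline.com/common/oauth2/v2.0/authorize
    ?client_id=00001111-aaaa-2222-bbbb-3333cccc4444
    &response_type=code
    &redirect_uri=https%3A%2F%2Flocalhost%2Fmyapp
    &scope=https%3A%2F%2Fgraph.microsoft.com%2F.default
    &prompt=consent
```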
++### Using the .default scope with the client ++In some cases, a client can request its own `.default` scope. The following example demonstrates this scenario. ++The scenario accommodates some legacy clients that are moving from Azure AD Authentication Library (ADAL) to the Microsoft Authentication Library (MSAL). This setup *shouldn't* be used by new clients that target the Microsoft identity platform. +++```http +// Line breaks are for legibility only. ++GET https://login.microsoftonline.com/{tenant}/oauth2/v2.0/authorize + ?response_type=token //Code or a hybrid flow is also possible here + &client_id=9ada6f8a-6d83-41bc-b169-a306c21527a5 + &scope=9ada6f8a-6d83-41bc-b169-a306c21527a5/.default + &redirect_uri=https%3A%2F%2Flocalhost + &state=1234 +``` ++This code example produces a consent page for all registered permissions if the preceding descriptions of consent and `.default` apply to the scenario. Then the code returns an `id_token`, rather than an access token. ++### Client credentials grant flow and .default ++Another use of `.default` is to request app roles (also known as application permissions) in a non-interactive application like a daemon app that uses the [client credentials](v2-oauth2-client-creds-grant-flow.md) grant flow to call a web API. ++To define app roles (application permissions) for a web API, see [Add app roles in your application](howto-add-app-roles-in-azure-ad-apps.md). ++Client credentials requests in your client service *must* include `scope={resource}/.default`. Here, `{resource}` is the web API that your app intends to call, and wishes to obtain an access token for. Issuing a client credentials request by using individual application permissions (roles) is *not* supported. All the app roles (application permissions) that have been granted for that web API are included in the returned access token. ++To grant access to the app roles you define, including granting admin consent for the application, see [Configure a client application to access a web API](quickstart-configure-app-access-web-apis.md). ++### Trailing slash and .default ++Some resource URIs have a trailing forward slash, for example, `https://contoso.com/` as opposed to `https://contoso.com`. The trailing slash can cause problems with token validation. Problems occur primarily when a token is requested for Azure Resource Manager (`https://management.azure.com/`). In this case, a trailing slash on the resource URI means the slash must be present when the token is requested. So when you request a token for `https://management.azure.com/` and use `.default`, you must request `https://management.azure.com//.default` (notice the double slash!). In general, if you verify that the token is being issued, and if the token is being rejected by the API that should accept it, consider adding a second forward slash and trying again. ++## Next steps ++- [Requesting permissions through consent in the identity platform](consent-types-developer.md) +- [ID tokens in the Microsoft identity platform](id-tokens.md) +- [Access tokens in the Microsoft identity platform](access-tokens.md) |
active-directory | Secure Least Privileged Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/secure-least-privileged-access.md | A reducible permission is a permission that has a lower-privileged counterpart t ## Use consent to control access to data -Most applications require access to protected data, and the owner of that data needs to [consent](application-consent-experience.md#consent-and-permissions) to that access. Consent can be granted in several ways, including by a tenant administrator who can consent for *all* users in an Azure AD tenant, or by the application users themselves who can grant access. +Most applications require access to protected data, and the owner of that data needs to [consent](consent-types-developer.md) to that access. Consent can be granted in several ways, including by a tenant administrator who can consent for *all* users in an Azure AD tenant, or by the application users themselves who can grant access. Whenever an application that runs on a device requests access to protected data, the application should ask for the user's consent before it accesses the protected data. The user is required to grant (or deny) consent for the requested permission before the application can progress. |
active-directory | Userinfo | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/userinfo.md | The UserInfo endpoint is typically called automatically by [OIDC-compliant libra ## Consider using an ID token instead -The information in an ID token is a superset of the information available on UserInfo endpoint. Because you can get an ID token at the same time you get a token to call the UserInfo endpoint, we suggest getting the user's information from the token instead calling the UserInfo endpoint. Using the ID token instead of calling the UserInfo endpoint eliminates up to two network requests, reducing latency in your application. +The information in an ID token is a superset of the information available on UserInfo endpoint. Because you can get an ID token at the same time you get a token to call the UserInfo endpoint, we suggest getting the user's information from the token instead of calling the UserInfo endpoint. Using the ID token instead of calling the UserInfo endpoint eliminates up to two network requests, reducing latency in your application. If you require more details about the user like manager or job title, call the [Microsoft Graph `/user` API](/graph/api/user-get). You can also use [optional claims](active-directory-optional-claims.md) to include additional user information in your ID and access tokens. |
active-directory | V2 Permissions And Consent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-permissions-and-consent.md | Applications in Microsoft identity platform rely on consent in order to gain acc In the static user consent scenario, you must specify all the permissions the app needs in the app's configuration in the Azure portal. If the user (or administrator, as appropriate) has not granted consent for this app, then the Microsoft identity platform will prompt the user to provide consent at this time. -Static permissions also enables administrators to [consent on behalf of all users](#requesting-consent-for-an-entire-tenant) in the organization. +Static permissions also enable administrators to [consent on behalf of all users](#requesting-consent-for-an-entire-tenant) in the organization. While static permissions of the app defined in the Azure portal keep the code nice and simple, this approach presents some possible issues for developers: After the user enters their credentials, the Microsoft identity platform checks At this time, the `offline_access` ("Maintain access to data you have given it access to") permission and `User.Read` ("Sign you in and read your profile") permission are automatically included in the initial consent to an application. These permissions are generally required for proper app functionality. The `offline_access` permission gives the app access to refresh tokens that are critical for native apps and web apps. The `User.Read` permission gives access to the `sub` claim. It allows the client or app to correctly identify the user over time and access rudimentary user information. - When the user approves the permission request, consent is recorded. The user doesn't have to consent again when they later sign in to the application. The scope parameter value is constructed by using the identifier URI for the res Using `scope={resource-identifier}/.default` is functionally the same as `resource={resource-identifier}` on the v1.0 endpoint (where `{resource-identifier}` is the identifier URI for the API, for example `https://graph.microsoft.com` for Microsoft Graph). -The `.default` scope can be used in any OAuth 2.0 flow and to initiate [admin consent](v2-admin-consent.md). It's use is required in the [On-Behalf-Of flow](v2-oauth2-on-behalf-of-flow.md) and [client credentials flow](v2-oauth2-client-creds-grant-flow.md). +The `.default` scope can be used in any OAuth 2.0 flow and to initiate [admin consent](v2-admin-consent.md). Its use is required in the [On-Behalf-Of flow](v2-oauth2-on-behalf-of-flow.md) and [client credentials flow](v2-oauth2-client-creds-grant-flow.md). Clients can't combine static (`.default`) consent and dynamic consent in a single request. So `scope=https://graph.microsoft.com/.default Mail.Read` results in an error because it combines scope types. Clients can't combine static (`.default`) consent and dynamic consent in a singl The `.default` scope is functionally identical to the behavior of the `resource`-centric v1.0 endpoint. It carries the consent behavior of the v1.0 endpoint as well. That is, `.default` triggers a consent prompt only if consent has not been granted for any delegated permission between the client and the resource, on behalf of the signed-in user. -If consent does exists, the returned token contains all scopes granted for that resource for the signed-in user. 
However, if no permission has been granted for the requested resource (or if the `prompt=consent` parameter has been provided), a consent prompt is shown for all required permissions configured on the client application registration, for all APIs in the list. +If consent does exist, the returned token contains all scopes granted for that resource for the signed-in user. However, if no permission has been granted for the requested resource (or if the `prompt=consent` parameter has been provided), a consent prompt is shown for all required permissions configured on the client application registration, for all APIs in the list. For example, if the scope `https://graph.microsoft.com/.default` is requested, your application is requesting an access token for the Microsoft Graph API. If at least one delegated permission has been granted for Microsoft Graph on behalf of the signed-in user, the sign-in will continue and all Microsoft Graph delegated permissions which have been granted for that user will be included in the access token. If no permissions have been granted for the requested resource (Microsoft Graph, in this example), then a consent prompt will be presented for all required permissions configured on the application, for all APIs in the list. |
active-directory | Whats New Docs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/whats-new-docs.md | +## October 2022 ++### Updated articles ++- [Access Azure AD protected resources from an app in Google Cloud](workload-identity-federation-create-trust-gcp.md) +- [Configure an app to trust an external identity provider](workload-identity-federation-create-trust.md) +- [Configure a user-assigned managed identity to trust an external identity provider (preview)](workload-identity-federation-create-trust-user-assigned-managed-identity.md) +- [Configuration requirements and troubleshooting tips for Xamarin Android with MSAL.NET](msal-net-xamarin-android-considerations.md) +- [Customize claims emitted in tokens for a specific app in a tenant](active-directory-claims-mapping.md) +- [Desktop app that calls web APIs: Acquire a token using Device Code flow](scenario-desktop-acquire-token-device-code-flow.md) +- [Desktop app that calls web APIs: Acquire a token using integrated Windows authentication](scenario-desktop-acquire-token-integrated-windows-authentication.md) +- [Desktop app that calls web APIs: Acquire a token using Username and Password](scenario-desktop-acquire-token-username-password.md) +- [Making your application multi-tenant](howto-convert-app-to-be-multi-tenant.md) +- [Microsoft identity platform and OAuth 2.0 On-Behalf-Of flow](v2-oauth2-on-behalf-of-flow.md) +- [Prompt behavior with MSAL.js](msal-js-prompt-behavior.md) +- [Quickstart: Register an application with the Microsoft identity platform](quickstart-register-app.md) +- [Tutorial: Sign in users and call the Microsoft Graph API from a JavaScript single-page application](tutorial-v2-javascript-spa.md) +- [Tutorial: Sign in users and call the Microsoft Graph API from a React single-page app (SPA) using auth code flow](tutorial-v2-react.md) + ## September 2022 ### New articles Welcome to what's new in the Microsoft identity platform documentation. This art - [Protected web API: Code configuration](scenario-protected-web-api-app-configuration.md) - [Provide optional claims to your app](active-directory-optional-claims.md) - [Using directory extension attributes in claims](active-directory-schema-extensions.md)--## July 2022 --### New articles --- [Configure SAML app multi-instancing for an application in Azure Active Directory](reference-app-multi-instancing.md)--### Updated articles --- [Application and service principal objects in Azure Active Directory](app-objects-and-service-principals.md)-- [Application configuration options](msal-client-application-configuration.md)-- [A web API that calls web APIs: Code configuration](scenario-web-api-call-api-app-configuration.md)-- [Claims mapping policy type](reference-claims-mapping-policy-type.md)-- [Customize claims issued in the SAML token for enterprise applications](active-directory-saml-claims-customization.md)-- [Microsoft identity platform access tokens](access-tokens.md)-- [Single-page application: Sign-in and Sign-out](scenario-spa-sign-in.md)-- [Tutorial: Add sign-in to Microsoft to an ASP.NET web app](tutorial-v2-asp-webapp.md) |
active-directory | Howto Hybrid Azure Ad Join | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/howto-hybrid-azure-ad-join.md | Hybrid Azure AD join requires devices to have access to the following Microsoft - Your organization's Security Token Service (STS) (**For federated domains**) > [!WARNING]-> If your organization uses proxy servers that intercept SSL traffic for scenarios like data loss prevention or Azure AD tenant restrictions, ensure that traffic to `https://devices.login.microsoftonline.com` is excluded from TLS break-and-inspect. Failure to exclude this URL may cause interference with client certificate authentication, cause issues with device registration, and device-based Conditional Access. +> If your organization uses proxy servers that intercept SSL traffic for scenarios like data loss prevention or Azure AD tenant restrictions, ensure that traffic to `https://device.login.microsoftonline.com` is excluded from TLS break-and-inspect. Failure to exclude this URL may cause interference with client certificate authentication, cause issues with device registration, and device-based Conditional Access. If your organization requires access to the internet via an outbound proxy, you can use [Web Proxy Auto-Discovery (WPAD)](/previous-versions/tn-archive/cc995261(v=technet.10)) to enable Windows 10 or newer computers for device registration with Azure AD. To address issues configuring and managing WPAD, see [Troubleshooting Automatic Detection](/previous-versions/tn-archive/cc302643(v=technet.10)). |
active-directory | Whats New Docs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/whats-new-docs.md | +## October 2022 ++### Updated articles ++- [Tutorial: Bulk invite Azure AD B2B collaboration users](tutorial-bulk-invite.md) +- [Quickstart: Add a guest user and send an invitation](b2b-quickstart-add-guest-users-portal.md) +- [Define custom attributes for user flows](user-flow-add-custom-attributes.md) +- [Create dynamic groups in Azure Active Directory B2B collaboration](use-dynamic-groups.md) +- [Properties of an Azure Active Directory B2B collaboration user](user-properties.md) +- [Authentication and Conditional Access for External Identities](authentication-conditional-access.md) +- [Leave an organization as an external user](leave-the-organization.md) +- [Azure Active Directory External Identities: What's new](whats-new-docs.md) +- [Federation with SAML/WS-Fed identity providers for guest users](direct-federation.md) +- [Example: Configure SAML/WS-Fed based identity provider federation with AD FS](direct-federation-adfs.md) +- [The elements of the B2B collaboration invitation email - Azure Active Directory](invitation-email-elements.md) +- [Configure Microsoft cloud settings for B2B collaboration (Preview)](cross-cloud-settings.md) +- [Add Microsoft account (MSA) as an identity provider for External Identities](microsoft-account.md) +- [How users in your organization can invite guest users to an app](add-users-information-worker.md) + ## September 2022 ### Updated articles Welcome to what's new in Azure Active Directory External Identities documentatio - [Overview: Cross-tenant access with Azure AD External Identities](cross-tenant-access-overview.md) - [Configure cross-tenant access settings for B2B direct connect](cross-tenant-access-settings-b2b-direct-connect.md) - [Azure Active Directory External Identities: What's new](whats-new-docs.md)--## July 2022 --### Updated articles --- [Configure cross-tenant access settings for B2B collaboration](cross-tenant-access-settings-b2b-collaboration.md)-- [Configure cross-tenant access settings for B2B direct connect](cross-tenant-access-settings-b2b-direct-connect.md)-- [Add Google as an identity provider for B2B guest users](google-federation.md)-- [Azure Active Directory External Identities: What's new](whats-new-docs.md)-- [Overview: Cross-tenant access with Azure AD External Identities](cross-tenant-access-overview.md)-- [B2B direct connect overview](b2b-direct-connect-overview.md)-- [Azure Active Directory B2B collaboration invitation redemption](redemption-experience.md) |
active-directory | How To Connect Install Prerequisites | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-install-prerequisites.md | We recommend that you harden your Azure AD Connect server to decrease the securi - Follow these [additional guidelines](/windows-server/identity/ad-ds/plan/security-best-practices/reducing-the-active-directory-attack-surface) to reduce the attack surface of your Active Directory environment. - Follow the [Monitor changes to federation configuration](how-to-connect-monitor-federation-changes.md) guidance to set up alerts that monitor changes to the trust established between your IdP and Azure AD. - Enable multi-factor authentication (MFA) for all users that have privileged access in Azure AD or in AD. One security issue with using Azure AD Connect is that if an attacker can get control over the Azure AD Connect server, they can manipulate users in Azure AD. To prevent an attacker from using these capabilities to take over Azure AD accounts, MFA offers protections so that even if an attacker manages to, for example, reset a user's password using Azure AD Connect, they still can't bypass the second factor.-- Disable Soft Matching on your tenant. Soft Matching is a great feature to help transfering source of autority for existing cloud only objects to Azure AD Connect, but it comes with certain security risks. If you do not require it, you should [disable Soft Matching](how-to-connect-syncservice-features.md#blocksoftmatch)+- Disable Soft Matching on your tenant. Soft Matching is a great feature to help transfer the source of authority for existing cloud-managed objects to Azure AD Connect, but it comes with certain security risks. If you don't require it, you should [disable Soft Matching](how-to-connect-syncservice-features.md#blocksoftmatch). +- Disable Hard Match Takeover. Hard match takeover allows Azure AD Connect to take control of a cloud-managed object and change the source of authority for the object to Active Directory. Once the source of authority of an object is taken over by Azure AD Connect, changes made to the Active Directory object that is linked to the Azure AD object will overwrite the original Azure AD data - including the password hash, if Password Hash Sync is enabled. An attacker could use this capability to take over control of cloud-managed objects. To mitigate this risk, [disable hard match takeover](https://learn.microsoft.com/powershell/module/msonline/set-msoldirsyncfeature?view=azureadps-1.0#example-3-block-cloud-object-takeover-through-hard-matching-for-the-tenant). A PowerShell sketch covering both settings follows this section. ### SQL Server used by Azure AD Connect * Azure AD Connect requires a SQL Server database to store identity data. By default, a SQL Server 2019 Express LocalDB (a light version of SQL Server Express) is installed. SQL Server Express has a 10-GB size limit that enables you to manage approximately 100,000 objects. If you need to manage a higher volume of directory objects, point the installation wizard to a different installation of SQL Server. The type of SQL Server installation can impact the [performance of Azure AD Connect](./plan-connect-performance-factors.md#sql-database-factors). |
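The soft-match and hard-match recommendations above map to two tenant-level features of the MSOnline PowerShell module. A minimal sketch, assuming the MSOnline module is installed and you sign in with a suitably privileged account; the feature names come from the linked `Set-MsolDirSyncFeature` reference.

```powershell
# Requires the MSOnline module: Install-Module MSOnline
Import-Module MSOnline
Connect-MsolService   # interactive sign-in with a privileged account

# Block soft matching of synced objects to existing cloud objects.
Set-MsolDirSyncFeature -Feature BlockSoftMatch -Enable $true

# Block hard-match takeover of cloud-managed objects
# (Example 3 in the linked Set-MsolDirSyncFeature documentation).
Set-MsolDirSyncFeature -Feature BlockCloudObjectTakeoverThroughHardMatch -Enable $true

# Review the current state of the directory sync features.
Get-MsolDirSyncFeatures
```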
active-directory | User Admin Consent Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/user-admin-consent-overview.md | +
+ Title: Overview of user and admin consent
+description: Learn about the fundamental concepts of user and admin consent in Azure AD
+++++++ Last updated : 09/28/2022+++++# User and admin consent in Azure Active Directory
++In this article, you'll learn the foundational concepts and scenarios around user and admin consent in Azure Active Directory (Azure AD).
++Consent is a process where users can grant permission for an application to access a protected resource. To indicate the level of access required, an application requests the API permissions it requires. For example, an application can request the permission to see a signed-in user's profile and read the contents of the user's mailbox.
++Consent can be initiated in various ways. For example, users can be prompted for consent when they attempt to sign in to an application for the first time. Depending on the permissions they require, some applications might require an administrator to be the one who grants consent.
++## User consent
++A user can authorize an application to access some data at the protected resource, while acting as that user. The permissions that allow this type of access are called "delegated permissions."
++User consent is usually initiated when a user signs in to an application. After the user has provided sign-in credentials, they're checked to determine whether consent has already been granted. If no previous record of user or admin consent for the required permissions exists, the user is directed to the consent prompt window to grant the application the requested permissions.
++User consent by non-administrators is possible only in organizations where user consent is allowed for the application and for the set of permissions the application requires. If user consent is disabled, or if users aren't allowed to consent for the requested permissions, they won't be prompted for consent. If users are allowed to consent and they accept the requested permissions, the consent is recorded and they usually don't have to consent again on future sign-ins to the same application.
++### User consent settings
++Users are in control of their data. A Privileged Administrator can configure whether non-administrator users are allowed to grant user consent to an application. This setting can take into account aspects of the application and the application's publisher, and the permissions being requested.
++As an administrator, you can choose whether user consent is allowed. If you choose to allow user consent, you can also choose what conditions must be met before an application can be consented to by a user.
++By choosing which application consent policies apply for all users, you can set limits on when users are allowed to grant consent to applications and on when they'll be required to request administrator review and approval. The Azure portal provides the following built-in options:
++- *You can disable user consent*. Users can't grant permissions to applications. Users continue to sign in to applications they've previously consented to or to applications that administrators have granted consent to on their behalf, but they won't be allowed to consent to new permissions to applications on their own. Only users who have been granted a directory role that includes the permission to grant consent can consent to new applications. 
++- *Users can consent to applications from verified publishers or your organization, but only for permissions you select*. All users can consent only to applications that were published by a [verified publisher](../develop/publisher-verification-overview.md) and applications that are registered in your tenant. Users can consent only to the permissions that you've classified as *low impact*. You must [classify permissions](configure-permission-classifications.md) to select which permissions users are allowed to consent to.
++- *Users can consent to all applications*. This option allows all users to consent to any permissions that don't require admin consent, for any application.
++For most organizations, one of the built-in options will be appropriate. Some advanced customers might want more control over the conditions that govern when users are allowed to consent. These customers can [create a custom app consent policy](manage-app-consent-policies.md#create-a-custom-app-consent-policy) and configure those policies to apply to user consent.
++## Admin consent
++During admin consent, a Privileged Administrator may grant an application access on behalf of other users (usually, on behalf of the entire organization). Admin consent can also grant an application or service direct access to an API (application permissions), which the application can use when there's no signed-in user.
++When your organization purchases a license or subscription for a new application, you might proactively want to set up the application so that all users in the organization can use it. To avoid the need for user consent, an administrator can grant consent for the application on behalf of all users in the organization.
++After an administrator grants admin consent on behalf of the organization, users aren't usually prompted for consent for that application. In certain cases, a user might be prompted for consent even after consent was granted by an administrator. An example might be if an application requests another permission that the administrator hasn't already granted.
++Granting admin consent on behalf of an organization is a sensitive operation, potentially allowing the application's publisher access to significant portions of the organization's data, or the permission to do highly privileged operations. Examples of such operations might be role management, full access to all mailboxes or all sites, and full user impersonation.
++Before you grant tenant-wide admin consent, ensure that you trust the application and the application publisher, for the level of access you're granting. If you aren't confident that you understand who controls the application and why the application is requesting the permissions, do *not* grant consent.
++For step-by-step guidance on whether to grant an application admin consent, see [Evaluating a request for tenant-wide admin consent](manage-consent-requests.md#evaluate-a-request-for-tenant-wide-admin-consent).
++For step-by-step instructions for granting tenant-wide admin consent from the Azure portal, see [Grant tenant-wide admin consent to an application](grant-admin-consent.md).
++### Grant consent on behalf of a specific user
++Instead of granting consent for an entire organization, an admin can also use the [Microsoft Graph API](/graph/use-the-api) to grant consent to delegated permissions on behalf of a single user. For a detailed example that uses Microsoft Graph PowerShell, see [Grant consent on behalf of a single user by using PowerShell](manage-consent-requests.md). 
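To make the single-user scenario above concrete, here's a minimal Microsoft Graph PowerShell sketch that grants one delegated permission to one user by creating an OAuth2 permission grant. The object IDs and the `User.Read` scope are placeholders; the linked article remains the authoritative walkthrough.

```powershell
# Requires the Microsoft Graph PowerShell SDK: Install-Module Microsoft.Graph
Import-Module Microsoft.Graph.Identity.SignIns
Connect-MgGraph -Scopes 'DelegatedPermissionGrant.ReadWrite.All'

# Placeholder object IDs: the client app's service principal, the resource API's
# service principal (for example, Microsoft Graph), and the user receiving the grant.
$clientSpId   = '11111111-1111-1111-1111-111111111111'
$resourceSpId = '22222222-2222-2222-2222-222222222222'
$userId       = '33333333-3333-3333-3333-333333333333'

# ConsentType 'Principal' scopes the grant to this one user instead of the whole tenant.
New-MgOauth2PermissionGrant -ClientId $clientSpId -ConsentType 'Principal' `
    -PrincipalId $userId -ResourceId $resourceSpId -Scope 'User.Read'
```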
++### Limit user access to an application
++User access to applications can still be limited, even when tenant-wide admin consent has been granted. Configure the application's properties to require user assignment to limit user access to the application. For more information, see [Methods for assigning users and groups](assign-user-or-group-access-portal.md).
++For a broader overview, including how to handle other complex scenarios, see [Use Azure AD for application access management](what-is-access-management.md).
++## Admin consent workflow
++The admin consent workflow gives users a way to request admin consent for applications when they aren't allowed to consent themselves. When the admin consent workflow is enabled, users are presented with an "Approval required" window for requesting admin approval for access to the application.
++After users submit the admin consent request, the admins who have been designated as reviewers receive a notification. The users are notified after a reviewer has acted on their request. For step-by-step instructions for configuring the admin consent workflow by using the Azure portal, see [configure the admin consent workflow](configure-admin-consent-workflow.md).
++### How users request admin consent
++After the admin consent workflow is enabled, users can request admin approval for an application that they're unauthorized to consent to. Here are the steps in the process:
++1. A user attempts to sign in to the application.
+1. An **Approval required** message appears. The user types a justification for needing access to the application and then selects "Request approval."
+1. A **Request sent** message confirms that the request was submitted to the admin. If the user sends several requests, only the first request is submitted to the admin.
+1. The user receives an email notification when the request is approved, denied, or blocked.
++## Next steps
++- [Configure user consent settings](configure-user-consent.md)
+- [Configure the admin consent workflow](configure-admin-consent-workflow.md) |
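The user-assignment requirement described in the "Limit user access to an application" section above can also be set programmatically. A hedged sketch with Microsoft Graph PowerShell; the service principal object ID is a placeholder.

```powershell
# Requires the Microsoft Graph PowerShell SDK.
Import-Module Microsoft.Graph.Applications
Connect-MgGraph -Scopes 'Application.ReadWrite.All'

# Placeholder: object ID of the enterprise application's service principal.
$spId = '44444444-4444-4444-4444-444444444444'

# Require an explicit user or group assignment before anyone can sign in to the app.
Update-MgServicePrincipal -ServicePrincipalId $spId -AppRoleAssignmentRequired:$true
```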
active-directory | Whats New Docs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/whats-new-docs.md | Title: "What's new in Azure Active Directory application management" description: "New and updated documentation for the Azure Active Directory application management." Previously updated : 10/03/2022 Last updated : 11/01/2022 +## October 2022 ++### Updated articles ++- [Configure how users consent to applications](configure-user-consent.md) +- [Tutorial: Configure F5 BIG-IP Access Policy Manager for Kerberos authentication](f5-big-ip-kerberos-advanced.md) +- [Tutorial: Configure F5 BIG-IP Easy Button for Kerberos single sign-on](f5-big-ip-kerberos-easy-button.md) +- [Tutorial: Configure F5 BIG-IP Easy Button for header-based and LDAP single sign-on](f5-big-ip-ldap-header-easybutton.md) +- [Tutorial: Migrate your applications from Okta to Azure Active Directory](migrate-applications-from-okta-to-azure-active-directory.md) +- [Tutorial: Configure Secure Hybrid Access with Azure Active Directory and Silverfort](silverfort-azure-ad-integration.md) + ## September 2022 ### New articles Welcome to what's new in Azure Active Directory (Azure AD) application managemen ### Updated articles - [Hide an enterprise application](hide-application-from-user-portal.md)--## July 2022 --### New articles --- [Create an enterprise application from a multi-tenant application in Azure Active Directory](create-service-principal-cross-tenant.md)-- [Deletion and recovery of applications FAQ](delete-recover-faq.yml)-- [Recover deleted applications in Azure Active Directory FAQs](recover-deleted-apps-faq.md)-- [Restore an enterprise application in Azure AD](restore-application.md)-- [SAML Request Signature Verification (Preview)](howto-enforce-signed-saml-authentication.md)-- [Tutorial: Configure Cloudflare with Azure Active Directory for secure hybrid access](cloudflare-azure-ad-integration.md)-- [Tutorial: Configure Datawiza to enable Azure Active Directory Multi-Factor Authentication and single sign-on to Oracle JD Edwards](datawiza-azure-ad-sso-oracle-jde.md)--### Updated articles --- [Delete an enterprise application](delete-application-portal.md)-- [Configure Azure Active Directory SAML token encryption](howto-saml-token-encryption.md)-- [Review permissions granted to applications](manage-application-permissions.md)-- [Tutorial: Configure Secure Hybrid Access with Azure Active Directory and Datawiza](datawiza-with-azure-ad.md) |
active-directory | How To View Applied Conditional Access Policies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/how-to-view-applied-conditional-access-policies.md | Title: View applied Conditional Access policies in Azure AD sign-in logs -description: Learn how to view Conditional Access policies in Azure AD sign-in logs so that you can assess the impact of those policies. +description: Learn how to view Conditional Access policies in Azure AD sign-in logs so that you can assess the effect of those policies. -+ - Previously updated : 09/14/2022- Last updated : 10/31/2022+ -With Conditional Access policies, you can control how your users get access to the resources of your Azure tenant. As a tenant admin, you need to be able to determine what impact your Conditional Access policies have on sign-ins to your tenant, so that you can take action if necessary. +With Conditional Access policies, you can control how your users get access to the resources of your Azure tenant. As a tenant admin, you need to be able to determine what effect your Conditional Access policies have on sign-ins to your tenant, so that you can take action if necessary. -The sign-in logs in Azure Active Directory (Azure AD) give you the information that you need to assess the impact of your policies. This article explains how to view applied Conditional Access policies in those logs. +The sign-in logs in Azure Active Directory (Azure AD) give you the information that you need to assess the effect of your policies. This article explains how to view applied Conditional Access policies in those logs. ## What you should know Some scenarios require you to get an understanding of how your Conditional Acces - *Helpdesk administrators* who need to look at applied Conditional Access policies to understand if a policy is the root cause of a ticket that a user opened. -- *Tenant administrators* who need to verify that Conditional Access policies have the intended impact on the users of a tenant.+- *Tenant administrators* who need to verify that Conditional Access policies have the intended effect on the users of a tenant. You can access the sign-in logs by using the Azure portal, Microsoft Graph, and PowerShell. To view the sign-in logs, use: `Get-MgAuditLogSignIn` -For more information about this cmdlet, see [Get-MgAuditLogSignIn](https://learn.microsoft.com/powershell/module/microsoft.graph.reports/get-mgauditlogsignin?view=graph-powershell-1.0). +For more information about this cmdlet, see [Get-MgAuditLogSignIn](/powershell/module/microsoft.graph.reports/get-mgauditlogsignin). The Azure AD Graph PowerShell module doesn't support viewing applied Conditional Access policies. Only the Microsoft Graph PowerShell module returns applied Conditional Access policies. |
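A short sketch of the `Get-MgAuditLogSignIn` call the article above points to; the UPN filter is a placeholder, and the property names follow the Microsoft Graph signIn resource.

```powershell
# Requires the Microsoft Graph PowerShell SDK.
Import-Module Microsoft.Graph.Reports
Connect-MgGraph -Scopes 'AuditLog.Read.All','Directory.Read.All'

# Placeholder UPN: pull this user's five most recent sign-ins.
$signIns = Get-MgAuditLogSignIn -Filter "userPrincipalName eq 'user@contoso.com'" -Top 5

# Show which Conditional Access policies applied to each sign-in and their result.
foreach ($signIn in $signIns) {
    '{0}  {1}' -f $signIn.CreatedDateTime, $signIn.AppDisplayName
    $signIn.AppliedConditionalAccessPolicies | Select-Object DisplayName, Result
}
```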
active-directory | Howto Access Activity Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-access-activity-logs.md | Title: Access activity logs in Azure AD | Microsoft Docs description: Learn how to choose the right method for accessing the activity logs in Azure AD. -+ - Previously updated : 08/26/2022- Last updated : 10/31/2022+ |
active-directory | Howto Analyze Activity Logs Log Analytics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-analyze-activity-logs-log-analytics.md | Title: Analyze activity logs using Azure Monitor logs | Microsoft Docs description: Learn how to analyze Azure Active Directory activity logs using Azure Monitor logs -+ - Previously updated : 08/26/2022- Last updated : 10/31/2022+ To follow along, you need: * A [Log Analytics workspace](../../azure-monitor/logs/log-analytics-workspace-overview.md) in your Azure subscription. Learn how to [create a Log Analytics workspace](../../azure-monitor/logs/quick-create-workspace.md). * First, complete the steps to [route the Azure AD activity logs to your Log Analytics workspace](howto-integrate-activity-logs-with-log-analytics.md). * [Access](../../azure-monitor/logs/manage-access.md#azure-rbac) to the log analytics workspace-* The following roles in Azure Active Directory (if you are accessing Log Analytics through Azure Active Directory portal) +* The following roles in Azure Active Directory (if you're accessing Log Analytics through the Azure Active Directory portal) - Security Admin - Security Reader - Report Reader You can also set up alerts on your query. For example, to configure an alert whe 4. Select the **Action Group** that will be alerted when the signal occurs. You can choose to notify your team via email or text message, or you could automate the action using webhooks, Azure Functions, or logic apps. Learn more about [creating and managing alert groups in the Azure portal](../../azure-monitor/alerts/action-groups.md). -5. Once you have configured the alert, select **Create alert** to enable it. +5. Once you've configured the alert, select **Create alert** to enable it. ## Use pre-built workbooks for Azure AD activity logs The workbooks provide several reports related to common scenarios involving audit, sign-in, and provisioning events. You can also alert on any of the data provided in the reports, using the steps described in the previous section. -* **Provisioning analysis**: This [workbook](../app-provisioning/application-provisioning-log-analytics.md) shows reports related to auditing provisioning activity, such as the number of new users provisioned and provisioning failures, number of users updated and update failures and the number of users de-provisioned and corresponding failures. -* **Sign-ins Events**: This workbook shows the most relevant reports related to monitoring sign-in activity, such as sign-ins by application, user, device, as well as a summary view tracking the number of sign-ins over time. -* **Conditional access insights**: The Conditional Access insights and reporting [workbook](../conditional-access/howto-conditional-access-insights-reporting.md) enables you to understand the impact of Conditional Access policies in your organization over time. +* **Provisioning analysis**: This [workbook](../app-provisioning/application-provisioning-log-analytics.md) shows reports related to auditing provisioning activity. Activities can include the number of new users provisioned, provisioning failures, number of users updated, update failures, the number of users de-provisioned, and corresponding failures. +* **Sign-ins Events**: This workbook shows the most relevant reports related to monitoring sign-in activity, such as sign-ins by application, user, device, and a summary view tracking the number of sign-ins over time. 
+* **Conditional access insights**: The Conditional Access insights and reporting [workbook](../conditional-access/howto-conditional-access-insights-reporting.md) enables you to understand the effect of Conditional Access policies in your organization over time. ## Next steps |
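If you prefer to query the same data the workbooks above visualize, the Az.OperationalInsights module can run Log Analytics queries directly. A minimal sketch, assuming your activity logs are already routed to the workspace and that `$workspaceId` holds the workspace (customer) ID, which is a placeholder here.

```powershell
# Requires the Az PowerShell modules: Install-Module Az.OperationalInsights
Import-Module Az.OperationalInsights
Connect-AzAccount

# Placeholder: the workspace (customer) ID shown on the workspace overview page.
$workspaceId = '55555555-5555-5555-5555-555555555555'

# Sign-ins per application, mirroring the Sign-ins Events workbook's summary view.
$query  = 'SigninLogs | summarize SignIns = count() by AppDisplayName | top 10 by SignIns'
$result = Invoke-AzOperationalInsightsQuery -WorkspaceId $workspaceId -Query $query
$result.Results | Format-Table -AutoSize
```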
active-directory | Howto Configure Prerequisites For Reporting Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-configure-prerequisites-for-reporting-api.md | Title: Prerequisites for Azure Active Directory reporting API | Microsoft Docs description: Learn about the prerequisites to access the Azure AD reporting API -+ - Previously updated : 08/26/2022- Last updated : 10/31/2022+ # Prerequisites to access the Azure Active Directory reporting API -The [Azure Active Directory (Azure AD) reporting APIs](./concept-reporting-api.md) provide you with programmatic access to the data through a set of REST-based APIs. You can call these APIs from a number of programming languages and tools. +The [Azure Active Directory (Azure AD) reporting APIs](./concept-reporting-api.md) provide you with programmatic access to the data through a set of REST-based APIs. You can call these APIs from many programming languages and tools. The reporting API uses [OAuth](../../api-management/api-management-howto-protect-backend-with-aad.md) to authorize access to the web APIs. This section shows you how to get the following settings from your directory: - Client ID - Client secret or certificate -You need these values when configuring calls to the reporting API. We recommend using a certificate because it is more secure. +You need these values when configuring calls to the reporting API. We recommend using a certificate because it's more secure. ### Get your domain name You need these values when configuring calls to the reporting API. We recommend **To get your application's client ID:** -1. In the [Azure portal](https://portal.azure.com), on the left navigation pane, click **Azure Active Directory**. +1. In the [Azure portal](https://portal.azure.com), on the left navigation pane, select **Azure Active Directory**.  You need these values when configuring calls to the reporting API. We recommend **To get your application's client secret:** -1. In the [Azure portal](https://portal.azure.com), on the left navigation pane, click **Azure Active Directory**. +1. In the [Azure portal](https://portal.azure.com), on the left navigation pane, select **Azure Active Directory**.  2. Select your application from the **App Registrations** page. -3. Select **Certificates and Secrets** on the **API Application** page, in the **Client Secrets** section, click **+ New Client Secret**. +3. Select **Certificates and Secrets** on the **API Application** page. In the **Client Secrets** section, select **+ New Client Secret**.  You need these values when configuring calls to the reporting API. We recommend b. As **Expires**, select **In 2 years**. - c. Click **Save**. + c. Select **Save**. d. Copy the key value. If you run into this error message while trying to access sign-ins using Graph E  -### Error: Tenant is not B2C or tenant doesn't have premium license +### Error: Tenant isn't B2C or tenant doesn't have premium license Accessing sign-in reports requires an Azure Active Directory Premium P1 license. If you see this error message while accessing sign-ins, make sure that your tenant is licensed with an Azure AD P1 license. -### Error: The allowed roles does not include User. +### Error: The allowed roles doesn't include User. To avoid errors when you try to access audit logs or sign-ins by using the API, make sure your account is part of the **Security Reader** or **Report Reader** role in your Azure Active Directory tenant. 
-### Error: Application missing AAD 'Read directory data' permission +### Error: Application missing Azure AD 'Read directory data' permission ### Error: Application missing Microsoft Graph API 'Read all audit log data' permission |
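Once the client ID and secret described above are in place, a token can be requested with the OAuth client credentials flow and used against the Microsoft Graph reporting endpoints. A hedged sketch; the tenant, client ID, and secret are placeholders, and the call only succeeds if the application has the admin-consented permissions this article covers.

```powershell
# Placeholders for the values collected earlier in this article.
$tenantId     = 'contoso.onmicrosoft.com'
$clientId     = '66666666-6666-6666-6666-666666666666'
$clientSecret = '<client-secret>'

# OAuth 2.0 client credentials flow against the v2.0 token endpoint.
$tokenResponse = Invoke-RestMethod -Method Post `
    -Uri "https://login.microsoftonline.com/$tenantId/oauth2/v2.0/token" `
    -Body @{
        client_id     = $clientId
        client_secret = $clientSecret
        scope         = 'https://graph.microsoft.com/.default'
        grant_type    = 'client_credentials'
    }

# Call the sign-ins reporting endpoint with the bearer token.
$headers = @{ Authorization = "Bearer $($tokenResponse.access_token)" }
Invoke-RestMethod -Uri 'https://graph.microsoft.com/v1.0/auditLogs/signIns?$top=5' -Headers $headers
```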
active-directory | Howto Download Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-download-logs.md | Title: How to download logs in Azure Active Directory | Microsoft Docs description: Learn how to download activity logs in Azure Active Directory. -+ - Previously updated : 08/26/2022- Last updated : 10/31/2022+ |
active-directory | Howto Find Activity Reports | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-find-activity-reports.md | Title: Find user activity reports in Azure portal | Microsoft Docs description: Learn where the Azure Active Directory user activity reports are in the Azure portal. -+ - Previously updated : 08/26/2022- Last updated : 10/31/2022+ In this article, you learn how to find Azure Active Directory (Azure AD) user ac The audit logs report combines several reports around application activities into a single view for context-based reporting. To access the audit logs report: 1. Navigate to the [Azure portal](https://portal.azure.com).-2. Select your directory from the top-right corner, then select the **Azure Active Directory** blade from the left navigation pane. -3. Select **Audit logs** from the **Activity** section of the Azure Active Directory blade. +1. Select **Audit logs** from the **Activity** section of Azure Active Directory.  The **Sign-ins** view includes all user sign-ins, as well as the **Application U To access the sign-ins report: 1. Navigate to the [Azure portal](https://portal.azure.com).-2. Select your directory from the top-right corner, then select the **Azure Active Directory** blade from the left navigation pane. +2. Select your directory from the top-right corner, then select **Azure Active Directory** from the left navigation pane. 3. Select **Sign-ins** from the **Activity** section of the Azure Active Directory blade.  |
active-directory | Howto Install Use Log Analytics Views | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-install-use-log-analytics-views.md | Title: How to install and use the log analytics views | Microsoft Docs description: Learn how to install and use the log analytics views for Azure Active Directory -+ - Previously updated : 08/26/2022- Last updated : 10/31/2022+ -The Azure Active Directory log analytics views helps you analyze and search the Azure AD activity logs in your Azure AD tenant. Azure AD activity logs include: +The Azure Active Directory (Azure AD) log analytics views help you analyze and search the Azure AD activity logs in your Azure AD tenant. Azure AD activity logs include: * Audit logs: The [audit logs activity report](concept-audit-logs.md) gives you access to the history of every task that's performed in your tenant. * Sign-in logs: With the [sign-in activity report](concept-sign-ins.md), you can determine who performed the tasks that are reported in the audit logs. To use the log analytics views, you need: ## Install the log analytics views -1. Navigate to your Log Analytics workspace. To do this, first navigate to the [Azure portal](https://portal.azure.com) and select **All services**. Type **Log Analytics** in the text box, and select **Log Analytics workspaces**. Select the workspace you routed the activity logs to, as part of the prerequisites. -2. Select **View Designer**, select **Import** and then select **Choose File** to import the views from your local computer. -3. Select the views you downloaded from the prerequisites and select **Save** to save the import. Do this for the **Azure AD Account Provisioning Events** view and the **Sign-ins Events** view. +1. Navigate to the [Azure portal](https://portal.azure.com) and select **All services**. +1. Type **Log Analytics** in the text box, and select **Log Analytics workspaces**. Select the workspace you routed the activity logs to, as part of the prerequisites. +1. Select **View Designer** > **Import** > **Choose File** to import the views from your local computer. +1. Select the views you downloaded from the prerequisites and select **Save** to save the import. Complete this step for the **Azure AD Account Provisioning Events** view and the **Sign-ins Events** view. ## Use the views -1. Navigate to your Log Analytics workspace. To do this, first navigate to the [Azure portal](https://portal.azure.com) and select **All services**. Type **Log Analytics** in the text box, and select **Log Analytics workspaces**. Select the workspace you routed the activity logs to, as part of the prerequisites. +1. Navigate to the [Azure portal](https://portal.azure.com) and select **All services**. +1. Type **Log Analytics** in the text box, and select **Log Analytics workspaces**. Select the workspace you routed the activity logs to, as part of the prerequisites. -2. Once you're in the workspace, select **Workspace Summary**. You should see the following three views: +1. Once you're in the workspace, select **Workspace Summary**. You should see the following three views: - * **Azure AD Account Provisioning Events**: This view shows reports related to auditing provisioning activity, such as the number of new users provisioned and provisioning failures, number of users updated and update failures and the number of users de-provisioned and corresponding failures. 
- * **Sign-ins Events**: This view shows the most relevant reports related to monitoring sign-in activity, such as sign-ins by application, user, device, as well as a summary view tracking the number of sign-ins over time. + * **Azure AD Account Provisioning Events**: This view shows reports related to auditing provisioning activity. Activities can include the number of new users provisioned, provisioning failures, number of users updated, update failures, the number of users de-provisioned and their corresponding failures. + * **Sign-ins Events**: This view shows the most relevant reports related to monitoring sign-in activity, such as sign-ins by application, user, device, and a summary view tracking the number of sign-ins over time. -3. Select either of these views to jump in to the individual reports. You can also set alerts on any of the report parameters. For example, let's set an alert for every time there's a sign-in error. To do this, first select the **Sign-ins Events** view, select **Sign-in errors over time** report and then select **Analytics** to open the details page, with the actual query behind the report. +1. Select either of these views to jump into the individual reports. You can also set alerts on any of the report parameters. For example, let's set an alert for every time there's a sign-in error. +1. Select **Sign-ins Events** > **Sign-in errors over time** > **Analytics** to open the details page with the actual query behind the report.  -4. Select **Set Alert**, and then select **Whenever the Custom log search is <logic undefined>** under the **Alert criteria** section. Since we want to alert whenever there's a sign-in error, set the **Threshold** of the default alert logic to **1** and then select **Done**. +1. Select **Set Alert**, and then select **Whenever the Custom log search is <logic undefined>** under the **Alert criteria** section. Since we want to alert whenever there's a sign-in error, set the **Threshold** of the default alert logic to **1** and then select **Done**.  -5. Enter a name and description for the alert and set the severity to **Warning**. +1. Enter a name and description for the alert and set the severity to **Warning**.  -6. Select the action group to alert. In general, this can be either a team you want to notify via email or text message, or it can be an automated task using webhooks, runbooks, functions, logic apps or external ITSM solutions. Learn how to [create and manage action groups in the Azure portal](../../azure-monitor/alerts/action-groups.md). +1. Select the action group to alert, such as a team you want to notify via email or text message. Learn how to [create and manage action groups in the Azure portal](../../azure-monitor/alerts/action-groups.md). -7. Select **Create alert rule** to create the alert. Now you will be alerted every time there's a sign-in error. +1. Select **Create alert rule** to create the alert. Now you'll be alerted every time there's a sign-in error. ## Next steps |
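The alert in the walkthrough above is driven by a sign-in errors query. To test an equivalent query outside View Designer, this sketch uses Az.OperationalInsights; `$workspaceId` is a placeholder, and the exact columns in your view may differ.

```powershell
Import-Module Az.OperationalInsights
Connect-AzAccount

$workspaceId = '77777777-7777-7777-7777-777777777777'   # placeholder workspace (customer) ID

# Sign-in errors over time: in SigninLogs, ResultType "0" means success.
$query = 'SigninLogs | where ResultType != "0" | summarize FailedSignIns = count() by bin(TimeGenerated, 1h) | order by TimeGenerated asc'
(Invoke-AzOperationalInsightsQuery -WorkspaceId $workspaceId -Query $query).Results
```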
active-directory | Howto Integrate Activity Logs With Arcsight | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-integrate-activity-logs-with-arcsight.md | Title: Integrate logs with ArcSight using Azure Monitor | Microsoft Docs description: Learn how to integrate Azure Active Directory logs with ArcSight using Azure Monitor -+ - Previously updated : 08/26/2022- Last updated : 10/31/2022+ In this article, you learn how to route Azure AD logs to ArcSight using Azure Mo To use this feature, you need: * An Azure event hub that contains Azure AD activity logs. Learn how to [stream your activity logs to an event hub](./tutorial-azure-monitor-stream-logs-to-event-hub.md). -* A configured instance of ArcSight Syslog NG Daemon SmartConnector (SmartConnector) or ArcSight Load Balancer. If the events are sent to ArcSight Load Balancer, they are consequently sent to the SmartConnector by the Load Balancer. +* A configured instance of ArcSight Syslog NG Daemon SmartConnector (SmartConnector) or ArcSight Load Balancer. If the events are sent to ArcSight Load Balancer, they're sent to the SmartConnector by the Load Balancer. -Download and open the [configuration guide for ArcSight SmartConnector for Azure Monitor Event Hub](https://community.microfocus.com/t5/ArcSight-Connectors/SmartConnector-for-Microsoft-Azure-Monitor-Event-Hub/ta-p/1671292). This guide contains the steps you need to install and configure the ArcSight SmartConnector for Azure Monitor. +Download and open the [configuration guide for ArcSight SmartConnector for Azure Monitor Event Hubs](https://community.microfocus.com/t5/ArcSight-Connectors/SmartConnector-for-Microsoft-Azure-Monitor-Event-Hub/ta-p/1671292). This guide contains the steps you need to install and configure the ArcSight SmartConnector for Azure Monitor. ## Integrate Azure AD logs with ArcSight Download and open the [configuration guide for ArcSight SmartConnector for Azure 2. Follow the steps in the **Deploying the Connector** section of the configuration guide to deploy the connector. This section walks you through how to download and extract the connector, configure application properties, and run the deployment script from the extracted folder. -3. Use the steps in the **Verifying the Deployment in Azure** to make sure the connector is set up and functions correctly. Verify the following: +3. Use the steps in the **Verifying the Deployment in Azure** section to make sure the connector is set up and functions correctly. Verify the following: * The requisite Azure functions are created in your Azure subscription. * The Azure AD logs are streamed to the correct destination. * The application settings from your deployment are persisted in the Application Settings in Azure Function Apps. * A new resource group for ArcSight is created in Azure, with an Azure AD application for the ArcSight connector and storage accounts containing the mapped files in CEF format. ++4. 
Finally, complete the post-deployment steps in the **Post-Deployment Configurations** section of the configuration guide. This section explains how to perform extra configuration if you're on an App Service Plan to prevent the function apps from going idle after a timeout period, configure streaming of resource logs from the event hub, and update the SysLog NG Daemon SmartConnector keystore certificate to associate it with the newly created storage account. -5. The configuration guide also explains how to customize the connector properties in Azure, and how to upgrade and uninstall the connector. There is also a section on performance improvements, including upgrading to an [Azure Consumption plan](https://azure.microsoft.com/pricing/details/functions) and configuring an ArcSight Load Balancer if the event load is greater than what a single Syslog NG Daemon SmartConnector can handle. +5. The configuration guide also explains how to customize the connector properties in Azure, and how to upgrade and uninstall the connector. There's also a section on performance improvements, including upgrading to an [Azure Consumption plan](https://azure.microsoft.com/pricing/details/functions) and configuring an ArcSight Load Balancer if the event load is greater than what a single Syslog NG Daemon SmartConnector can handle. ## Next steps -[Configuration guide for ArcSight SmartConnector for Azure Monitor Event Hub](https://community.microfocus.com/t5/ArcSight-Connectors/SmartConnector-for-Microsoft-Azure-Monitor-Event-Hub/ta-p/1671292) +[Configuration guide for ArcSight SmartConnector for Azure Monitor Event Hubs](https://community.microfocus.com/t5/ArcSight-Connectors/SmartConnector-for-Microsoft-Azure-Monitor-Event-Hub/ta-p/1671292) |
active-directory | Howto Integrate Activity Logs With Splunk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-integrate-activity-logs-with-splunk.md | Title: Integrate Splunk using Azure Monitor | Microsoft Docs description: Learn how to integrate Azure Active Directory logs with Splunk using Azure Monitor. -+ - Previously updated : 08/22/2022- Last updated : 10/31/2022+ |
active-directory | Howto Integrate Activity Logs With Sumologic | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-integrate-activity-logs-with-sumologic.md | Title: Stream logs to SumoLogic using Azure Monitor | Microsoft Docs description: Learn how to integrate Azure Active Directory logs with SumoLogic using Azure Monitor. -+ - Previously updated : 08/22/2022- Last updated : 10/31/2022+ |
active-directory | Howto Manage Inactive User Accounts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-manage-inactive-user-accounts.md | Title: How to manage inactive user accounts in Azure AD | Microsoft Docs description: Learn about how to detect and handle user accounts in Azure AD that have become obsolete -+ - Previously updated : 08/26/2022- Last updated : 10/31/2022+ |
active-directory | Howto Troubleshoot Sign In Errors | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-troubleshoot-sign-in-errors.md | Title: How to troubleshoot sign-in errors reports | Microsoft Docs description: Learn how to troubleshoot sign-in errors using Azure Active Directory reports in the Azure portal -+ - Previously updated : 08/26/2022- Last updated : 10/31/2022+ You need:  -4. Identify the failed sign-in you want to investigate. Select it to open up the additional details window with more information about the failed sign-in. Note down the **Sign-in error code** and **Failure reason**. +4. Identify the failed sign-in you want to investigate. Select it to open the details window with more information about the failed sign-in. Note down the **Sign-in error code** and **Failure reason**.  |
active-directory | Howto Use Azure Monitor Workbooks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-use-azure-monitor-workbooks.md | Title: Azure Monitor workbooks for reports | Microsoft Docs description: Learn how to use Azure Monitor workbooks for Azure Active Directory reports. -+ - Previously updated : 08/26/2022- Last updated : 10/31/2022+ # How to use Azure Monitor workbooks for Azure Active Directory reports As an IT admin, you need powerful tools to turn the data about your Azure AD ten This article gives you an overview of how you can use Azure Monitor workbooks for Azure Active Directory reports to analyze your Azure AD tenant. -## What it is +## What are Azure Monitor workbooks for Azure AD reports? Azure AD tracks all activities in your Azure AD in the activity logs. The data in your Azure AD logs enables you to assess how your Azure AD is doing. The Azure Active Directory portal gives you access to three activity logs: Azure AD tracks all activities in your Azure AD in the activity logs. The data i Using the access capabilities provided by the Azure portal, you can review the information that is tracked in your activity logs. This option is helpful if you need to do a quick investigation of an event with a limited scope. For example, a user had trouble signing in during a period of a few hours. In this scenario, reviewing the recent records of this user in the sign-in logs can help to shed light on this issue. -For one-off investigations with a limited scope, the Azure portal is often the easiest way to find the data you need. However, there are also business problems requiring a more complex analysis of the data in your activity logs. This is, for example, true if you're watching for trends in signals of interest. One common example for a scenario that requires a trend analysis is related to blocking legacy authentication in your Azure AD tenant. +For one-off investigations with a limited scope, the Azure portal is often the easiest way to find the data you need. However, there are also business problems requiring a more complex analysis of the data in your activity logs. One common example of a scenario that requires a trend analysis is related to blocking legacy authentication in your Azure AD tenant. Azure AD supports several of the most widely used authentication and authorization protocols including legacy authentication. Legacy authentication refers to basic authentication, a widely used industry-standard method for collecting user name and password information. Examples of applications that commonly or only use legacy authentication are: Azure AD supports several of the most widely used authentication and authorizati Typically, legacy authentication clients can't enforce any type of second factor authentication. However, multi-factor authentication (MFA) is a common requirement in many environments to provide a high level of protection. 
+How can you determine whether it's safe to block legacy authentication in an environment? Answering this question requires an analysis of the sign-ins in your environment for a certain timeframe, and Azure Monitor workbooks can help with that analysis. Workbooks provide a flexible canvas for data analysis and the creation of rich visual reports within the Azure portal. They allow you to tap into multiple data sources from across Azure, and combine them into unified interactive experiences. With Azure Monitor workbooks, you can: When working with workbooks, you can either start with an empty workbook, or use There are: -- **Public templates** published to a [gallery](../../azure-monitor/visualize/workbooks-overview.md#the-gallery) that serve as a good starting point when you are just getting started with workbooks.+- **Public templates** published to a [gallery](../../azure-monitor/visualize/workbooks-overview.md#the-gallery) that serve as a good starting point when you're just getting started with workbooks. - **Private templates** when you start building your own workbooks and want to save one as a template to serve as the foundation for multiple workbooks in your tenant. To use Monitor workbooks, you need: - A [Log Analytics workspace](../../azure-monitor/logs/quick-create-workspace.md). - [Access](../../azure-monitor/logs/manage-access.md#azure-rbac) to the log analytics workspace-- Following roles in Azure Active Directory (if you are accessing Log Analytics through Azure Active Directory portal)+- The following roles in Azure Active Directory (if you're accessing Log Analytics through the Azure Active Directory portal) - Security administrator - Security reader - Report reader |
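For the legacy authentication question this article raises, the sign-in logs already carry the data a workbook would chart. A hedged sketch that surfaces legacy-protocol sign-ins from a Log Analytics workspace; the `ClientAppUsed` values reflect common Azure AD client-app labels, and `$workspaceId` is a placeholder, so verify both against your own logs.

```powershell
Import-Module Az.OperationalInsights
Connect-AzAccount

$workspaceId = '88888888-8888-8888-8888-888888888888'   # placeholder workspace (customer) ID

# Sign-ins not made from a browser or a modern client are legacy-auth candidates.
$query = @'
SigninLogs
| where ClientAppUsed !in ("Browser", "Mobile Apps and Desktop clients")
| summarize SignIns = count() by ClientAppUsed
| order by SignIns desc
'@
(Invoke-AzOperationalInsightsQuery -WorkspaceId $workspaceId -Query $query).Results
```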
active-directory | Reference Audit Activities | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/reference-audit-activities.md | Title: Azure Active Directory (Azure AD) audit activity reference | Microsoft Docs description: Get an overview of the audit activities that can be logged in your audit logs in Azure Active Directory (Azure AD). -+ - Previously updated : 08/26/2022- Last updated : 10/28/2022+ The reporting architecture in Azure AD consists of the following components: - [Audit logs](concept-audit-logs.md) - Provides traceability through logs for all changes done by various features within Azure AD. - **Security reports** - - [Risky sign-ins](../identity-protection/overview-identity-protection.md) - A risky sign-in is an indicator for a sign-in attempt that might have been performed by someone who is not the legitimate owner of a user account. + - [Risky sign-ins](../identity-protection/overview-identity-protection.md) - A risky sign-in is an indicator for a sign-in attempt that might have been performed by someone who isn't the legitimate owner of a user account. - [Users flagged for risk](../identity-protection/overview-identity-protection.md) - A risky user is an indicator for a user account that might have been compromised. -This articles lists the audit activities that can be logged in your audit logs. +This article lists the audit activities that can be logged in your audit logs. ## Access reviews This articles lists the audit activities that can be logged in your audit logs. |Access Reviews|Remove reviewer from access review| |Access Reviews|Request Stop Review| |Access Reviews|Request apply review result|-|Access Reviews|Review Rbac Role membership| +|Access Reviews|Review RBAC Role membership| |Access Reviews|Review app assignment| |Access Reviews|Review group membership| |Access Reviews|Review request approval request| This articles lists the audit activities that can be logged in your audit logs. |Authentication|Create IdentityProvider| |Authentication|Create V1 application| |Authentication|Create V2 application|-|Authentication|Create a custom domains in the tenant| +|Authentication|Create a custom domain in the tenant| |Authorization|Create a new AdminUserJourney| |Authorization|Create localized resource json| |Authorization|Create new Custom IDP| This articles lists the audit activities that can be logged in your audit logs. |Authorization|Update policy| |Authorization|Update user attribute| |Authorization|Upload a CPIM encrypted key|-|Authorization|User Authorization: API is disabled for tenant featureset| +|Authorization|User Authorization: API is disabled for tenant feature set| |Authorization|User Authorization: User granted access as 'Tenant Admin'| |Authorization|User Authorization: User was granted 'Authenticated Users' access rights| |Authorization|Verify if B2C feature is enabled| This articles lists the audit activities that can be logged in your audit logs. 
|Authorization|Onboard to Azure AD Access Reviews| |Authorization|Unlink program control| |Authorization|Update program|-|Authorization|Disable Desktop Sso| -|Authorization|Disable Desktop Sso for a specific domain| +|Authorization|Disable Desktop SSO| +|Authorization|Disable Desktop SSO for a specific domain| |Authorization|Disable application proxy| |Authorization|Disable passthrough authentication|-|Authorization|Enable Desktop Sso| -|Directory Management|Enable Desktop Sso for a specific domain| +|Authorization|Enable Desktop SSO| +|Directory Management|Enable Desktop SSO for a specific domain| |Directory Management|Enable application proxy| |Directory Management|Enable passthrough authentication| |Directory Management|Create a custom domains in the tenant| |
active-directory | Reference Azure Ad Sla Performance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/reference-azure-ad-sla-performance.md | Title: Azure Active Directory SLA performance | Microsoft Docs description: Learn about the Azure AD SLA performance - Previously updated : 09/08/2022 Last updated : 10/31/2022 -As an identity admin, you may need to track Azure AD's service-level agreement (SLA) performance to make sure Azure AD can support your vital apps. This article shows how the Azure AD service has performed according to the [SLA for Azure Active Directory (Azure AD)](https://azure.microsoft.com/support/legal/sla/active-directory/v1_1/). +As an identity admin, you may need to track the Azure Active Directory (Azure AD) service-level agreement (SLA) performance to make sure Azure AD can support your vital apps. This article shows how the Azure AD service has performed according to the [SLA for Azure Active Directory (Azure AD)](https://azure.microsoft.com/support/legal/sla/active-directory/v1_1/). You can use this article in discussions with app or business owners to help them understand the performance they can expect from Azure AD. - ## Service availability commitment Microsoft offers Premium Azure AD customers the opportunity to get a service credit if Azure AD fails to meet the documented SLA. When you request a service credit, Microsoft evaluates the SLA for your specific tenant; however, this global SLA can give you an indication of the general health of Azure AD over time. The SLA covers the following scenarios that are vital to businesses: -- **User authentication:** Users are able to login to the Azure Active Directory service.+- **User authentication:** Users are able to sign in to the Azure AD service. -- **App access:** Azure Active Directory successfully emits the authentication and authorization tokens required for users to log into applications connected to the service.+- **App access:** Azure AD successfully emits the authentication and authorization tokens required for users to sign in to applications connected to the service. For full details on SLA coverage and instructions on requesting a service credit, see the [SLA for Azure Active Directory (Azure AD)](https://azure.microsoft.com/support/legal/sla/active-directory/v1_1/). You rely on Azure AD to provide identity and access management for your vital sy To help you plan for moving workloads to Azure AD, we publish past SLA performance. These numbers show the level at which Azure AD met the requirements in the [SLA for Azure Active Directory (Azure AD)](https://azure.microsoft.com/support/legal/sla/active-directory/v1_1/), for all tenants. -For each month, we truncate the SLA attainment at three places after the decimal. Numbers are not rounded up, so actual SLA attainment is higher than indicated. -+The SLA attainment is truncated at three places after the decimal. Numbers are not rounded up, so actual SLA attainment is higher than indicated.
| Month | 2021 | 2022 |
| --- | --- | --- |
| November | 99.998% | |
| December | 99.978% | |
-- ### How is Azure AD SLA measured? The Azure AD SLA is measured in a way that reflects customer authentication experience, rather than simply reporting on whether the system is available to outside connections. 
This means that the calculation is based on whether: The Azure AD SLA is measured in a way that reflects customer authentication expe The numbers above are a global total of Azure AD authentications across all customers and geographies. - ## Incident history All incidents that seriously impact Azure AD performance are documented in the [Azure status history](https://azure.status.microsoft/status/history/). Not all events documented in Azure status history are serious enough to cause Azure AD to go below its SLA. You can view information about the impact of incidents, as well as a root cause analysis of what caused the incident and what steps Microsoft took to prevent future incidents. - -- ## Next steps * [Azure AD reports overview](overview-reports.md) |
active-directory | Reference Azure Monitor Sign Ins Log Schema | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/reference-azure-monitor-sign-ins-log-schema.md | Title: Sign-in log schema in Azure Monitor | Microsoft Docs -description: Describe the Azure AD sign in log schema for use in Azure Monitor +description: Describe the Azure AD sign-in log schema for use in Azure Monitor -+ - Previously updated : 08/26/2022- Last updated : 10/31/2022+ -This article describes the Azure Active Directory (Azure AD) sign-in log schema in Azure Monitor. Most of the information that's related to sign-ins is provided under the *Properties* attribute of the `records` object. +This article describes the Azure Active Directory (Azure AD) sign-in log schema in Azure Monitor. Information related to sign-ins is provided under the *Properties* attribute of the `records` object. ```json This article describes the Azure Active Directory (Azure AD) sign-in log schema | ResultDescription | N/A or blank | Provides the error description for the sign-in operation. | | riskDetail | riskDetail | Provides the 'reason' behind a specific state of a risky user, sign-in or a risk detection. The possible values are: `none`, `adminGeneratedTemporaryPassword`, `userPerformedSecuredPasswordChange`, `userPerformedSecuredPasswordReset`, `adminConfirmedSigninSafe`, `aiConfirmedSigninSafe`, `userPassedMFADrivenByRiskBasedPolicy`, `adminDismissedAllRiskForUser`, `adminConfirmedSigninCompromised`, `unknownFutureValue`. The value `none` means that no action has been performed on the user or sign-in so far. <br>**Note:** Details for this property require an Azure AD Premium P2 license. Other licenses return the value `hidden`. | | riskEventTypes | riskEventTypes | Risk detection types associated with the sign-in. The possible values are: `unlikelyTravel`, `anonymizedIPAddress`, `maliciousIPAddress`, `unfamiliarFeatures`, `malwareInfectedIPAddress`, `suspiciousIPAddress`, `leakedCredentials`, `investigationsThreatIntelligence`, `generic`, and `unknownFutureValue`. |-| authProcessingDetails | Azure AD app authentication library | Contains Family, Library, and Platform information in format: "Family: ADAL Library: ADAL.JS 1.0.0 Platform: JS" | +| authProcessingDetails | Azure AD app authentication library | Contains Family, Library, and Platform information in format: "Family: Microsoft Authentication Library: ADAL.JS 1.0.0 Platform: JS" | | authProcessingDetails | IsCAEToken | Values are True or False |-| riskLevelAggregated | riskLevel | Aggregated risk level. The possible values are: `none`, `low`, `medium`, `high`, `hidden`, and `unknownFutureValue`. The value `hidden` means the user or sign-in was not enabled for Azure AD Identity Protection. **Note:** Details for this property are only available for Azure AD Premium P2 customers. All other customers will be returned `hidden`. | -| riskLevelDuringSignIn | riskLevel | Risk level during sign-in. The possible values are: `none`, `low`, `medium`, `high`, `hidden`, and `unknownFutureValue`. The value `hidden` means the user or sign-in was not enabled for Azure AD Identity Protection. **Note:** Details for this property are only available for Azure AD Premium P2 customers. All other customers will be returned `hidden`. | +| riskLevelAggregated | riskLevel | Aggregated risk level. The possible values are: `none`, `low`, `medium`, `high`, `hidden`, and `unknownFutureValue`. 
The value `hidden` means the user or sign-in wasn't enabled for Azure AD Identity Protection. **Note:** Details for this property are only available for Azure AD Premium P2 customers. All other customers will be returned `hidden`. | +| riskLevelDuringSignIn | riskLevel | Risk level during sign-in. The possible values are: `none`, `low`, `medium`, `high`, `hidden`, and `unknownFutureValue`. The value `hidden` means the user or sign-in wasn't enabled for Azure AD Identity Protection. **Note:** Details for this property are only available for Azure AD Premium P2 customers. All other customers will be returned `hidden`. | | riskState | riskState | Reports status of the risky user, sign-in, or a risk detection. The possible values are: `none`, `confirmedSafe`, `remediated`, `dismissed`, `atRisk`, `confirmedCompromised`, `unknownFutureValue`. | | DurationMs | - | This value is unmapped, and you can safely ignore this field. | | CallerIpAddress | - | The IP address of the client that made the request. | |
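The risk properties described above are easiest to sanity-check with a live query. A minimal sketch (an illustration, not part of the original article) that pulls recent high-risk sign-ins through Microsoft Graph with the Azure CLI; remember that `riskLevelDuringSignIn` comes back as `hidden` without an Azure AD Premium P2 license:

```azurecli-interactive
# Hypothetical example: list up to five sign-ins whose risk level was 'high'.
# Requires a caller with permission to read the audit logs (AuditLog.Read.All).
az rest --method get \
  --url "https://graph.microsoft.com/v1.0/auditLogs/signIns?\$filter=riskLevelDuringSignIn eq 'high'&\$top=5"
```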
active-directory | Reference Basic Info Sign In Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/reference-basic-info-sign-in-logs.md | Title: Basic info in the Azure AD sign-in logs | Microsoft Docs description: Learn what the basic info in the sign-in logs is about. -+ - Previously updated : 08/26/2022- Last updated : 10/28/2022+ -Azure AD logs all sign-ins into an Azure tenant for compliance. As an IT administrator, you need to know what the values in the sign-in logs mean, so that you can interpret the log values correctly. +Azure AD logs all sign-ins into an Azure tenant for compliance. As an IT administrator, you need to know what the values in the sign-in logs mean, so that you can interpret the log values correctly. [Learn how to access, view, and analyze Azure AD sign-in logs](concept-sign-ins.md) This article explains the values on the Basic info tab of the sign-ins log. In Azure AD, a resource access has three relevant components: - **How** – The client (Application) used for the access. - **What** – The target (Resource) accessed by the identity. - Each component has an associated unique identifier (ID). Below is an example of a user using the Microsoft Azure classic deployment model to access the Azure portal. -### Tenant identifiers +### Tenant The sign-in log tracks two tenant identifiers: The request ID is an identifier that corresponds to an issued token. If you are The correlation ID groups sign-ins from the same sign-in session. The identifier was implemented for convenience. Its accuracy is not guaranteed because the value is based on parameters passed by a client. +### Sign-in -## Sign-in identifier --The sign-in identifier is a string the user provides to Azure AD to identify itself when attempting to sign-in. It's usually a UPN, but can be another identifier such as a phone number. +The sign-in identifier is a string the user provides to Azure AD to identify themselves when attempting to sign in. It's usually a user principal name (UPN), but can be another identifier such as a phone number. +### Authentication requirement -## Authentication requirement +This attribute shows the highest level of authentication needed through all the sign-in steps for the sign-in to succeed. The Graph API supports `$filter` (`eq` and `startsWith` operators only). -This attribute shows the highest level of authentication needed through all the sign-in steps for the sign-in to succeed. In the Graph API, supports `$filter` (`eq` and `startsWith` operators only). --## Sign-in event types +### Sign-in event types Indicates the category the sign-in event represents. For user sign-ins, the category can be `interactiveUser` or `nonInteractiveUser` and corresponds to the value for the **isInteractive** property on the sign-in resource. For managed identity sign-ins, the category is `managedIdentity`. For service principal sign-ins, the category is **servicePrincipal**. The Azure portal doesn't show this value, but the sign-in event is placed in the tab that matches its sign-in event type. Possible values are: Indicates the category of the sign in the event represents. For user sign-ins, t The Microsoft Graph API supports `$filter` (`eq` operator only). -## User type +### User type The type of a user. Examples include `member`, `guest`, or `external`. -## Cross-tenant access type +### Cross-tenant access type This attribute describes the type of cross-tenant access used by the actor to access the resource. 
Possible values are: This attribute describes the type of cross-tenant access used by the actor to ac If the sign-in didn't pass the boundaries of a tenant, the value is `none`. -## Conditional access evaluation +### Conditional access evaluation This value shows whether continuous access evaluation (CAE) was applied to the sign-in event. There are multiple sign-in requests for each authentication. Some are shown on the interactive tab, while others are shown on the non-interactive tab. CAE is only displayed as true for one of the requests, and it can be on the interactive tab or non-interactive tab. For more information, see [Monitor and troubleshoot sign-ins with continuous access evaluation in Azure AD](../conditional-access/howto-continuous-access-evaluation-troubleshoot.md). ------------------------------ ## Next steps -* [Sign-in logs in Azure Active Directory](concept-sign-ins.md) -* [What is the sign-in diagnostic in Azure AD?](overview-sign-in-diagnostics.md) +* [Learn about exporting Azure AD sign-in logs](concept-activity-logs-azure-monitor.md) +* [Explore the sign-in diagnostic in Azure AD](overview-sign-in-diagnostics.md) |
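As a companion to the sign-in event types described above, a hedged sketch of the `$filter` support the article mentions. The `signInEventTypes` filter uses a lambda expression and, at the time of writing, is exposed on the Graph beta endpoint, so treat the exact URL as an assumption:

```azurecli-interactive
# Hypothetical example: fetch non-interactive user sign-ins only.
az rest --method get \
  --url "https://graph.microsoft.com/beta/auditLogs/signIns?\$filter=signInEventTypes/any(t: t eq 'nonInteractiveUser')"
```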
active-directory | Reference Powershell Reporting | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/reference-powershell-reporting.md | Title: Azure AD PowerShell cmdlets for reporting | Microsoft Docs description: Reference of the Azure AD PowerShell cmdlets for reporting. -+ - Previously updated : 08/26/2022- Last updated : 10/31/2022+ -To install the public preview release, use the following. +To install the public preview release, use the following: ```powershell Install-module AzureADPreview ``` -For more information on how to connect to Azure AD using PowerShell, please see the article [Azure AD PowerShell for Graph](/powershell/azure/active-directory/install-adv2). +For more information on how to connect to Azure AD using PowerShell, see the article [Azure AD PowerShell for Graph](/powershell/azure/active-directory/install-adv2). With Azure Active Directory (Azure AD) reports, you can get details on activities around all the write operations in your directory (audit logs) and authentication data (sign-in logs). Although the information is available by using the MS Graph API, now you can retrieve the same data by using the Azure AD PowerShell cmdlets for reporting. |
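To make the paragraph above concrete, a short sketch using the reporting cmdlets from the AzureADPreview module installed earlier; the cutoff date is a placeholder:

```powershell
# Sign in, then retrieve report data with the AzureADPreview reporting cmdlets.
Connect-AzureAD

# Directory audit events (write operations) since a given date
Get-AzureADAuditDirectoryLogs -Filter "activityDateTime gt 2022-10-01"

# Sign-in events since the same date
Get-AzureADAuditSignInLogs -Filter "createdDateTime gt 2022-10-01"
```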
active-directory | Reference Reports Data Retention | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/reference-reports-data-retention.md | Title: How long does Azure AD store reporting data? | Microsoft Docs description: Learn how long Azure stores the various types of reporting data. documentationcenter: ''-+ - Previously updated : 08/26/2022- Last updated : 10/31/2022+ -In this article, you learn about the data retention policies for the different activity reports in Azure Active Directory. +In this article, you learn about the data retention policies for the different activity reports in Azure Active Directory (Azure AD). ### When does Azure AD start collecting data? | Azure AD Edition | Collection Start | | :-- | :-- | | Azure AD Premium P1 <br /> Azure AD Premium P2 | When you sign up for a subscription |-| Azure AD Free| The first time you open the [Azure Active Directory blade](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/Overview) or use the [reporting APIs](./overview-reports.md) | +| Azure AD Free| The first time you open [Azure Active Directory](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/Overview) or use the [reporting APIs](./overview-reports.md) | You can retain the audit and sign-in activity data for longer than the default r ### Can I see last month's data after getting an Azure AD premium license? -**No**, you can't. Azure stores up to seven days of activity data for a free version. This means, when you switch from a free to a premium version, you can only see up to 7 days of data. +**No**, you can't. Azure stores up to seven days of activity data for a free version. When you switch from a free to a premium version, you can only see up to 7 days of data. |
active-directory | Reference Reports Latencies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/reference-reports-latencies.md | Title: Azure Active Directory reporting latencies | Microsoft Docs description: Learn about the amount of time it takes for reporting events to show up in your Azure portal -+ - Previously updated : 08/26/2022- Last updated : 10/31/2022+ If you already have activities data with your free license, then you can see it There are two types of security reports: -- [Risky sign-ins](../identity-protection/overview-identity-protection.md) - A risky sign-in is an indicator for a sign-in attempt that might have been performed by someone who is not the legitimate owner of a user account. +- [Risky sign-ins](../identity-protection/overview-identity-protection.md) - A risky sign-in is an indicator for a sign-in attempt that might have been performed by someone who isn't the legitimate owner of a user account. - [Users flagged for risk](../identity-protection/overview-identity-protection.md) - A risky user is an indicator for a user account that might have been compromised. The following table lists the latency information for security reports. |
active-directory | Troubleshoot Audit Data Verified Domain | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/troubleshoot-audit-data-verified-domain.md | Title: 'Troubleshoot audit data of verified domain change | Microsoft Docs' description: Provides you with information that will appear in the Azure Active Directory activity logs when you change a user's verified domain. -+ Previously updated : 08/26/2022- Last updated : 11/01/2022+ # Troubleshoot: Audit data on verified domain change -## I have a lot of changes to my users and I am not sure what the cause of it is. +## I have a lot of changes to my users and I'm not sure what the cause of it is. ### Symptoms -I check the Azure AD audit logs, and see multiple user updates occurring in my Azure AD tenant. These **Update User** events do not display **Actor** information, which causes uncertainty as to what/who triggered the mass changes to users. +I check the Azure AD audit logs, and see multiple user updates occurring in my Azure AD tenant. These **Update User** events don't display **Actor** information, which causes uncertainty as to what/who triggered the mass changes to users. ### Cause - A common reason behind mass object changes is a non-synchronous backend operation called **ProxyCalc**. **ProxyCalc** is the logic that determines the appropriate **UserPrincipalName** and **Proxy Addresses**, that are updated in Azure AD users, groups or contacts. The design behind **ProxyCalc** is to ensure that all **UserPrincipalName** and **Proxy Addresses** are consistent in Azure AD at any time. **ProxyCalc** must be triggered by an explicit change like a verified domain change and does not perpetually run in the background as a task. + A common reason behind mass object changes is a non-synchronous backend operation called **ProxyCalc**. **ProxyCalc** is the logic that determines the appropriate **UserPrincipalName** and **Proxy Addresses** that are updated in Azure AD users, groups, or contacts. The design behind **ProxyCalc** is to ensure that all **UserPrincipalName** and **Proxy Addresses** are consistent in Azure AD at any time. **ProxyCalc** must be triggered by an explicit change like a verified domain change and doesn't perpetually run in the background as a task. One of the admin tasks that could trigger **ProxyCalc** is whenever there's a For example, if you add a verified domain Fabrikam.com to your Contoso.onmicrosoft.com tenant, this action will trigger a ProxyCalc operation on all objects in the tenant. This event will be captured in the Azure AD Audit logs as **Update User** events preceded by an **Add verified domain** event. On the other hand, if Fabrikam.com was removed from the Contoso.onmicrosoft.com tenant, then all the **Update User** events will be preceded by a **Remove verified domain** event. -#### Additional notes: +#### Notes: -ProxyCalc does not cause changes to certain objects that: +ProxyCalc doesn't cause changes to certain objects that: -- do not have an active Exchange license +- don't have an active Exchange license - have **MSExchRemoteRecipientType** set to Null -- are not considered a shared resource. 
Shared Resource is when **CloudMSExchRecipientDisplayType** contains one of the following values: **MailboxUser (shared)**, **PublicFolder**, **ConferenceRoomMailbox**, **EquipmentMailbox**, **ArbitrationMailbox**, **RoomList**, **TeamMailboxUser**, **Group mailbox**, **Scheduling mailbox**, **ACLableMailboxUser**, **ACLableTeamMailboxUser** +- aren't considered a shared resource. Shared Resource is when **CloudMSExchRecipientDisplayType** contains one of the following values: **MailboxUser (shared)**, **PublicFolder**, **ConferenceRoomMailbox**, **EquipmentMailbox**, **ArbitrationMailbox**, **RoomList**, **TeamMailboxUser**, **Group mailbox**, **Scheduling mailbox**, **ACLableMailboxUser**, **ACLableTeamMailboxUser** In order to build more correlation between these two disparate events, Microsoft is working on updating the **Actor** info in the audit logs to identify these changes as triggered by a verified domain change. This action will help check when the verified domain change event took place and started to mass update the objects in their tenant. -Additionally, in most cases, there are no changes to users as their **UserPrincipalName** and **Proxy Addresses** are consistent, so we are working to display in Audit Logs only those updates that caused an actual change to the object. This action will prevent noise in the audit logs and help admins correlate the remaining user changes to verified domain change event as explained above. +Additionally, in most cases, there are no changes to users as their **UserPrincipalName** and **Proxy Addresses** are consistent, so we're working to display in Audit Logs only those updates that caused an actual change to the object. This action will prevent noise in the audit logs and help admins correlate the remaining user changes to verified domain change event as explained above. ## Next Steps |
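One way to perform the correlation described above today, while the **Actor** improvements are still in progress, is to query the audit logs for the verified-domain event directly. A sketch (not from the article) using Microsoft Graph via the Azure CLI:

```azurecli-interactive
# Hypothetical example: find the 'Add verified domain' event, then compare its
# timestamp against the flood of 'Update User' events that followed it.
az rest --method get \
  --url "https://graph.microsoft.com/v1.0/auditLogs/directoryAudits?\$filter=activityDisplayName eq 'Add verified domain'"
```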
active-directory | Troubleshoot Graph Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/troubleshoot-graph-api.md | Title: 'Troubleshoot errors in Azure Active Directory reporting API | Microsoft Docs' description: Provides you with a resolution to errors while calling Azure Active Directory Reporting APIs. -+ - Previously updated : 08/26/2022- Last updated : 11/01/2022+ Accessing sign-in reports requires an Azure Active Directory premium 1 (P1) lice If you see this error message while trying to access audit logs or sign-ins using the API, make sure that your account is part of the **Security Reader** or **Report Reader** role in your Azure Active Directory tenant. -### Error: Application missing AAD 'Read directory data' permission +### Error: Application missing Azure AD 'Read directory data' permission Follow the steps in the [Prerequisites to access the Azure Active Directory reporting API](howto-configure-prerequisites-for-reporting-api.md) to ensure your application is running with the right set of permissions. |
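A quick, hedged way to test the prerequisites above end to end is to call the reporting API once and inspect the result; an HTTP 403 usually points at the missing role or 'Read directory data' permission discussed in this article (a sketch, not part of the original article):

```azurecli-interactive
# Request a single audit record; success confirms the license, role, and
# application permissions are all in place.
az rest --method get \
  --url "https://graph.microsoft.com/v1.0/auditLogs/directoryAudits?\$top=1"
```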
active-directory | Troubleshoot Missing Audit Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/troubleshoot-missing-audit-data.md | Title: 'Troubleshoot Missing data in activity logs | Microsoft Docs' description: Provides you with a resolution to missing data in Azure Active Directory activity logs. -+ - Previously updated : 08/26/2022- Last updated : 11/01/2022+ -I performed some actions in the Azure portal and expected to see the audit logs for those actions in the `Activity logs > Audit Logs` blade, but I can't find them. +I performed some actions in the Azure portal and expected to see the audit logs for those actions in the `Activity logs > Audit Logs`, but I can't find them. Actions don't appear immediately in the activity logs. The table below enumera ### Resolution -Wait for 15 minutes to two hours and see if the actions appear in the log. If you don't see the logs even after two hours, please [file a support ticket](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest) and we will look into it. +Wait for 15 minutes to two hours and see if the actions appear in the log. If you don't see the logs even after two hours, [file a support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest), and we'll look into it. ## I can't find recent user sign-ins in the Azure Active Directory sign-ins activity log ### Symptoms -I recently signed into the Azure portal and expected to see the sign-in logs for those actions in the `Activity logs > Sign-ins` blade, but I can't find them. +I recently signed into the Azure portal and expected to see the sign-in logs for those actions in the `Activity logs > Sign-ins`, but I can't find them. Actions don't appear immediately in the activity logs. The table below enumera ### Resolution -Wait for 15 minutes to two hours and see if the actions appear in the log. If you don't see the logs even after two hours, please [file a support ticket](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest) and we will look into it. +Wait for 15 minutes to two hours and see if the actions appear in the log. If you don't see the logs even after two hours, [file a support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest), and we'll look into it. ## I can't view more than 30 days of report data in the Azure portal Depending on your license, Azure Active Directory Actions stores activity report | Report | Azure AD Free | Azure AD Premium P1 | Azure AD Premium P2 | | | | | | | Directory Audit | 7 days | 30 days | 30 days |-| Sign-in Activity | Not available. You can access your own sign-ins for 7 days from the individual user profile blade | 30 days | 30 days | +| Sign-in Activity | Not available. You can access your own sign-ins for 7 days from the individual user profile | 30 days | 30 days | For more information, see [Azure Active Directory report retention policies](reference-reports-data-retention.md). |
active-directory | Troubleshoot Missing Data Download | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/troubleshoot-missing-data-download.md | Title: 'Troubleshooting: Missing data in the downloaded activity logs | Microsoft Docs' description: Provides you with a resolution to missing data in downloaded Azure Active Directory activity logs. -+ - Previously updated : 08/26/2022- Last updated : 11/01/2022+ When you download activity logs in the Azure portal, we limit the scale to 250,0 ## Resolution -You can leverage [Azure AD Reporting APIs](concept-reporting-api.md) to fetch up to a million records at any given point. +You can use [Azure AD Reporting APIs](concept-reporting-api.md) to fetch up to a million records at any given point. ## Next steps |
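A minimal sketch of the API-based workaround described above; the reporting endpoints return results in pages, and the full set is retrieved by following `@odata.nextLink` (the query below is an illustration, not from the article):

```azurecli-interactive
# First page of audit logs; repeat the request against the returned
# @odata.nextLink URL until the response no longer includes one.
az rest --method get \
  --url "https://graph.microsoft.com/v1.0/auditLogs/directoryAudits" \
  --query "\"@odata.nextLink\"" --output tsv
```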
active-directory | Workbook Authentication Prompts Analysis | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/workbook-authentication-prompts-analysis.md | Title: Authentication prompts analysis workbook in Azure AD | Microsoft Docs description: Learn how to use the authentication prompts analysis workbook. -+ - Previously updated : 08/26/2022- Last updated : 11/01/2022+ This article provides you with an overview of this workbook. Have you recently heard of complaints from your users about getting too many authentication prompts? -Over prompting users impacts your user's productivity and often leads users getting phished for MFA. To be clear, MFA is essential! We are not talking about if you should require MFA but how frequently you should prompt your users. +Overprompting users can affect your user's productivity and often leads to users getting phished for MFA. To be clear, MFA is essential! We are not talking about whether you should require MFA, but how frequently you should prompt your users. Typically, this scenario is caused by: In many environments, the most used apps are business productivity apps. Anythin -The prompts by application list-view shows additional information such as timestamps, and request IDs that help with investigations. +The prompts by application list view shows additional information such as timestamps and request IDs that help with investigations. Additionally, you get a summary of the average and median prompts count for your tenant. Filtering for a specific user that has many authentication requests or only show ## Best practices -If data isn't showing up or seems to be showing up incorrectly, please confirm that you have set the **Log Analytics Workspace** and **Subscriptions** on the proper resources. +If data isn't showing up or seems to be showing up incorrectly, confirm that you have set the **Log Analytics Workspace** and **Subscriptions** on the proper resources. If the visuals are taking too much time to load, try reducing the Time filter to ## Next steps -- To understand more about the different policies that impact MFA prompts, see [Optimize reauthentication prompts and understand session lifetime for Azure AD Multi-Factor Authentication](../authentication/concepts-azure-multi-factor-authentication-prompts-session-lifetime.md). +- To understand more about the different policies that affect MFA prompts, see [Optimize reauthentication prompts and understand session lifetime for Azure AD Multi-Factor Authentication](../authentication/concepts-azure-multi-factor-authentication-prompts-session-lifetime.md). -- To learn more about the different vulnerabilities of different MFA methods, see [All your creds are belong to us!](https://aka.ms/allyourcreds).+- To learn more about the different vulnerabilities of different MFA methods, see [All your creds belong to us!](https://aka.ms/allyourcreds). - To learn how to move users from telecom-based methods to the Authenticator app, see [How to run a registration campaign to set up Microsoft Authenticator - Microsoft Authenticator app](../authentication/how-to-mfa-registration-campaign.md). |
active-directory | Workbook Conditional Access Gap Analyzer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/workbook-conditional-access-gap-analyzer.md | Title: Conditional access gap analyzer workbook in Azure AD | Microsoft Docs description: Learn how to use the conditional access gap analyzer workbook. -+ - Previously updated : 08/26/2022- Last updated : 11/01/2022+ The workbook has four sections: - Users signing in using legacy authentication -- Number of sign-ins by applications that are not impacted by conditional access policies +- Number of sign-ins by applications that aren't impacted by conditional access policies - High risk sign-in events bypassing conditional access policies -- Number of sign-ins by location that were not affected by conditional access policies +- Number of sign-ins by location that weren't affected by conditional access policies |
active-directory | Workbook Cross Tenant Access Activity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/workbook-cross-tenant-access-activity.md | Title: Cross-tenant access activity workbook in Azure AD | Microsoft Docs description: Learn how to use the cross-tenant access activity workbook. -+ - Previously updated : 08/26/2022- Last updated : 11/01/2022+ |
active-directory | Workbook Legacy Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/workbook-legacy authentication.md | Title: Sign-ins using legacy authentication workbook in Azure AD | Microsoft Docs description: Learn how to use the sign-ins using legacy authentication workbook. -+ - Previously updated : 08/26/2022- Last updated : 11/01/2022+ -Have you ever wondered how you can determine whether it is safe to turn off legacy authentication in your tenant? The sign-ins using legacy authentication workbook helps you to answer this question. +Have you ever wondered how you can determine whether it's safe to turn off legacy authentication in your tenant? The sign-ins using legacy authentication workbook helps you to answer this question. This article gives you an overview of this workbook. Examples of applications that commonly or only use legacy authentication are: - Apps using legacy auth with mail protocols like POP, IMAP, and SMTP AUTH. -Single-factor authentication (for example, username and password) doesn't provide the required level of protection for today's computing environments. Passwords are bad as they are easy to guess and humans are bad at choosing good passwords. +Single-factor authentication (for example, username and password) doesn't provide the required level of protection for today's computing environments. Passwords are bad as they're easy to guess and humans are bad at choosing good passwords. Unfortunately, legacy authentication: -- Does not support multi-factor authentication (MFA) or other strong authentication methods.+- Doesn't support multi-factor authentication (MFA) or other strong authentication methods. - Makes it impossible for your organization to move to passwordless authentication. This workbook supports multiple filters: - Many email protocols that once relied on legacy authentication now support more secure modern authentication methods. If you see legacy email authentication protocols in this workbook, consider migrating to modern authentication for email instead. For more information, see [Deprecation of Basic authentication in Exchange Online](/exchange/clients-and-mobile-in-exchange-online/deprecation-of-basic-authentication-exchange-online). -- Some clients can use both legacy authentication or modern authentication depending on client configuration. If you see "modern mobile/desktop client" or "browser" for a client in the Azure AD logs, it is using modern authentication. If it has a specific client or protocol name, such as "Exchange ActiveSync", it is using legacy authentication to connect to Azure AD. The client types in conditional access, and the Azure AD reporting page in the Azure portal demarcate modern authentication clients and legacy authentication clients for you, and only legacy authentication is captured in this workbook. +- Some clients can use both legacy authentication or modern authentication depending on client configuration. If you see "modern mobile/desktop client" or "browser" for a client in the Azure AD logs, it's using modern authentication. If it has a specific client or protocol name, such as "Exchange ActiveSync", it's using legacy authentication to connect to Azure AD. The client types in conditional access, and the Azure AD reporting page in the Azure portal demarcate modern authentication clients and legacy authentication clients for you, and only legacy authentication is captured in this workbook. ## Next steps |
active-directory | Workbook Risk Analysis | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/workbook-risk-analysis.md | Title: Identity protection risk analysis workbook in Azure AD | Microsoft Docs description: Learn how to use the identity protection risk analysis workbook. -+ - Previously updated : 08/26/2022- Last updated : 11/01/2022+ |
active-directory | Workbook Sensitive Operations Report | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/workbook-sensitive-operations-report.md | Title: Sensitive operations report workbook in Azure AD | Microsoft Docs description: Learn how to use the sensitive operations report workbook. -+ - Previously updated : 08/26/2022- Last updated : 11/01/2022+ -As an It administrator, you need to be able to identify compromises in your environment to ensure that you can keep it in a healthy state. +As an IT administrator, you need to be able to identify compromises in your environment to ensure that you can keep it in a healthy state. The sensitive operations report workbook is intended to help identify suspicious application and service principal activity that may indicate compromises in your environment. This article provides you with an overview of this workbook. This workbook identifies recent sensitive operations that have been performed in your tenant and which may indicate service principal compromise. -If your organization is new to Azure monitor workbooks, you need to integrate your Azure AD sign-in and audit logs with Azure Monitor before accessing the workbook. This allows you to store, and query, and visualize your logs using workbooks for up to two years. Only sign-in and audit events created after Azure Monitor integration will be stored, so the workbook will not contain insights prior to that date. Learn more about the prerequisites to Azure Monitor workbooks for Azure Active Directory. If you have previously integrated your Azure AD sign-in and audit logs with Azure Monitor, you can use the workbook to assess past information. +If your organization is new to Azure Monitor workbooks, you need to integrate your Azure AD sign-in and audit logs with Azure Monitor before accessing the workbook. This integration allows you to store, query, and visualize your logs using workbooks for up to two years. Only sign-in and audit events created after Azure Monitor integration will be stored, so the workbook won't contain insights prior to that date. Learn more about the prerequisites to Azure Monitor workbooks for Azure Active Directory. If you've previously integrated your Azure AD sign-in and audit logs with Azure Monitor, you can use the workbook to assess past information. This workbook is split into four sections: -- **Modified application and service principal credentials/authentication methods** - This report flags actors who have recently changed many service principal credentials, as well as how many of each type of service principal credentials have been changed.+- **Modified application and service principal credentials/authentication methods** - This report flags actors who have recently changed many service principal credentials, and how many of each type of service principal credentials have been changed. - **New permissions granted to service principals** - This workbook also highlights recently granted OAuth 2.0 permissions to service principals. 
This section includes the following data to help you detect: - All new credentials added to apps and service principals, including the credential type -- Top actors and the amount of credentials modifications they performed+- Top actors and the number of credential modifications they performed - A timeline for all credential changes This section includes the following data: ### New permissions granted to service principals -In cases where the attacker cannot find a service principal or an application with a high privilege set of permissions through which to gain access, they will often attempt to add the permissions to another service principal or app. +In cases where the attacker can't find a service principal or an application with a high privilege set of permissions through which to gain access, they'll often attempt to add the permissions to another service principal or app. This section includes a breakdown of the AppOnly permissions grants to existing service principals. Admins should investigate any instances of excessively high permissions being granted, including, but not limited to, Exchange Online, Microsoft Graph, and Azure AD Graph. This section includes an overview of all changes made to service principal membe Another common approach to gain a long-term foothold in the environment is to: - Modify the tenant's federated domain trusts.-- Add an additional SAML IDP that is controlled by the attacker as a trusted authentication source. +- Add another SAML IDP that is controlled by the attacker as a trusted authentication source. This section includes the following data: This paragraph lists the supported filters for each section. **Use:** -- **Modified application and service principal credentials** to look out for credentials being added to service principals that are not frequently used in your organization. Use the filters present in this section to further investigate any of the suspicious actors or service principals that were modified.+- **Modified application and service principal credentials** to look out for credentials being added to service principals that aren't frequently used in your organization. Use the filters present in this section to further investigate any of the suspicious actors or service principals that were modified. - **New permissions granted to service principals** to look out for broad or excessive permissions being added to service principals by actors that may be compromised. |
aks | Concepts Clusters Workloads | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-clusters-workloads.md | Title: Concepts - Kubernetes basics for Azure Kubernetes Services (AKS) description: Learn the basic cluster and workload components of Kubernetes and how they relate to features in Azure Kubernetes Service (AKS) Previously updated : 03/05/2020 Last updated : 10/31/2022 Most stateless applications in AKS should use the deployment model rather than s You don't want to disrupt management decisions with an update process if your application requires a minimum number of available instances. *Pod Disruption Budgets* define how many replicas in a deployment can be taken down during an update or node upgrade. For example, if you have *five (5)* replicas in your deployment, you can define a pod disruption budget of *4 (four)* to only allow one replica to be deleted or rescheduled at a time. As with pod resource limits, best practice is to define pod disruption budgets on applications that require a minimum number of replicas to always be present. -Deployments are typically created and managed with `kubectl create` or `kubectl apply`. Create a deployment by defining a manifest file in the YAML format. +Deployments are typically created and managed with `kubectl create` or `kubectl apply`. Create a deployment by defining a manifest file in the YAML format. The following example creates a basic deployment of the NGINX web server. The deployment specifies *three (3)* replicas to be created, and requires port *80* to be open on the container. Resource requests and limits are also defined for CPU and memory. spec: memory: 256Mi ``` +A breakdown of the deployment specifications in the YAML manifest file is as follows: ++| Specification | Description | +| -- | - | +| `.apiVersion` | Specifies the API group and API resource you want to use when creating the resource. | +| `.kind` | Specifies the type of resource you want to create. | +| `.metadata.name` | Specifies the name of the deployment. | +| `.spec.replicas` | Specifies how many pods to create. This file will create three replicated pods. | +| `.spec.selector` | Specifies which pods will be affected by this deployment. | +| `.spec.selector.matchLabels` | Contains a map of *{key, value}* pairs that allows the deployment to find and manage the created pods. | +| `.spec.selector.matchLabels.app` | Has to match `.spec.template.metadata.labels.app`. | +| `.spec.template.metadata.labels` | Specifies the *{key, value}* pairs attached to the object. | +| `.spec.template.metadata.labels.app` | Has to match `.spec.selector.matchLabels.app`. | +| `.spec.template.spec.containers` | Specifies the list of containers belonging to the pod. | +| `.spec.template.spec.containers.name` | Specifies the name of the container specified as a DNS label. | +| `.spec.template.spec.containers.image` | Specifies the container image name. This file will run the *nginx* image from Docker Hub. | +| `.spec.template.spec.containers.ports` | Specifies the list of ports to expose from the container. | +| `.spec.template.spec.containers.ports.containerPort` | Specifies the port number to expose on the pod's IP address. | +| `.spec.template.spec.containers.resources` | Specifies the compute resources required by the container. | +| `.spec.template.spec.containers.resources.requests` | Specifies the minimum amount of compute resources required. | +| `.spec.template.spec.containers.resources.requests.cpu` | Specifies the minimum amount of CPU required. | +| `.spec.template.spec.containers.resources.requests.memory` | Specifies the minimum amount of memory required. 
| +| `.spec.template.spec.containers.resources.limits` | Specifies the maximum amount of compute resources allowed. This limit is enforced by the kubelet. | +| `.spec.template.spec.containers.resources.limits.cpu` | Specifies the maximum amount of CPU allowed. This limit is enforced by the kubelet. | +| `.spec.template.spec.containers.resources.limits.memory` | Specifies the maximum amount of memory allowed. This limit is enforced by the kubelet. | + More complex applications can be created by including services (such as load balancers) within the YAML manifest. For more information, see [Kubernetes deployments][kubernetes-deployments]. |
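To connect the table above back to a running cluster, a short sketch; the file name `nginx-deployment.yaml`, the deployment name `nginx`, and the `app: nginx` label are placeholders that should match your own manifest:

```console
# Create the deployment, then confirm that .spec.replicas pods carrying the
# .spec.selector.matchLabels label are running.
kubectl apply -f nginx-deployment.yaml
kubectl get pods --selector app=nginx

# Inspect the applied resource requests and limits.
kubectl describe deployment nginx
```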
aks | Deploy Marketplace | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/deploy-marketplace.md | Title: How to deploy Azure Container offers for Azure Kubernetes Service (AKS) from the Azure Marketplace -description: Learn how to deploy Azure Container offers from the Azure Marketplace on an Azure Kubernetes Service (AKS) cluster. + Title: Deploy an Azure container offer from Azure Marketplace +description: Learn how to deploy Azure container offers from Azure Marketplace on an Azure Kubernetes Service (AKS) cluster. Last updated 09/30/2022 -# How to deploy a Container offer from Azure Marketplace (preview) +# Deploy a container offer from Azure Marketplace (preview) -[Azure Marketplace][azure-marketplace] is an online store that contains thousands of IT software applications and services built by industry-leading technology companies. In Azure Marketplace you can find, try, buy, and deploy the software and services you need to build new solutions and manage your cloud infrastructure. The catalog includes solutions for different industries and technical areas, free trials, and also consulting services from Microsoft partners. +[Azure Marketplace][azure-marketplace] is an online store that contains thousands of IT software applications and services built by industry-leading technology companies. In Azure Marketplace, you can find, try, buy, and deploy the software and services that you need to build new solutions and manage your cloud infrastructure. The catalog includes solutions for different industries and technical areas, free trials, and consulting services from Microsoft partners. -Included among these solutions are Kubernetes application-based Container offers, which contain applications meant to run on Kubernetes clusters such as Azure Kubernetes Service (AKS). In this article, you will learn how to: +Included among these solutions are Kubernetes application-based container offers. These offers contain applications that are meant to run on Kubernetes clusters such as Azure Kubernetes Service (AKS). In this article, you'll learn how to: -- Browse offers in Azure Marketplace-- Purchase an application-- Deploy the application on your AKS cluster-- Monitor usage and billing information+- Browse offers in Azure Marketplace. +- Purchase an application. +- Deploy the application on your AKS cluster. +- Monitor usage and billing information. [!INCLUDE [preview features callout](./includes/preview/preview-callout.md)] > [!NOTE]-> This feature is currently only supported in the following regions: +> This feature is currently supported only in the following regions: > > - West Central US > - West Europe-> - East US. +> - East US ## Register resource providers -You must have registered the `Microsoft.ContainerService` and `Microsoft.KubernetesConfiguration` providers on your subscription using the `az provider register` command: +Before you deploy a container offer, you must register the `Microsoft.ContainerService` and `Microsoft.KubernetesConfiguration` providers on your subscription by using the `az provider register` command: ```azurecli-interactive az provider register --namespace Microsoft.ContainerService --wait az provider register --namespace Microsoft.KubernetesConfiguration --wait ``` -## Browse offers +## Select and deploy a Kubernetes offer -- Begin by visiting the Azure portal and searching for *"Marketplace"* in the top search bar.+1. In the [Azure portal](https://ms.portal.azure.com/), search for **Marketplace** on the top search bar. 
In the results, under **Services**, select **Marketplace**. -- You can search for an offer or publisher directly by name or browse all offers. To find Kubernetes application offers, use the *Product type* filter for *Azure Containers*. +1. You can search for an offer or publisher directly by name, or you can browse all offers. To find Kubernetes application offers, use the **Product Type** filter for **Azure Containers**. -- > [!IMPORTANT]- > The *Azure Containers* category includes both Kubernetes applications and standalone container images. This walkthrough is Kubernetes application-specific. If you find the steps to deploy an offer differ in some way, you are most likely trying to deploy a container image-based offer instead of a Kubernetes-application based offer. - > - > To ensure you're searching for Kubernetes applications, include the term `KubernetesApps` in your search. + :::image type="content" source="./media/deploy-marketplace/browse-marketplace-inline.png" alt-text="Screenshot of Azure Marketplace offers in the Azure portal, with the filter for product type set to Azure containers." lightbox="./media/deploy-marketplace/browse-marketplace-full.png"::: -- Once you've decided on an application, click on the offer.+ > [!IMPORTANT] + > The **Azure Containers** category includes both Kubernetes applications and standalone container images. This walkthrough is specific to Kubernetes applications. If you find that the steps to deploy an offer differ in some way, you're most likely trying to deploy a container image-based offer instead of a Kubernetes application-based offer. + > + > To ensure that you're searching for Kubernetes applications, include the term **KubernetesApps** in your search. - :::image type="content" source="./media/deploy-marketplace/browse-marketplace-inline.png" alt-text="Screenshot of the Azure portal Marketplace offer page. The product type filter, set to Azure Containers, is highlighted and several offers are shown." lightbox="./media/deploy-marketplace/browse-marketplace-full.png"::: +1. After you decide on an application, select the offer. -## Purchasing a Kubernetes offer +1. On the **Plans + Pricing** tab, select an option. Ensure that the terms are acceptable, and then select **Create**. -- Review the plan and prices tab, select an option, and ensure the terms are acceptable before proceeding.+ :::image type="content" source="./media/deploy-marketplace/plans-pricing-inline.png" alt-text="Screenshot of the offer purchasing page in the Azure portal, including plan and pricing information." lightbox="./media/deploy-marketplace/plans-pricing-full.png"::: - :::image type="content" source="./media/deploy-marketplace/plans-pricing-inline.png" alt-text="Screenshot of the Azure portal offer purchasing page. The tab for viewing plans and pricing information is shown." lightbox="./media/deploy-marketplace/plans-pricing-full.png"::: +1. Follow each page in the wizard, all the way through **Review + Create**. Fill in information for your resource group, your cluster, and any configuration options that the application requires. You can decide to deploy on a new AKS cluster or use an existing cluster. -- Click *"Create"*.+ :::image type="content" source="./media/deploy-marketplace/purchase-experience-inline.png" alt-text="Screenshot of the Azure portal wizard for deploying a new offer, with the selector for creating a new cluster or using an existing cluster." 
lightbox="./media/deploy-marketplace/purchase-experience-full.png"::: -## Deploy a Kubernetes offer + When the application is deployed, the portal shows **Your deployment is complete**, along with details of the deployment. -- Follow the form, filling in information for your resource group, cluster, and any configuration options required by the application. You can decide to deploy on a new AKS cluster or use an existing cluster.+ :::image type="content" source="./media/deploy-marketplace/deployment-inline.png" alt-text="Screenshot of the Azure portal that shows a successful resource deployment to the cluster." lightbox="./media/deploy-marketplace/deployment-full.png"::: - :::image type="content" source="./media/deploy-marketplace/purchase-experience-inline.png" alt-text="Screenshot of the Azure portal form for deploying a new offer. A selector for creating a new or using an existing cluster is shown." lightbox="./media/deploy-marketplace/purchase-experience-full.png"::: +1. Verify the deployment by using the following command to list the extensions that are running on your cluster: -- After some time, the application will be deployed, as indicated by the Portal screen.+ ```azurecli-interactive + az k8s-extension list --cluster-name <clusterName> --resource-group <resourceGroupName> --cluster-type managedClusters + ``` - :::image type="content" source="./media/deploy-marketplace/deployment-inline.png" alt-text="Screenshot of the Azure portal screen showing a successful resource deployment, indicating the offer has been deployed to the cluster." lightbox="./media/deploy-marketplace/deployment-full.png"::: +## Manage the offer lifecycle -- You can also verify by listing the extensions running on your cluster:+For lifecycle management, an Azure Kubernetes offer is represented as a cluster extension for AKS. For more information, seeΓÇ»[Cluster extensions for AKS][cluster-extensions]. - ```azurecli-interactive - az k8s-extension list --cluster-name <clusterName> --resource-group <resourceGroupName> --cluster-type managedClusters - ``` --## Manage offer lifecycle --For lifecycle management, an Azure Kubernetes offer is represented as a cluster extension for Azure Kubernetes service(AKS). For more details, seeΓÇ»[cluster extensions for AKS][cluster-extensions]. --Purchasing an offer from the Azure Marketplace creates a new instance of the extension on your AKS cluster. The extension instance can be viewed from the cluster using the following command: +Purchasing an offer from Azure Marketplace creates a new instance of the extension on your AKS cluster. You can view the extension instance from the cluster by using the following command: ```azurecli-interactive az k8s-extension show --name <extension-name> --cluster-name <clusterName> --resource-group <resourceGroupName> --cluster-type managedClusters ``` -### Removing an offer +## Monitor billing and usage information -A purchased Azure Container offer plan can be deleted by deleting the extension instance on the cluster. For example: +To monitor billing and usage information for the offer that you deployed: -```azurecli-interactive -az k8s-extension delete --name <extension-name> --cluster-name <clusterName> --resource-group <resourceGroupName> --cluster-type managedClusters -``` +1. In the Azure portal, go to the page for your cluster's resource group. -## Monitor billing and usage information +1. Select **Cost Management** > **Cost analysis**. Under **Product**, you can see a cost breakdown for the plan that you selected. 
++ :::image type="content" source="./media/deploy-marketplace/billing-inline.png" alt-text="Screenshot of the Azure portal page for a resource group, with billing information broken down by offer plan." lightbox="./media/deploy-marketplace/billing-full.png"::: ++## Remove an offer -To monitor billing and usage information for the offer you've deployed, visit Cost Management > Cost Analysis in your cluster's resource group's page in the Azure portal. You can see a breakdown of cost for the plan you've selected under "Product". +You can delete a purchased plan for an Azure container offer by deleting the extension instance on the cluster. For example: +```azurecli-interactive +az k8s-extension delete --name <extension-name> --cluster-name <clusterName> --resource-group <resourceGroupName> --cluster-type managedClusters +``` -## Next Steps +## Next steps - Learn more about [exploring and analyzing costs][billing]. <!-- LINKS --> [azure-marketplace]: /marketplace/azure-marketplace-overview [cluster-extensions]: ./cluster-extensions.md-[billing]: ../cost-management-billing/costs/quick-acm-cost-analysis.md +[billing]: ../cost-management-billing/costs/quick-acm-cost-analysis.md |
aks | Quick Kubernetes Deploy Bicep | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-bicep.md | Title: Quickstart - Create an Azure Kubernetes Service (AKS) cluster by using Bi description: Learn how to quickly create a Kubernetes cluster using a Bicep file and deploy an application in Azure Kubernetes Service (AKS) Previously updated : 08/11/2022 Last updated : 11/01/2022 #Customer intent: As a developer or cluster operator, I want to quickly create an AKS cluster and deploy an application so that I can see how to run applications using the managed Kubernetes service in Azure. Two [Kubernetes Services][kubernetes-service] are also created: app: azure-vote-front ``` + For a breakdown of YAML manifest files, see [Deployments and YAML manifests](../concepts-clusters-workloads.md#deployments-and-yaml-manifests). + 1. Deploy the application using the [kubectl apply][kubectl-apply] command and specify the name of your YAML manifest: ```console |
aks | Quick Kubernetes Deploy Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-cli.md | Title: 'Quickstart: Deploy an AKS cluster by using Azure CLI' description: Learn how to quickly create a Kubernetes cluster, deploy an application, and monitor performance in Azure Kubernetes Service (AKS) using the Azure CLI. Previously updated : 06/28/2022 Last updated : 11/01/2022 #Customer intent: As a developer or cluster operator, I want to quickly create an AKS cluster and deploy an application so that I can see how to run and monitor applications using the managed Kubernetes service in Azure. Two [Kubernetes Services][kubernetes-service] are also created: app: azure-vote-front ``` + For a breakdown of YAML manifest files, see [Deployments and YAML manifests](../concepts-clusters-workloads.md#deployments-and-yaml-manifests). + 1. Deploy the application using the [kubectl apply][kubectl-apply] command and specify the name of your YAML manifest: ```console |
aks | Quick Kubernetes Deploy Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-portal.md | Two Kubernetes Services are also created: app: azure-vote-front ``` + For a breakdown of YAML manifest files, see [Deployments and YAML manifests](../concepts-clusters-workloads.md#deployments-and-yaml-manifests). + 1. Deploy the application using the `kubectl apply` command and specify the name of your YAML manifest: ```console |
aks | Quick Kubernetes Deploy Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-powershell.md | Title: 'Quickstart: Deploy an AKS cluster by using PowerShell' description: Learn how to quickly create a Kubernetes cluster and deploy an application in Azure Kubernetes Service (AKS) using PowerShell. Previously updated : 04/29/2022 Last updated : 11/01/2022 #Customer intent: As a developer or cluster operator, I want to quickly create an AKS cluster and deploy an application so that I can see how to run applications using the managed Kubernetes service in Azure. Two [Kubernetes Services][kubernetes-service] are also created: app: azure-vote-front ``` + For a breakdown of YAML manifest files, see [Deployments and YAML manifests](../concepts-clusters-workloads.md#deployments-and-yaml-manifests). + 1. Deploy the application using the [kubectl apply][kubectl-apply] command and specify the name of your YAML manifest: ```azurepowershell-interactive |
aks | Quick Kubernetes Deploy Rm Template | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-rm-template.md | Title: Quickstart - Create an Azure Kubernetes Service (AKS) cluster description: Learn how to quickly create a Kubernetes cluster using an Azure Resource Manager template and deploy an application in Azure Kubernetes Service (AKS) Previously updated : 08/17/2022 Last updated : 11/01/2022 #Customer intent: As a developer or cluster operator, I want to quickly create an AKS cluster and deploy an application so that I can see how to run applications using the managed Kubernetes service in Azure. Two [Kubernetes Services][kubernetes-service] are also created: app: azure-vote-front ``` + For a breakdown of YAML manifest files, see [Deployments and YAML manifests](../concepts-clusters-workloads.md#deployments-and-yaml-manifests). + 1. Deploy the application using the [kubectl apply][kubectl-apply] command and specify the name of your YAML manifest: ```console |
aks | Quick Windows Container Deploy Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-windows-container-deploy-cli.md | description: Learn how to quickly create a Kubernetes cluster, deploy an applica Previously updated : 04/29/2022 Last updated : 11/01/2022 #Customer intent: As a developer or cluster operator, I want to quickly create an AKS cluster and deploy a Windows Server container so that I can see how to run applications running on a Windows Server container using the managed Kubernetes service in Azure. spec: app: sample ``` +For a breakdown of YAML manifest files, see [Deployments and YAML manifests](../concepts-clusters-workloads.md#deployments-and-yaml-manifests). + Deploy the application using the [kubectl apply][kubectl-apply] command and specify the name of your YAML manifest: ```console |
aks | Quick Windows Container Deploy Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-windows-container-deploy-powershell.md | Title: Create a Windows Server container on an AKS cluster by using PowerShell description: Learn how to quickly create a Kubernetes cluster, deploy an application in a Windows Server container in Azure Kubernetes Service (AKS) using PowerShell. Previously updated : 04/29/2022 Last updated : 11/01/2022 spec: app: sample ``` +For a breakdown of YAML manifest files, see [Deployments and YAML manifests](../concepts-clusters-workloads.md#deployments-and-yaml-manifests). + Deploy the application using the [kubectl apply][kubectl-apply] command and specify the name of your YAML manifest: |
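Each of the quickstart entries above deploys the manifest with the same command. As a minimal sketch, assuming the manifest was saved as `azure-vote.yaml` (the Windows Server container quickstarts use their own sample manifest, so substitute that file name):

```console
kubectl apply -f azure-vote.yaml
```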
aks | Private Clusters | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/private-clusters.md | As mentioned, virtual network peering is one way to access your private cluster. 2. The private DNS zone is linked only to the VNet that the cluster nodes are attached to (3). This means that the private endpoint can only be resolved by hosts in that linked VNet. In scenarios where no custom DNS is configured on the VNet (default), this works without issue as hosts point at 168.63.129.16 for DNS that can resolve records in the private DNS zone because of the link. -3. In scenarios where the VNet containing your cluster has custom DNS settings (4), cluster deployment fails unless the private DNS zone is linked to the VNet that contains the custom DNS resolvers (5). This link can be created manually after the private zone is created during cluster provisioning or via automation upon detection of creation of the zone using event-based deployment mechanisms (for example, Azure Event Grid and Azure Functions). +3. In scenarios where the VNet containing your cluster has custom DNS settings (4), cluster deployment fails unless the private DNS zone is linked to the VNet that contains the custom DNS resolvers (5). This link can be created manually after the private zone is created during cluster provisioning or via automation upon detection of creation of the zone using event-based deployment mechanisms (for example, Azure Event Grid and Azure Functions). To avoid cluster failure during initial deployment, the cluster can be deployed with the private DNS zone resource ID. This only works with resource type Microsoft.ContainerService/managedCluster and API version 2022-07-01. Using an older version with an ARM template or Bicep resource definition is not supported. > [!NOTE] > Conditional Forwarding doesn't support subdomains. Once the A record is created, link the private DNS zone to the virtual network t [container-registry-private-link]: ../container-registry/container-registry-private-link.md [virtual-networks-name-resolution]: ../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server [virtual-networks-168.63.129.16]: ../virtual-network/what-is-ip-address-168-63-129-16.md-[use-custom-domains]: coredns-custom.md#use-custom-domains +[use-custom-domains]: coredns-custom.md#use-custom-domains |
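As a sketch of avoiding that initial deployment failure from the Azure CLI, the private DNS zone resource ID can be passed at creation time. The placeholder names and a pre-created, pre-linked zone are assumptions here:

```azurecli-interactive
az aks create --resource-group <resourceGroupName> --name <clusterName> --enable-private-cluster --private-dns-zone <private-dns-zone-resource-id>
```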
api-management | Api Management Howto Create Groups | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-create-groups.md | In API Management, groups are used to manage the visibility of products to devel API Management has the following immutable system groups: -* **Administrators** - Azure subscription administrators are members of this group. Administrators manage API Management service instances, creating the APIs, operations, and products that are used by developers. +* **Administrators** - Azure subscription administrators are members of this group. Administrators manage API Management service instances, creating the APIs, operations, and products that are used by developers. You can't add users to this group. ++ > [!NOTE] + > You can change the administrator [email settings](api-management-howto-configure-notifications.md#configure-email-settings) that are used in notifications sent to developers from your API Management instance. + * **Developers** - Authenticated developer portal users fall into this group. Developers are the customers that build applications using your APIs. Developers are granted access to the developer portal and build applications that call the operations of an API. * **Guests** - Unauthenticated developer portal users, such as prospective customers visiting the developer portal of an API Management instance fall into this group. They can be granted certain read-only access, such as the ability to view APIs but not call them. Once the association is added between the developer and the group, you can view * Once a developer is added to a group, they can view and subscribe to the products associated with that group. For more information, see [How create and publish a product in Azure API Management][How create and publish a product in Azure API Management], * In addition to creating and managing groups in the Azure portal, you can create and manage your groups using the API Management REST API [Group](/rest/api/apimanagement/apimanagementrest/azure-api-management-rest-api-group-entity) entity.-* Learn how to manage the administrator [email settings](api-management-howto-configure-notifications.md#configure-email-settings) that asre used in notifications to developers from your API Management instance. +* Learn how to manage the administrator [email settings](api-management-howto-configure-notifications.md#configure-email-settings) that are used in notifications to developers from your API Management instance. [Create a group]: #create-group |
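As a hedged sketch of creating a custom group through the REST API Group entity mentioned above (the placeholder names, group properties, and `2021-08-01` api-version are assumptions, not part of the original article):

```http
PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ApiManagement/service/{serviceName}/groups/{groupId}?api-version=2021-08-01
Content-Type: application/json

{
  "properties": {
    "displayName": "Partner developers",
    "description": "Developers from partner organizations",
    "type": "custom"
  }
}
```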
api-management | Api Management Howto Mutual Certificates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-mutual-certificates.md | $context = New-AzApiManagementContext -resourcegroup 'ContosoResourceGroup' -ser New-AzApiManagementBackend -Context $context -Url 'https://contoso.com/myapi' -Protocol http -SkipCertificateChainValidation $true ``` +You can also disable certificate chain validation by using the [Backend](/rest/api/apimanagement/current-ga/backend) REST API. + ## Delete a client certificate To delete a certificate, select it and then select **Delete** from the context menu (**...**). |
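A sketch of the equivalent call through the Backend REST API, assuming a placeholder backend ID and the `2021-08-01` api-version; the `tls` flags mirror the `-SkipCertificateChainValidation` behavior shown in the PowerShell example:

```http
PATCH https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ApiManagement/service/{serviceName}/backends/{backendId}?api-version=2021-08-01
If-Match: *
Content-Type: application/json

{
  "properties": {
    "tls": {
      "validateCertificateChain": false,
      "validateCertificateName": false
    }
  }
}
```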
api-management | Configure Custom Domain | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/configure-custom-domain.md | If you use Azure Key Vault to manage a custom domain TLS certificate, make sure To fetch a TLS/SSL certificate, API Management must have the list and get secrets permissions on the Azure Key Vault containing the certificate. * When you use the Azure portal to import the certificate, all the necessary configuration steps are completed automatically. * When you use command-line tools or management API, these permissions must be granted manually, in two steps:- 1. On the **Managed identities** page of your API Management instance, enable a system-assigned or user-assigned [managed identity](api-management-howto-use-managed-service-identity.md). Note the principal Id on that page. - 1. Give the list and get secrets permissions to this principal Id on the Azure Key Vault containing the certificate. + 1. On the **Managed identities** page of your API Management instance, enable a system-assigned or user-assigned [managed identity](api-management-howto-use-managed-service-identity.md). Note the principal ID on that page. + 1. Give the list and get secrets permissions to this principal ID on the Azure Key Vault containing the certificate. If the certificate is set to `autorenew` and your API Management tier has an SLA (that is, in all tiers except the Developer tier), API Management will pick up the latest version automatically, without downtime to the service. Choose the steps according to the [domain certificate](#domain-certificate-optio ### CNAME record -Configure a CNAME record that points from your custom domain name (for example, `api.contoso.com`) to your API Management service hostname (for example, `<apim-service-name>.azure-api.net`). A CNAME record is more stable than an A-record in case the IP address changes. For more information, see [IP addresses of Azure API Management](api-management-howto-ip-addresses.md#changes-to-the-ip-addresses) and the [API Management FAQ](./api-management-faq.yml#how-can-i-secure-the-connection-between-the-api-management-gateway-and-my-back-end-services-). +Configure a CNAME record that points from your custom domain name (for example, `api.contoso.com`) to your API Management service hostname (for example, `<apim-service-name>.azure-api.net`). A CNAME record is more stable than an A-record in case the IP address changes. For more information, see [IP addresses of Azure API Management](api-management-howto-ip-addresses.md#changes-to-the-ip-addresses) and the [API Management FAQ](./api-management-faq.yml#how-can-i-secure-the-connection-between-the-api-management-gateway-and-my-backend-services-). > [!NOTE] > Some domain registrars only allow you to map subdomains when using a CNAME record, such as `www.contoso.com`, and not root names, such as `contoso.com`. For more information on CNAME records, see the documentation provided by your registrar or [IETF Domain Names - Implementation and Specification](https://tools.ietf.org/html/rfc1035). |
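For the manual two-step permission grant described above, a minimal Azure CLI sketch (the vault name and principal ID are placeholders; if your vault uses Azure RBAC instead of access policies, assign an equivalent secrets-read role instead):

```azurecli-interactive
az keyvault set-policy --name <vault-name> --object-id <principal-id> --secret-permissions get list
```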
api-management | Devops Api Development Templates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/devops-api-development-templates.md | An API developer writes an API definition by providing a specification, settings There are several tools to assist producing the API definition: * The [Azure API Management DevOps Resource Toolkit][4] includes two tools that provide an Azure Resource Manager (ARM) template. The _extractor_ creates an ARM template by extracting an API definition from an API Management service. The _creator_ produces the ARM template from a YAML specification. The DevOps Resource Toolkit supports SOAP, REST, and GraphQL APIs.-* The [Azure APIOps Toolkit][5] provides a workflow built on top of a [git][21] source code control system (such as [GitHub][22] or [Azure Repos][23]). It uses an _extractor_ similar to the DevOps Resource Toolkit to produce an API definition that is then applied to a target API Management service. APIOps supports REST only at this time. +* The [Azure APIOps Toolkit][5] provides a workflow built on top of a [git][21] source code control system (such as [GitHub][22] or [Azure Repos][23]). It uses an _extractor_ similar to the DevOps Resource Toolkit to produce an API definition that is then applied to a target API Management service. APIOps supports REST and GraphQL APIs at this time. * The [dotnet-apim][6] tool converts a well-formed YAML definition into an ARM template for later deployment. The tool is focused on REST APIs. * [Terraform][7] is an alternative to Azure Resource Manager to configure resources in Azure. You can create a Terraform configuration (together with policies) to implement the API in the same way that an ARM template is created. Review [Automated API deployments with APIOps][28] in the Azure Architecture Cen [25]: https://azure.microsoft.com/services/devops/ [26]: https://github.com/microsoft/api-guidelines/blob/vNext/azure/Guidelines.md [27]: https://github.com/Azure/azure-api-style-guide-[28]: /azure/architecture/example-scenario/devops/automated-api-deployments-apiops +[28]: /azure/architecture/example-scenario/devops/automated-api-deployments-apiops |
app-service | Configure Vnet Integration Routing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-vnet-integration-routing.md | Title: Configure virtual network integration with application routing. -description: This how-to article walks you through configuring app routing on a regional virtual network integration. + Title: Configure virtual network integration with application and configuration routing. +description: This how-to article walks you through configuring routing on a regional virtual network integration. Last updated 10/20/2021 # Manage Azure App Service virtual network integration routing -When you configure application routing, you can either route all traffic or only private traffic (also known as [RFC1918](https://datatracker.ietf.org/doc/html/rfc1918#section-3) traffic) into your Azure virtual network (VNet). This article describes how to configure application routing. +Through application routing or configuration routing options, you can configure what traffic will be sent through the virtual network integration. See the [overview section](./overview-vnet-integration.md#routes) for more details. ## Prerequisites -Your app is already integrated using the regional VNet integration feature. +Your app is already integrated using the regional virtual network integration feature. -## Configure in the Azure portal +## Configure application routing ++Application routing defines what traffic is routed from your app and into the virtual network. We recommend that you use the **Route All** site setting to enable routing of all traffic. Using the configuration setting allows you to audit the behavior with [a built-in policy](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F33228571-70a4-4fa1-8ca1-26d0aba8d6ef). The existing `WEBSITE_VNET_ROUTE_ALL` app setting can still be used, and you can enable all traffic routing with either setting. ++### Configure in the Azure portal Follow these steps to disable **Route All** in your app through the portal. Follow these steps to disable **Route All** in your app through the portal. 1. Select **Yes** to confirm. -## Configure with the Azure CLI +### Configure with the Azure CLI ++You can also configure **Route All** by using the Azure CLI. ++```azurecli-interactive +az resource update --resource-group <group-name> --name <app-name> --resource-type "Microsoft.Web/sites" --properties.vnetRouteAllEnabled [true|false] +``` ++## Configure configuration routing ++When you're using virtual network integration, you can configure how parts of the configuration traffic are managed. By default, configuration traffic will go directly over the public route, but for the mentioned individual components, you can actively configure it to be routed through the virtual network integration. ++### Container image pull -You can also configure **Route All** by using the Azure CLI. The minimum az version required is 2.27.0. +Routing container image pull over virtual network integration can be configured using the Azure CLI. 
```azurecli-interactive-az webapp config set --resource-group <group-name> --name <app-name> --vnet-route-all-enabled [true|false] +az resource update --resource-group <group-name> --name <app-name> --resource-type "Microsoft.Web/sites" --properties.vnetImagePullEnabled [true|false] ``` -## Configure with Azure PowerShell +We recommend that you use the site property to enable routing of image pull traffic through the virtual network integration. Using the configuration setting allows you to audit the behavior with Azure Policy. The existing `WEBSITE_PULL_IMAGE_OVER_VNET` app setting with the value `true` can still be used, and you can enable routing through the virtual network with either setting. -```azurepowershell -# Parameters -$siteName = '<app-name>' -$resourceGroupName = '<group-name>' ### Content storage -# Configure VNet Integration -$webApp = Get-AzResource -ResourceType Microsoft.Web/sites -ResourceGroupName $resourceGroupName -ResourceName $siteName -Set-AzResource -ResourceId ($webApp.Id + "/config/web") -Properties @{ vnetRouteAllEnabled = $true } -Force Routing content storage over virtual network integration can be configured using the Azure CLI. In addition to enabling the feature, you must also ensure that any firewall or Network Security Group configured on traffic from the subnet allows traffic to ports 443 and 445. ++```azurecli-interactive +az resource update --resource-group <group-name> --name <app-name> --resource-type "Microsoft.Web/sites" --properties.vnetContentStorageEnabled [true|false] ``` +We recommend that you use the site property to enable content storage traffic through the virtual network integration. Using the configuration setting allows you to audit the behavior with Azure Policy. The existing `WEBSITE_CONTENTOVERVNET` app setting with the value `1` can still be used, and you can enable routing through the virtual network with either setting. + ## Next steps - [Enable virtual network integration](./configure-vnet-integration-enable.md) |
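To verify which routing properties are in effect on a site, the same generic resource commands can read them back. A sketch assuming the property names used in the update commands above:

```azurecli-interactive
az resource show --resource-group <group-name> --name <app-name> --resource-type "Microsoft.Web/sites" --query "properties.[vnetRouteAllEnabled, vnetImagePullEnabled, vnetContentStorageEnabled]"
```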
app-service | Configure Network Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/configure-network-settings.md | The setting is also available for configuration through Azure portal at the App :::image type="content" source="./media/configure-network-settings/configure-allow-incoming-ftp-connections.png" alt-text="Screenshot from Azure portal of how to configure your App Service Environment to allow incoming ftp connections."::: -In addition to enabling access, you need to ensure that you have [configured DNS if you are using ILB App Service Environment](./networking.md#dns-configuration-for-ftp-access). +In addition to enabling access, you need to ensure that you have [configured DNS if you are using ILB App Service Environment](./networking.md#dns-configuration-for-ftp-access) and that the [necessary ports](./networking.md#ports-and-network-restrictions) are unblocked. ## Remote debugging access |
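As an alternative to the portal toggle, the setting can be flipped with the generic resource commands. This is a sketch that assumes the App Service Environment v3 `configurations/networking` sub-resource and its `ftpEnabled` property; verify both against your environment before relying on it:

```azurecli-interactive
az resource update --ids /subscriptions/<subscription-id>/resourceGroups/<group-name>/providers/Microsoft.Web/hostingEnvironments/<ase-name>/configurations/networking --set properties.ftpEnabled=true
```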
app-service | How To Custom Domain Suffix | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/how-to-custom-domain-suffix.md | If you don't have an App Service Environment, see [How to Create an App Service The custom domain suffix defines a root domain that can be used by the App Service Environment. In the public variation of Azure App Service, the default root domain for all web apps is *azurewebsites.net*. For ILB App Service Environments, the default root domain is *appserviceenvironment.net*. However, since an ILB App Service Environment is internal to a customer's virtual network, customers can use a root domain in addition to the default one that makes sense for use within a company's internal virtual network. For example, a hypothetical Contoso Corporation might use a default root domain of *internal-contoso.com* for apps that are intended to only be resolvable and accessible within Contoso's virtual network. An app in this virtual network could be reached by accessing *APP-NAME.internal-contoso.com*. -The custom domain name works for app requests but doesn't for the scm site. The scm site is only available at *APP-NAME.scm.ASE-NAME.appserviceenvironment.net*. - The custom domain suffix is for the App Service Environment. This feature is different from a custom domain binding on an App Service. For more information on custom domain bindings, see [Map an existing custom DNS name to Azure App Service](../app-service-web-tutorial-custom-domain.md). +If the certificate used for the custom domain suffix contains a Subject Alternate Name (SAN) entry for **.scm.CUSTOM-DOMAIN*, the scm site will then also be reachable from *APP-NAME.scm.CUSTOM-DOMAIN*. You can only access the scm site over the custom domain using basic authentication. Single sign-on is only possible with the default root domain. + ## Prerequisites - ILB variation of App Service Environment v3. The certificate for custom domain suffix must be stored in an Azure Key Vault. A :::image type="content" source="./media/custom-domain-suffix/key-vault-networking.png" alt-text="Screenshot of a sample networking page for key vault to allow custom domain suffix feature."::: -Your certificate must be a wildcard certificate for the selected custom domain name. For example, *contoso.com* would need a certificate covering **.contoso.com*. +Your certificate must be a wildcard certificate for the selected custom domain name. For example, *internal-contoso.com* would need a certificate covering **.internal-contoso.com*. If the certificate used for the custom domain suffix contains a Subject Alternate Name (SAN) entry for scm, for example **.scm.internal-contoso.com*, the scm site will also be available using the custom domain suffix. ::: zone pivot="experience-azp" If you want to use your own DNS server, add the following records: 1. Create a zone for your custom domain. 1. Create an A record in that zone that points * to the inbound IP address used by your App Service Environment. 1. Create an A record in that zone that points @ to the inbound IP address used by your App Service Environment.+1. Optionally create a zone for the scm subdomain with a wildcard (*) A record that points to the inbound IP address used by your App Service Environment. To configure DNS in Azure DNS private zones: :::image type="content" source="./media/custom-domain-suffix/custom-domain-suffix-dns-configuration.png" alt-text="Screenshot of a sample DNS configuration for your custom domain suffix."::: 1.
Link your Azure DNS private zone to your App Service Environment's virtual network. :::image type="content" source="./media/custom-domain-suffix/private-dns-zone-vnet-link.png" alt-text="Screenshot of a sample virtual network link for private DNS zone.":::+1. Optionally create an A record in that zone that points *.scm to the inbound IP address used by your App Service Environment. For more information on configuring DNS for your domain, see [Use an App Service Environment](./using.md#dns-configuration). |
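For the optional scm record in an Azure DNS private zone, a sketch using the Azure CLI; the zone name reuses the hypothetical *internal-contoso.com* domain from the article, and the inbound IP is a placeholder:

```azurecli-interactive
az network private-dns record-set a add-record --resource-group <group-name> --zone-name internal-contoso.com --record-set-name "*.scm" --ipv4-address <ase-inbound-ip>
```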
app-service | Networking | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/networking.md | The normal app access ports inbound are as follows: |Visual Studio remote debugging|4022, 4024, 4026| |Web Deploy service|8172| +> [!NOTE] +> For FTP access, even if you want to disallow standard FTP on port 21, you still need to allow traffic from the LoadBalancer to the App Service Environment subnet range, as this is used for internal health ping traffic for the ftp service specifically. + ## Network routing You can set route tables without restriction. You can tunnel all of the outbound application traffic from your App Service Environment to an egress firewall device, such as Azure Firewall. In this scenario, the only thing you have to worry about is your application dependencies. |
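A sketch of an NSG rule that keeps the load balancer's internal health ping traffic for the FTP service flowing; the rule name, priority, and subnet range are placeholders, and you may need to extend the port list for FTPS and the FTP data ports:

```azurecli-interactive
az network nsg rule create --resource-group <group-name> --nsg-name <nsg-name> --name AllowLoadBalancerFtpHealth --priority 200 --direction Inbound --access Allow --protocol Tcp --source-address-prefixes AzureLoadBalancer --destination-address-prefixes <ase-subnet-range> --destination-port-ranges 21
```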
app-service | Overview Access Restrictions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-access-restrictions.md | For any rule, regardless of type, you can add http header filtering. Http header * **X-Forwarded-For**. [Standard header](https://developer.mozilla.org/docs/Web/HTTP/Headers/X-Forwarded-For) for identifying the originating IP address of a client connecting through a proxy server. Accepts valid CIDR values. * **X-Forwarded-Host**. [Standard header](https://developer.mozilla.org/docs/Web/HTTP/Headers/X-Forwarded-Host) for identifying the original host requested by the client. Accepts any string up to 64 characters in length.-* **X-Azure-FDID**. [Custom header](../frontdoor/front-door-http-headers-protocol.md#front-door-to-backend) for identifying the reverse proxy instance. Azure Front Door will send a guid identifying the instance, but it can also be used by third party proxies to identify the specific instance. Accepts any string up to 64 characters in length. -* **X-FD-HealthProbe**. [Custom header](../frontdoor/front-door-http-headers-protocol.md#front-door-to-backend) for identifying the health probe of the reverse proxy. Azure Front Door will send "1" to uniquely identify a health probe request. The header can also be used by third party proxies to identify health probes. Accepts any string up to 64 characters in length. +* **X-Azure-FDID**. [Custom header](../frontdoor/front-door-http-headers-protocol.md#from-the-front-door-to-the-backend) for identifying the reverse proxy instance. Azure Front Door will send a guid identifying the instance, but it can also be used by third party proxies to identify the specific instance. Accepts any string up to 64 characters in length. +* **X-FD-HealthProbe**. [Custom header](../frontdoor/front-door-http-headers-protocol.md#from-the-front-door-to-the-backend) for identifying the health probe of the reverse proxy. Azure Front Door will send "1" to uniquely identify a health probe request. The header can also be used by third party proxies to identify health probes. Accepts any string up to 64 characters in length. Some use cases for http header filtering are: * Restrict access to traffic from proxy servers forwarding the host name |
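A sketch of combining the AzureFrontDoor.Backend service tag with X-Azure-FDID header filtering, assuming a recent Azure CLI that supports `--http-headers`; the rule name, priority, and Front Door ID are placeholders:

```azurecli-interactive
az webapp config access-restriction add --resource-group <group-name> --name <app-name> --rule-name allowFrontDoor --action Allow --priority 100 --service-tag AzureFrontDoor.Backend --http-headers x-azure-fdid=<front-door-id>
```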
app-service | Overview Vnet Integration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-vnet-integration.md | When regional virtual network integration is enabled, your app makes outbound ca When all traffic routing is enabled, all outbound traffic is sent into your virtual network. If all traffic routing isn't enabled, only private traffic (RFC1918) and service endpoints configured on the integration subnet will be sent into the virtual network, and outbound traffic to the internet will go through the same channels as normal. -The feature supports only one virtual interface per worker. One virtual interface per worker means one regional virtual network integration per App Service plan. All the apps in the same App Service plan can only use the same virtual network integration to a specific subnet. If you need an app to connect to another virtual network or another subnet in the same virtual network, you need to create another App Service plan. The virtual interface used isn't a resource that customers have direct access to. +The feature supports two virtual interfaces per worker. Two virtual interfaces per worker means two regional virtual network integrations per App Service plan. The apps in the same App Service plan can only use one of the virtual network integrations to a specific subnet. If you need an app to connect to additional virtual networks or additional subnets in the same virtual network, you need to create another App Service plan. The virtual interfaces used aren't resources that customers have direct access to. Because of the nature of how this technology operates, the traffic that's used with virtual network integration doesn't show up in Azure Network Watcher or NSG flow logs. Application routing applies to traffic that is sent from your app after it has b * Only traffic configured in application or configuration routing is subject to the NSGs and UDRs that are applied to your integration subnet. * When **Route All** is enabled, the source address for your outbound public traffic from your app is still one of the IP addresses that are listed in your app properties. If you route your traffic through a firewall or a NAT gateway, the source IP address will then originate from this service. -Learn [how to configure application routing](./configure-vnet-integration-routing.md). --We recommend that you use the **Route All** configuration setting to enable routing of all traffic. Using the configuration setting allows you to audit the behavior with [a built-in policy](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F33228571-70a4-4fa1-8ca1-26d0aba8d6ef). The existing `WEBSITE_VNET_ROUTE_ALL` app setting can still be used, and you can enable all traffic routing with either setting. +Learn [how to configure application routing](./configure-vnet-integration-routing.md#configure-application-routing). > [!NOTE] > Outbound SMTP connectivity (port 25) is supported for App Service when the SMTP traffic is routed through the virtual network integration. The supportability is determined by a setting on the subscription where the virtual network is deployed. For virtual networks/subnets created before August 1, 2022 you need to initiate a temporary configuration change to the virtual network/subnet for the setting to be synchronized from the subscription. 
An example could be to add a temporary subnet, associate/dissociate an NSG temporarily, or configure a service endpoint temporarily. For more information and troubleshooting see [Troubleshoot outbound SMTP connectivity problems in Azure](../virtual-network/troubleshoot-outbound-smtp-connectivity.md). When you're using virtual network integration, you can configure how parts of th Bringing your own storage for content is often used in Functions where [content storage](./../azure-functions/configure-networking-how-to.md#restrict-your-storage-account-to-a-virtual-network) is configured as part of the Functions app. -To route content storage traffic through the virtual network integration, you need to add an app setting named `WEBSITE_CONTENTOVERVNET` with the value `1`. In addition to adding the app setting, you must also ensure that any firewall or Network Security Group configured on traffic from the subnet allow traffic to port 443 and 445. +To route content storage traffic through the virtual network integration, you must ensure that the routing setting is configured. Learn [how to configure content storage routing](./configure-vnet-integration-routing.md#content-storage). ++In addition to configuring the routing, you must also ensure that any firewall or Network Security Group configured on traffic from the subnet allows traffic to ports 443 and 445. ##### Container image pull -When using custom containers for Linux, you can pull the container over the virtual network integration. To route the container pull traffic through the virtual network integration, you must add an app setting named `WEBSITE_PULL_IMAGE_OVER_VNET` with the value `true`. +When using custom containers, you can pull the container over the virtual network integration. To route the container pull traffic through the virtual network integration, you must ensure that the routing setting is configured. Learn [how to configure image pull routing](./configure-vnet-integration-routing.md#container-image-pull). ##### App settings using Key Vault references App settings using Key Vault references will attempt to get secrets over the public route. If the Key Vault is blocking public traffic and the app is using virtual network integration, an attempt will then be made to get the secrets through the virtual network integration. > [!NOTE]-> * Windows containers don't support pulling custom container images over virtual network integration. > * Backup/restore to private storage accounts is currently not supported. > * Configuring SSL/TLS certificates from private Key Vaults is currently not supported. > * App Service Logs to private storage accounts is currently not supported. We recommend using Diagnostics Logging and allowing Trusted Services for the storage account. There are some limitations with using regional virtual network integration: * The integration subnet can't have [service endpoint policies](../virtual-network/virtual-network-service-endpoint-policies-overview.md) enabled. * The integration subnet can be used by only one App Service plan. * You can't delete a virtual network with an integrated app. Remove the integration before you delete the virtual network. -* You can have only one regional virtual network integration per App Service plan. Multiple apps in the same App Service plan can use the same virtual network. +* You can have two regional virtual network integrations per App Service plan. Multiple apps in the same App Service plan can use the same virtual network integration. 
* You can't change the subscription of an app or a plan while there's an app that's using regional virtual network integration. ## Gateway-required virtual network integration |
app-service | Reference App Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/reference-app-settings.md | Title: Environment variables and app settings reference description: Describes the commonly used environment variables, and which ones can be modified with app settings. Previously updated : 02/15/2022 Last updated : 11/01/2022 # Environment variables and app settings in Azure App Service For more information on deployment slots, see [Set up staging environments in Az | Setting name| Description | Example | |-|-|-|-|`WEBSITE_SLOT_NAME`| Read-only. Name of the current deployment slot. The name of the production slot is `Production`. || |`WEBSITE_OVERRIDE_STICKY_EXTENSION_VERSIONS`| By default, the versions for site extensions are specific to each slot. This prevents unanticipated application behavior due to changing extension versions after a swap. If you want the extension versions to swap as well, set to `0` on *all slots*. || |`WEBSITE_OVERRIDE_PRESERVE_DEFAULT_STICKY_SLOT_SETTINGS`| Designates certain settings as [sticky or not swappable by default](deploy-staging-slots.md#which-settings-are-swapped). Default is `true`. Set this setting to `false` or `0` for *all deployment slots* to make them swappable instead. There's no fine-grain control for specific setting types. || |`WEBSITE_SWAP_WARMUP_PING_PATH`| Path to ping to warm up the target slot in a swap, beginning with a slash. The default is `/`, which pings the root path over HTTP. | `/statuscheck` | |
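For example, to set the warm-up ping path on a slot before a swap, a minimal sketch assuming a slot named `staging`:

```azurecli-interactive
az webapp config appsettings set --resource-group <group-name> --name <app-name> --slot staging --settings WEBSITE_SWAP_WARMUP_PING_PATH=/statuscheck
```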
app-service | Reference Dangling Subdomain Prevention | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/reference-dangling-subdomain-prevention.md | Learn more about Subdomain Takeover at [Dangling DNS and subdomain takeover](/az Azure App Service provides [Name Reservation Service](#how-app-service-prevents-subdomain-takeovers) and [domain verification tokens](#how-you-can-prevent-subdomain-takeovers) to prevent subdomain takeovers. ## How App Service prevents subdomain takeovers -Upon deletion of an App Service app, the corresponding DNS is reserved. During the reservation period, reuse of the DNS is forbidden except for subscriptions belonging to tenant of the subscription originally owning the DNS. +Upon deletion of an App Service app or App Service Environment (ASE), the corresponding DNS is reserved. During the reservation period, reuse of the DNS is forbidden except for subscriptions belonging to the tenant of the subscription that originally owned the DNS. -After the reservation expires, the DNS is free to be claimed by any subscription. By Name Reservation Service, the customer is afforded some time to either clean-up any associations/pointers to said DNS or reclaim the DNS in Azure. The DNS name being reserved can be derived by appending 'azurewebsites.net'. Name Reservation Service is enabled by default on Azure App Service and doesn't require more configuration. +After the reservation expires, the DNS is free to be claimed by any subscription. Through the Name Reservation Service, the customer is afforded some time to either clean up any associations/pointers to said DNS or reclaim the DNS in Azure. The DNS name being reserved for web apps can be derived by appending 'azurewebsites.net', and for an ASE by appending 'appserviceenvironment.net'. Name Reservation Service is enabled by default on Azure App Service and doesn't require more configuration. #### Example scenario |
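On the prevention side, a domain verification token is published as an `asuid` TXT record before pointing a CNAME at the app. A sketch for an Azure DNS zone, assuming placeholder names and the app's `customDomainVerificationId` value:

```azurecli-interactive
az network dns record-set txt add-record --resource-group <group-name> --zone-name contoso.com --record-set-name asuid.api --value <custom-domain-verification-id>
```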
attestation | Azure Tpm Vbs Attestation Usage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/azure-tpm-vbs-attestation-usage.md | + + Title: Azure TPM VBS attestation usage +description: Learn about how to apply TPM and VBS attestation ++++ Last updated : 09/05/2022+++++# Using TPM/VBS attestation ++Attestation can be integrated into various applications and services, catering to different use cases. The Azure Attestation service, which acts as the remote attestation service, can be used for the desired purposes by updating the attestation policy. The policy engine works as a processor, which takes the incoming payload as evidence and performs the validations as authored in the policy. This architecture simplifies the workflow and enables the service owner to purpose-build solutions for the varied platforms and use cases. The workflow remains the same as described in [Azure attestation workflow](workflow.md). The attestation policy needs to be crafted as per the validations required. ++Attesting a platform has its own challenges given the varied components of boot and setup; one needs to rely on a hardware root-of-trust anchor that can be used to verify the first steps of the boot and extend that trust upwards into every layer of the system. A hardware TPM provides such an anchor for a remote attestation solution. Azure Attestation provides a highly scalable measured boot and runtime integrity measurement attestation solution with a revocation framework to give you full control over platform attestation. ++## Attestation steps ++Attestation setup has two parts: one pertaining to the service and one pertaining to the client. +++Detailed information about the workflow is described in [Azure attestation workflow](workflow.md). ++### Service endpoint setup: +This is the first step for any attestation to be performed. Setting up an endpoint can be done either via code or through the Azure portal. ++Here's how you can set up an attestation endpoint using the Azure portal: ++1 Prerequisite: Access to the Microsoft Azure Active Directory (Azure AD) tenant and subscription under which you want to create the attestation endpoint. +Learn more about setting up an [Azure AD tenant](../active-directory/develop/quickstart-create-new-tenant.md). ++2 Create an endpoint under the desired resource group, with the desired name. +> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RE5azcU] ++3 Add the Attestation Contributor role to the identity that will be responsible for updating the attestation policy. +> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RE5aoRj] ++4 Configure the endpoint with the required policy. +> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RE5aoRk] ++Sample policies can be found in the [policy section](tpm-attestation-sample-policies.md). ++> [!NOTE] +> TPM endpoints are designed to be provisioned without a default attestation policy. +++### Client setup: +A client communicating with the attestation service endpoint needs to follow the protocol as described in the [protocol documentation](virtualization-based-security-protocol.md). Use the [Attestation Client NuGet](https://www.nuget.org/packages/Microsoft.Attestation.Client) to ease the integration. + +1 Prerequisite: An Azure AD identity is needed to access the TPM endpoint. +Learn more about [Azure AD identity tokens](../active-directory/develop/v2-overview.md). ++2 Add the Attestation Reader role to the identity that will be used for authentication against the endpoint. 
+> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RE5aoRi] +++## Execute the attestation workflow: +Use the [Client](https://github.com/microsoft/Attestation-Client-Samples) to trigger an attestation flow. A successful attestation will result in an attestation report (an encoded JWT token). By parsing the JWT token, the contents of the report can be easily validated against the expected outcome. ++> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RE5azcT] +++Here's a sample of the contents of the attestation report. ++The Open ID [metadata endpoint](/rest/api/attestation/metadata-configuration/get?tabs=HTTP) contains properties that describe the attestation service. The signing keys describe the keys that will be used to sign tokens generated by the attestation service. All tokens emitted by the attestation service will be signed by one of the certificates listed in the attestation signing keys. ++## Next steps +- [Set up Azure Attestation using PowerShell](quickstart-powershell.md) +- [Attest an SGX enclave using code samples](/samples/browse/?expanded=azure&terms=attestation) +- [Learn more about policy](policy-reference.md) |
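A hedged CLI sketch of the endpoint and role steps above; the `attestation` Azure CLI extension is assumed, and all names are placeholders:

```azurecli-interactive
# Create the attestation endpoint (provider) in the desired resource group.
az attestation create --name <provider-name> --resource-group <group-name> --location <location>

# Grant the client identity read access for authenticating against the endpoint.
az role assignment create --assignee <client-object-id> --role "Attestation Reader" --scope <attestation-provider-resource-id>
```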
attestation | Policy Examples | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/policy-examples.md | Title: Examples of an Azure Attestation policy + Title: Examples of an Azure SGX Attestation policy description: Examples of Azure Attestation policy. eyJhbGciOiJub25lIn0.eyJBdHRlc3RhdGlvblBvbGljeSI6ICJkbVZ5YzJsdmJqMGdNUzR3TzJGMWRH eyJhbGciOiJSU0EyNTYiLCJ4NWMiOlsiTUlJQzFqQ0NBYjZnQXdJQkFnSUlTUUdEOUVGakJcdTAwMkJZd0RRWUpLb1pJaHZjTkFRRUxCUUF3SWpFZ01CNEdBMVVFQXhNWFFYUjBaWE4wWVhScGIyNURaWEowYVdacFkyRjBaVEF3SGhjTk1qQXhNVEl6TVRneU1EVXpXaGNOTWpFeE1USXpNVGd5TURVeldqQWlNU0F3SGdZRFZRUURFeGRCZEhSbGMzUmhkR2x2YmtObGNuUnBabWxqWVhSbE1EQ0NBU0l3RFFZSktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQUpyRWVNSlo3UE01VUJFbThoaUNLRGh6YVA2Y2xYdkhmd0RIUXJ5L3V0L3lHMUFuMGJ3MVU2blNvUEVtY2FyMEc1WmYxTUR4alZOdEF5QjZJWThKLzhaQUd4eFFnVVZsd1dHVmtFelpGWEJVQTdpN1B0NURWQTRWNlx1MDAyQkJnanhTZTBCWVpGYmhOcU5zdHhraUNybjYwVTYwQUU1WFx1MDAyQkE1M1JvZjFUUkNyTXNLbDRQVDRQeXAzUUtNVVlDaW9GU3d6TkFQaU8vTy9cdTAwMkJIcWJIMXprU0taUXh6bm5WUGVyYUFyMXNNWkptRHlyUU8vUFlMTHByMXFxSUY2SmJsbjZEenIzcG5uMXk0Wi9OTzJpdFBxMk5Nalx1MDAyQnE2N1FDblNXOC9xYlpuV3ZTNXh2S1F6QVR5VXFaOG1PSnNtSThUU05rLzBMMlBpeS9NQnlpeDdmMTYxQ2tjRm1LU3kwQ0F3RUFBYU1RTUE0d0RBWURWUjBUQkFVd0F3RUIvekFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBZ1ZKVWRCaXRud3ZNdDdvci9UMlo4dEtCUUZsejFVcVVSRlRUTTBBcjY2YWx2Y2l4VWJZR3gxVHlTSk5pbm9XSUJROU9QamdMa1dQMkVRRUtvUnhxN1NidGxqNWE1RUQ2VjRyOHRsejRISjY0N3MyM2V0blJFa2o5dE9Gb3ZNZjhOdFNVeDNGTnBhRUdabDJMUlZHd3dcdTAwMkJsVThQd0gzL2IzUmVCZHRhQTdrZmFWNVx1MDAyQml4ZWRjZFN5S1F1VkFUbXZNSTcxM1A4VlBsNk1XbXNNSnRrVjNYVi9ZTUVzUVx1MDAyQkdZcU1yN2tLWGwxM3lldUVmVTJWVkVRc1ovMXRnb29iZVZLaVFcdTAwMkJUcWIwdTJOZHNcdTAwMkJLamRIdmFNYngyUjh6TDNZdTdpR0pRZnd1aU1tdUxSQlJwSUFxTWxRRktLNmRYOXF6Nk9iT01zUjlpczZ6UDZDdmxGcEV6bzVGUT09Il19.eyJBdHRlc3RhdGlvblBvbGljeSI6ImRtVnljMmx2YmoweExqQTdZWFYwYUc5eWFYcGhkR2x2Ym5KMWJHVnpJSHRqT2x0MGVYQmxQVDBpSkdsekxXUmxZblZuWjJGaWJHVWlYU0FtSmlCYmRtRnNkV1U5UFhSeWRXVmRJRDAtSUdSbGJua29LVHM5UGlCd1pYSnRhWFFvS1R0OU8ybHpjM1ZoYm1ObGNuVnNaWE1nZXlBZ0lDQmpPbHQwZVhCbFBUMGlKR2x6TFdSbFluVm5aMkZpYkdVaVhTQTlQaUJwYzNOMVpTaDBlWEJsUFNKT2IzUkVaV0oxWjJkaFlteGxJaXdnZG1Gc2RXVTlZeTUyWVd4MVpTazdJQ0FnSUdNNlczUjVjR1U5UFNJa2FYTXRaR1ZpZFdkbllXSnNaU0pkSUQwLUlHbHpjM1ZsS0hSNWNHVTlJbWx6TFdSbFluVm5aMkZpYkdVaUxDQjJZV3gxWlQxakxuWmhiSFZsS1RzZ0lDQWdZenBiZEhsd1pUMDlJaVJ6WjNndGJYSnphV2R1WlhJaVhTQTlQaUJwYzNOMVpTaDBlWEJsUFNKelozZ3RiWEp6YVdkdVpYSWlMQ0IyWVd4MVpUMWpMblpoYkhWbEtUc2dJQ0FnWXpwYmRIbHdaVDA5SWlSelozZ3RiWEpsYm1Oc1lYWmxJbDBnUFQ0Z2FYTnpkV1VvZEhsd1pUMGljMmQ0TFcxeVpXNWpiR0YyWlNJc0lIWmhiSFZsUFdNdWRtRnNkV1VwT3lBZ0lDQmpPbHQwZVhCbFBUMGlKSEJ5YjJSMVkzUXRhV1FpWFNBOVBpQnBjM04xWlNoMGVYQmxQU0p3Y205a2RXTjBMV2xrSWl3Z2RtRnNkV1U5WXk1MllXeDFaU2s3SUNBZ0lHTTZXM1I1Y0dVOVBTSWtjM1p1SWwwZ1BUNGdhWE56ZFdVb2RIbHdaVDBpYzNadUlpd2dkbUZzZFdVOVl5NTJZV3gxWlNrN0lDQWdJR002VzNSNWNHVTlQU0lrZEdWbElsMGdQVDRnYVhOemRXVW9kSGx3WlQwaWRHVmxJaXdnZG1Gc2RXVTlZeTUyWVd4MVpTazdmVHMifQ.c0l-xqGDFQ8_kCiQ0_vvmDQYG_u544CYmoiucPNxd9MU8ZXT69UD59UgSuya2yl241NoVXA_0LaMEB2re0JnTbPD_dliJn96HnIOqnxXxRh7rKbu65ECUOMWPXbyKQMZ0I3Wjhgt_XyyhfEiQGfJfGzA95-wm6yWqrmW7dMI7JkczG9ideztnr0bsw5NRsIWBXOjVy7Bg66qooTnODS_OqeQ4iaNsN-xjMElHABUxXhpBt2htbhemDU1X41o8clQgG84aEHCgkE07pR-7IL_Fn2gWuPVC66yxAp00W1ib2L-96q78D9J52HPdeDCSFio2RL7r5lOtz8YkQnjacb6xA ``` -## Sample policy for TPM using Policy version 1.0 --``` -version=1.0; --authorizationrules { - => permit(); -}; --issuancerules -{ -[type=="aikValidated", value==true]&& -[type=="secureBootEnabled", value==true] && -[type=="bootDebuggingDisabled", value==true] && -[type=="vbsEnabled", value==true] && -[type=="notWinPE", 
value==true] && -[type=="notSafeMode", value==true] => issue(type="PlatformAttested", value=true); -}; -``` --A simple TPM attestation policy that can be used to verify minimal aspects of the boot. --## Sample policy for TPM using Policy version 1.2 --``` -version=1.2; --configurationrules{ - => issueproperty(type="required_pcr_mask", value=131070); - => issueproperty(type="require_valid_aik_cert", value=false); -}; --authorizationrules { -c:[type == "tpmVersion", issuer=="AttestationService", value==2] => permit(); -}; --issuancerules{ --c:[type == "aikValidated", issuer=="AttestationService"] =>issue(type="aikValidated", value=c.value); --// SecureBoot enabled -c:[type == "events", issuer=="AttestationService"] => add(type = "efiConfigVariables", value = JmesPath(c.value, "Events[?EventTypeString == 'EV_EFI_VARIABLE_DRIVER_CONFIG' && ProcessedData.VariableGuid == '8BE4DF61-93CA-11D2-AA0D-00E098032B8C']")); -c:[type == "efiConfigVariables", issuer=="AttestationPolicy"]=> issue(type = "SecureBootEnabled", value = JsonToClaimValue(JmesPath(c.value, "[?ProcessedData.UnicodeName == 'SecureBoot'] | length(@) == `1` && @[0].ProcessedData.VariableData == 'AQ'"))); -![type=="SecureBootEnabled", issuer=="AttestationPolicy"] => issue(type="SecureBootEnabled", value=false); --// Retrieve bool properties Code integrity -c:[type=="events", issuer=="AttestationService"] => add(type="boolProperties", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` || PcrIndex == `13` || PcrIndex == `19` || PcrIndex == `20`)].ProcessedData.EVENT_TRUSTBOUNDARY")); -c:[type=="boolProperties", issuer=="AttestationPolicy"] => add(type="codeIntegrityEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_CODEINTEGRITY"))); -c:[type=="codeIntegrityEnabledSet", issuer=="AttestationPolicy"] => issue(type="CodeIntegrityEnabled", value=ContainsOnlyValue(c.value, true)); -![type=="CodeIntegrityEnabled", issuer=="AttestationPolicy"] => issue(type="CodeIntegrityEnabled", value=false); --// Bitlocker Boot Status, The first non zero measurement or zero. -c:[type=="events", issuer=="AttestationService"] => add(type="srtmDrtmEventPcr", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` || PcrIndex == `19`)].ProcessedData.EVENT_TRUSTBOUNDARY")); -c:[type=="srtmDrtmEventPcr", issuer=="AttestationPolicy"] => add(type="BitlockerStatus", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_BITLOCKER_UNLOCK | @[? Value != `0`].Value | @[0]"))); -[type=="BitlockerStatus", issuer=="AttestationPolicy"] => issue(type="BitlockerStatus", value=true); -![type=="BitlockerStatus", issuer=="AttestationPolicy"] => issue(type="BitlockerStatus", value=false); --// Elam Driver (windows defender) Loaded -c:[type=="boolProperties", issuer=="AttestationPolicy"] => add(type="elamDriverLoaded", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_LOADEDMODULE_AGGREGATION[] | [? EVENT_IMAGEVALIDATED == `true` && (equals_ignore_case(EVENT_FILEPATH, '\\windows\\system32\\drivers\\wdboot.sys') || equals_ignore_case(EVENT_FILEPATH, '\\windows\\system32\\drivers\\wd\\wdboot.sys'))] | @ != `null`"))); -[type=="elamDriverLoaded", issuer=="AttestationPolicy"] => issue(type="ELAMDriverLoaded", value=true); -![type=="elamDriverLoaded", issuer=="AttestationPolicy"] => issue(type="ELAMDriverLoaded", value=false); --}; --``` --The policy uses the TPM version to restrict attestation calls. The issuancerules looks at various properties measured during boot. 
- ## Next steps - [How to author and sign an attestation policy](author-sign-policy.md) |
attestation | Policy Version 1 2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/policy-version-1-2.md | Some of the key rules you can use to generate claims are listed here. |Feature |Description |Policy rule | |--|-|--|-| Secure boot |Device boots using only software that's trusted by the OEM, which is Microsoft. | `c:[type == "events", issuer=="AttestationService"] => add(type = "efiConfigVariables", value = JmesPath(c.value, "Events[?EventTypeString == 'EV_EFI_VARIABLE_DRIVER_CONFIG' && ProcessedData.VariableGuid == '8BE4DF61-93CA-11D2-AA0D-00E098032B8C']")); => issue(type = "secureBootEnabled", value = JsonToClaimValue(JmesPath(c.value, "[?ProcessedData.UnicodeName == 'SecureBoot'] \| length(@) == `1` && @[0].ProcessedData.VariableData == 'AQ'"))); \![type=="secureBootEnabled", issuer=="AttestationPolicy"] => issue(type="secureBootEnabled", value=false);` | -| Code integrity |Code integrity is a feature that validates the integrity of a driver or system file each time it's loaded into memory.| `// Retrieve bool propertiesc:[type=="events", issuer=="AttestationService"] => add(type="boolProperties", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` \|\| PcrIndex == `13` \|\| PcrIndex == `19` \|\| PcrIndex == `20`)].ProcessedData.EVENT_TRUSTBOUNDARY"));c:[type=="boolProperties", issuer=="AttestationPolicy"] => add(type="codeIntegrityEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_CODEINTEGRITY")));c:[type=="codeIntegrityEnabledSet", issuer=="AttestationPolicy"] => issue(type="codeIntegrityEnabled", value=ContainsOnlyValue(c.value, true));\![type=="codeIntegrityEnabled", issuer=="AttestationPolicy"] => issue(type="codeIntegrityEnabled", value=false);` | -|BitLocker [Boot state] |Used for encryption of device drives.| `// Bitlocker Boot Status, The first non zero measurement or zero.c:[type=="events", issuer=="AttestationService"] => add(type="srtmDrtmEventPcr", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` \|\| PcrIndex == `19`)].ProcessedData.EVENT_TRUSTBOUNDARY"));c:[type=="srtmDrtmEventPcr", issuer=="AttestationPolicy"] => issue(type="bitlockerStatus", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_BITLOCKER_UNLOCK \| @[? Value != `0`].Value \| @[0]")));\![type=="bitlockerStatus"] => issue(type="bitlockerStatus", value=0);Nonzero means enabled.` | -| Early Launch Antimalware (ELAM) | ELAM protects against loading unsigned or malicious drivers during boot. | `// Elam Driver (windows defender) Loaded.c:[type=="events", issuer=="AttestationService"] => add(type="boolProperties", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` \|\| PcrIndex == `13` \|\| PcrIndex == `19` \|\| PcrIndex == `20`)].ProcessedData.EVENT_TRUSTBOUNDARY"));c:[type=="boolProperties", issuer=="AttestationPolicy"] => issue(type="elamDriverLoaded", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_LOADEDMODULE_AGGREGATION[] \| [? EVENT_IMAGEVALIDATED == `true` && (equals_ignore_case(EVENT_FILEPATH, '\\windows\\system32\\drivers\\wdboot.sys') \|\| equals_ignore_case(EVENT_FILEPATH, '\\windows\\system32\\drivers\\wd\\wdboot.sys'))] \| @ != `null`")));![type=="elamDriverLoaded", issuer=="AttestationPolicy"] => issue(type="elamDriverLoaded", value=false);` | -| Boot debugging |Allows the user to connect to a boot debugger. Can be used to bypass secure boot and other boot protections. 
| `// Boot debuggingc:[type=="events", issuer=="AttestationService"] => add(type="boolProperties", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` \|\| PcrIndex == `13` \|\| PcrIndex == `19` \|\| PcrIndex == `20`)].ProcessedData.EVENT_TRUSTBOUNDARY"));c:[type=="boolProperties", issuer=="AttestationPolicy"] => add(type="bootDebuggingEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_BOOTDEBUGGING")));c:[type=="bootDebuggingEnabledSet", issuer=="AttestationPolicy"] => issue(type="bootDebuggingDisabled", value=ContainsOnlyValue(c.value, false));\![type=="bootDebuggingDisabled", issuer=="AttestationPolicy"] => issue(type="bootDebuggingDisabled", value=false);` | -| Kernel debugging | Allows the user to connect a kernel debugger. Grants access to all system resources (less virtualization-based security [VBS] protected resources). | `// Kernel Debuggingc:[type=="events", issuer=="AttestationService"] => add(type="boolProperties", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` \|\| PcrIndex == `13` \|\| PcrIndex == `19` \|\| PcrIndex == `20`)].ProcessedData.EVENT_TRUSTBOUNDARY"));c:[type=="boolProperties", issuer=="AttestationPolicy"] => add(type="osKernelDebuggingEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_OSKERNELDEBUG")));c:[type=="osKernelDebuggingEnabledSet", issuer=="AttestationPolicy"] => issue(type="osKernelDebuggingDisabled", value=ContainsOnlyValue(c.value, false));\![type=="osKernelDebuggingDisabled", issuer=="AttestationPolicy"] => issue(type="osKernelDebuggingDisabled", value=false);` | -|Data Execution Prevention (DEP) policy | DEP policy is a set of hardware and software technologies that perform extra checks on memory to help prevent malicious code from running on a system. | `// DEP Policyc:[type=="events", issuer=="AttestationService"] => add(type="boolProperties", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` \|\| PcrIndex == `13` \|\| PcrIndex == `19` \|\| PcrIndex == `20`)].ProcessedData.EVENT_TRUSTBOUNDARY"));c:[type=="boolProperties", issuer=="AttestationPolicy"] => issue(type="depPolicy", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_DATAEXECUTIONPREVENTION.Value \| @[-1]")));\![type=="depPolicy"] => issue(type="depPolicy", value=0);` | -| Test and flight signing | Enables the user to run test-signed code. | `// Test Signing< c:[type=="events", issuer=="AttestationService"] => add(type="boolProperties", value=JmesPath(c.value, "Events[? 
EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` \|\| PcrIndex == `13` \|\| PcrIndex == `19` \|\| PcrIndex == `20`)].ProcessedData.EVENT_TRUSTBOUNDARY")); c:[type=="boolProperties", issuer=="AttestationPolicy"] => add(type="testSigningEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_TESTSIGNING"))); c:[type=="testSigningEnabledSet", issuer=="AttestationPolicy"] => issue(type="testSigningDisabled", value=ContainsOnlyValue(c.value, false)); ![type=="testSigningDisabled", issuer=="AttestationPolicy"] => issue(type="testSigningDisabled", value=false);//Flight Signingc:[type=="boolProperties", issuer=="AttestationPolicy"] => add(type="flightSigningEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "\[\*\].EVENT_FLIGHTSIGNING")));c:[type=="flightSigningEnabledSet", issuer=="AttestationPolicy"] => issue(type="flightSigningNotEnabled", value=ContainsOnlyValue(c.value, false));![type=="flightSigningNotEnabled", issuer=="AttestationPolicy"] => issue(type="flightSigningNotEnabled", value=false);` | -| Virtual Secure Mode/VBS | VBS uses the Windows hypervisor to create this virtual secure mode that's used to protect vital system and operating system resources and credentials. | `// VSM enabled c:[type=="events", issuer=="AttestationService"] => add(type="srtmDrtmEventPcr", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` \|\| PcrIndex == `19`)].ProcessedData.EVENT_TRUSTBOUNDARY"));c:[type=="srtmDrtmEventPcr", issuer=="AttestationPolicy"] => add(type="vsmEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "\[\*\].EVENT_VBS_VSM_REQUIRED")));c:[type=="srtmDrtmEventPcr", issuer=="AttestationPolicy"] => add(type="vsmEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_VBS_MANDATORY_ENFORCEMENT")));c:[type=="vsmEnabledSet", issuer=="AttestationPolicy"] => issue(type="vsmEnabled", value=ContainsOnlyValue(c.value, true));![type=="vsmEnabled", issuer=="AttestationPolicy"] => issue(type="vsmEnabled", value=false);c:[type=="vsmEnabled", issuer=="AttestationPolicy"] => issue(type="vbsEnabled", value=c.value);` | -| Hypervisor-protected code integrity (HVCI) | HVCI is a feature that validates the integrity of a system file each time it's loaded into memory.| `// HVCIc:[type=="events", issuer=="AttestationService"] => add(type="srtmDrtmEventPcr", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` \|\| PcrIndex == `19`)].ProcessedData.EVENT_TRUSTBOUNDARY"));c:[type=="srtmDrtmEventPcr", issuer=="AttestationPolicy"] => add(type="hvciEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_VBS_HVCI_POLICY \| @[?String == 'HypervisorEnforcedCodeIntegrityEnable'].Value")));c:[type=="hvciEnabledSet", issuer=="AttestationPolicy"] => issue(type="hvciEnabled", value=ContainsOnlyValue(c.value, 1));![type=="hvciEnabled", issuer=="AttestationPolicy"] => issue(type="hvciEnabled", value=false);` | -| Input-output memory management unit (IOMMU) | IOMMU translates virtual to physical memory addresses for Direct memory access-capable device peripherals. IOMMU protects sensitive memory regions. | `// IOMMUc:[type=="events", issuer=="AttestationService"] => add(type="boolProperties", value=JmesPath(c.value, "Events[? 
EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` \|\| PcrIndex == `13` \|\| PcrIndex == `19` \|\| PcrIndex == `20`)].ProcessedData.EVENT_TRUSTBOUNDARY"));c:[type=="boolProperties", issuer=="AttestationPolicy"] => add(type="iommuEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_VBS_IOMMU_REQUIRED")));c:[type=="iommuEnabledSet", issuer=="AttestationPolicy"] => issue(type="iommuEnabled", value=ContainsOnlyValue(c.value, true));![type=="iommuEnabled", issuer=="AttestationPolicy"] => issue(type="iommuEnabled", value=false);` | -| PCR value evaluation | PCRs contain measurements of components that are made during the boot. These measurements can be used to verify the components against golden or known measurements. | `//PCRS are only read-only and thus cannot be used with issue operation, but they can be used to validate expected/golden measurements.c:[type == "pcrs", issuer=="AttestationService"] && c1:[type=="pcrMatchesExpectedValue", value==JsonToClaimValue(JmesPath(c.value, "PCRs[? Index == `0`].Digests.SHA1 \| @[0] == `\"KCk6Ow\"`"))] => issue(claim = c1);` | -| Boot Manager version | The security version number of the Boot Manager that was loaded during initial boot on the attested device. | `// Find the first EVENT_APPLICATION_SVN. That value is the Boot Manager SVN// Find the first EV_SEPARATOR in PCR 12, 13, Or 14c:[type=="events", issuer=="AttestationService"] => add(type="evSeparatorSeq", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_SEPARATOR' && (PcrIndex == `12` \|\| PcrIndex == `13` \|\| PcrIndex == `14`)] \| @[0].EventSeq"));c:[type=="evSeparatorSeq", value != "null", issuer=="AttestationPolicy"] => add(type="beforeEvSepClause", value=AppendString(AppendString("Events[? EventSeq < `", c.value), "`"));[type=="evSeparatorSeq", value=="null", issuer=="AttestationPolicy"] => add(type="beforeEvSepClause", value="Events[? `true` ");// Find the first EVENT_APPLICATION_SVN. That value is the Boot Manager SVNc:[type=="beforeEvSepClause", issuer=="AttestationPolicy"] => add(type="bootMgrSvnSeqQuery", value=AppendString(c.value, " && EventTypeString == 'EV_EVENT_TAG' && PcrIndex == `12` && ProcessedData.EVENT_TRUSTBOUNDARY.EVENT_APPLICATION_SVN] \| @[0].EventSeq"));c1:[type=="bootMgrSvnSeqQuery", issuer=="AttestationPolicy"] && c2:[type=="events", issuer=="AttestationService"] => add(type="bootMgrSvnSeq", value=JmesPath(c2.value, c1.value));c:[type=="bootMgrSvnSeq", value!="null", issuer=="AttestationPolicy"] => add(type="bootMgrSvnQuery", value=AppendString(AppendString("Events[? EventSeq == `", c.value), "`].ProcessedData.EVENT_TRUSTBOUNDARY.EVENT_APPLICATION_SVN \| @[0]"));c1:[type=="bootMgrSvnQuery", issuer=="AttestationPolicy"] && c2:[type=="events", issuer=="AttestationService"] => issue(type="bootMgrSvn", value=JsonToClaimValue(JmesPath(c2.value, c1.value)));` | -| Safe mode | Safe mode is a troubleshooting option for Windows that starts your computer in a limited state. Only the basic files and drivers necessary to run Windows are started. | `// Safe modec:[type=="events", issuer=="AttestationService"] => add(type="boolProperties", value=JmesPath(c.value, "Events[? 
EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` \|\| PcrIndex == `13` \|\| PcrIndex == `19` \|\| PcrIndex == `20`)].ProcessedData.EVENT_TRUSTBOUNDARY"));c:[type=="boolProperties", issuer=="AttestationPolicy"] => add(type="safeModeEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_SAFEMODE")));c:[type=="safeModeEnabledSet", issuer=="AttestationPolicy"] => issue(type="notSafeMode", value=ContainsOnlyValue(c.value, false));![type=="notSafeMode", issuer=="AttestationPolicy"] => issue(type="notSafeMode", value=true);` | -| WinPE boot | Windows pre-installation Environment (Windows PE) is a minimal operating system with limited services that's used to prepare a computer for Windows installation, to copy disk images from a network file server, and to initiate Windows setup. | `// Win PEc:[type=="events", issuer=="AttestationService"] => add(type="boolProperties", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` \|\| PcrIndex == `13` \|\| PcrIndex == `19` \|\| PcrIndex == `20`)].ProcessedData.EVENT_TRUSTBOUNDARY"));c:[type=="boolProperties", issuer=="AttestationPolicy"] => add(type="winPEEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "\[\*\].EVENT_WINPE")));c:[type=="winPEEnabledSet", issuer=="AttestationPolicy"] => issue(type="notWinPE", value=ContainsOnlyValue(c.value, false));![type=="notWinPE", issuer=="AttestationPolicy"] => issue(type="notWinPE", value=true);` | -| Code integrity (CI) policy | Hash of CI policy that's controlling the security of the boot environment. | `// CI Policyc :[type=="events", issuer=="AttestationService"] => issue(type="codeIntegrityPolicy", value=JsonToClaimValue(JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && PcrIndex == `13`].ProcessedData.EVENT_TRUSTBOUNDARY.EVENT_SI_POLICY[].RawData")));`| -| Secure Boot Configuration Policy Hash (SBCPHash) | SBCPHash is the fingerprint of the Custom SBCP that was loaded during boot in Windows devices, except PCs. | `// Secure Boot Custom Policyc:[type=="events", issuer=="AttestationService"] => issue(type="secureBootCustomPolicy", value=JsonToClaimValue(JmesPath(c.value, "Events[? EventTypeString == 'EV_EFI_VARIABLE_DRIVER_CONFIG' && PcrIndex == `7` && ProcessedData.UnicodeName == 'CurrentPolicy' && ProcessedData.VariableGuid == '77FA9ABD-0359-4D32-BD60-28F4E78F784B'].ProcessedData.VariableData \| @[0]")));` | -| Boot application subversion | The version of the Boot Manager that's running on the device. | `// Find the first EV_SEPARATOR in PCR 12, 13, Or 14, the ordering of the events is critical to ensure correctness.c:[type=="events", issuer=="AttestationService"] => add(type="evSeparatorSeq", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_SEPARATOR' && (PcrIndex == `12` \|\| PcrIndex == `13` \|\| PcrIndex == `14`)] \| @[0].EventSeq"));c:[type=="evSeparatorSeq", value != "null", issuer=="AttestationPolicy"] => add(type="beforeEvSepClause", value=AppendString(AppendString("Events[? EventSeq < `", c.value), "`"));[type=="evSeparatorSeq", value=="null", issuer=="AttestationPolicy"] => add(type="beforeEvSepClause", value="Events[? 
`true` "); // No restriction of EV_SEPARATOR in case it is not present// Find the first EVENT_TRANSFER_CONTROL with value 1 or 2 in PCR 12 that is before the EV_SEPARATORc1:[type=="beforeEvSepClause", issuer=="AttestationPolicy"] && c2:[type=="bootMgrSvnSeq", value != "null", issuer=="AttestationPolicy"] => add(type="beforeEvSepAfterBootMgrSvnClause", value=AppendString(AppendString(AppendString(c1.value, "&& EventSeq >= `"), c2.value), "`"));c:[type=="beforeEvSepAfterBootMgrSvnClause", issuer=="AttestationPolicy"] => add(type="tranferControlQuery", value=AppendString(c.value, " && EventTypeString == 'EV_EVENT_TAG' && PcrIndex == `12`&& (ProcessedData.EVENT_TRUSTBOUNDARY.EVENT_TRANSFER_CONTROL.Value == `1` \|\| ProcessedData.EVENT_TRUSTBOUNDARY.EVENT_TRANSFER_CONTROL.Value == `2`)] \| @[0].EventSeq"));c1:[type=="tranferControlQuery", issuer=="AttestationPolicy"] && c2:[type=="events", issuer=="AttestationService"] => add(type="tranferControlSeq", value=JmesPath(c2.value, c1.value));// Find the first non-null EVENT_MODULE_SVN in PCR 13 after the transfer control.c:[type=="tranferControlSeq", value!="null", issuer=="AttestationPolicy"] => add(type="afterTransferCtrlClause", value=AppendString(AppendString(" && EventSeq > `", c.value), "`"));c1:[type=="beforeEvSepClause", issuer=="AttestationPolicy"] && c2:[type=="afterTransferCtrlClause", issuer=="AttestationPolicy"] => add(type="moduleQuery", value=AppendString(AppendString(c1.value, c2.value), " && EventTypeString == 'EV_EVENT_TAG' && PcrIndex == `13` && ((ProcessedData.EVENT_TRUSTBOUNDARY.EVENT_LOADEDMODULE_AGGREGATION[].EVENT_MODULE_SVN \| @[0]) \|\| (ProcessedData.EVENT_LOADEDMODULE_AGGREGATION[].EVENT_MODULE_SVN \| @[0]))].EventSeq \| @[0]"));c1:[type=="moduleQuery", issuer=="AttestationPolicy"] && c2:[type=="events", issuer=="AttestationService"] => add(type="moduleSeq", value=JmesPath(c2.value, c1.value));// Find the first EVENT_APPLICATION_SVN after EV_EVENT_TAG in PCR 12. That value is Boot App SVNc:[type=="moduleSeq", value!="null", issuer=="AttestationPolicy"] => add(type="applicationSvnAfterModuleClause", value=AppendString(AppendString(" && EventSeq > `", c.value), "`"));c1:[type=="beforeEvSepClause", issuer=="AttestationPolicy"] && c2:[type=="applicationSvnAfterModuleClause", issuer=="AttestationPolicy"] => add(type="bootAppSvnQuery", value=AppendString(AppendString(c1.value, c2.value), " && EventTypeString == 'EV_EVENT_TAG' && PcrIndex == `12`].ProcessedData.EVENT_TRUSTBOUNDARY.EVENT_APPLICATION_SVN \| @[0]"));c1:[type=="bootAppSvnQuery", issuer=="AttestationPolicy"] && c2:[type=="events", issuer=="AttestationService"] => issue(type="bootAppSvn", value=JsonToClaimValue(JmesPath(c2.value, c1.value)));` | -| Boot revision list | Boot revision list used to direct the device to an enterprise honeypot to further monitor the device's activities. | `// Boot Rev List Info c:[type=="events", issuer=="AttestationService"] => issue(type="bootRevListInfo", value=JsonToClaimValue(JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && PcrIndex == `13`].ProcessedData.EVENT_TRUSTBOUNDARY.EVENT_BOOT_REVOCATION_LIST.RawData \| @[0]")));` | +| Secure boot |Device boots using only software that's trusted by the OEM. 
| ` c:[type == "events", issuer=="AttestationService"] => add(type = "efiConfigVariables", value = JmesPath(c.value, "Events[?EventTypeString == 'EV_EFI_VARIABLE_DRIVER_CONFIG' && ProcessedData.VariableGuid == '8BE4DF61-93CA-11D2-AA0D-00E098032B8C']"));`<br>`c:[type=="efiConfigVariables", issuer=="AttestationPolicy"] => issue(type = "secureBootEnabled", value = JsonToClaimValue(JmesPath(c.value, "[?ProcessedData.UnicodeName == 'SecureBoot'] \| length(@) == `1` && @[0].ProcessedData.VariableData == 'AQ'")));`<br>` ![type=="secureBootEnabled", issuer=="AttestationPolicy"] => issue(type="secureBootEnabled", value=false);` | +| Code integrity |Code integrity is a feature that validates the integrity of a driver or system file each time it's loaded into memory.| `// Retrieve bool properties `<br>` c:[type=="events", issuer=="AttestationService"] => add(type="boolProperties", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` \|\| PcrIndex == `13` \|\| PcrIndex == `19` \|\| PcrIndex == `20`)].ProcessedData.EVENT_TRUSTBOUNDARY"));`<br>`c:[type=="boolProperties", issuer=="AttestationPolicy"] => add(type="codeIntegrityEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_CODEINTEGRITY")));`<br>`c:[type=="codeIntegrityEnabledSet", issuer=="AttestationPolicy"] => issue(type="codeIntegrityEnabled", value=ContainsOnlyValue(c.value, true));`<br>`![type=="codeIntegrityEnabled", issuer=="AttestationPolicy"] => issue(type="codeIntegrityEnabled", value=false);` | +|BitLocker [Boot state] |Used for encryption of device drives.| `// Bitlocker Boot Status, The first non zero measurement or zero.`<br>`c:[type=="events", issuer=="AttestationService"] => add(type="srtmDrtmEventPcr", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` \|\| PcrIndex == `19`)].ProcessedData.EVENT_TRUSTBOUNDARY"));`<br>`c:[type=="srtmDrtmEventPcr", issuer=="AttestationPolicy"] => issue(type="bitlockerStatus", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_BITLOCKER_UNLOCK \| @[? Value != `0`].Value \| @[0]")));`<br>`![type=="bitlockerStatus"] => issue(type="bitlockerStatus", value=0);` | +| Early Launch Antimalware (ELAM) | ELAM protects against loading unsigned or malicious drivers during boot. | `// Elam Driver (windows defender) Loaded.`<br>`c:[type=="events", issuer=="AttestationService"] => add(type="boolProperties", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` \|\| PcrIndex == `13` \|\| PcrIndex == `19` \|\| PcrIndex == `20`)].ProcessedData.EVENT_TRUSTBOUNDARY"));`<br>`c:[type=="boolProperties", issuer=="AttestationPolicy"] => issue(type="elamDriverLoaded", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_LOADEDMODULE_AGGREGATION[] \| [? EVENT_IMAGEVALIDATED == `true` && (equals_ignore_case(EVENT_FILEPATH, '\\windows\\system32\\drivers\\wdboot.sys') \|\| equals_ignore_case(EVENT_FILEPATH, '\\windows\\system32\\drivers\\wd\\wdboot.sys'))] \| @ != `null`")));`<br>`![type=="elamDriverLoaded", issuer=="AttestationPolicy"] => issue(type="elamDriverLoaded", value=false);` | +| Boot debugging |Allows the user to connect to a boot debugger. Can be used to bypass secure boot and other boot protections. | `// Boot debugging`<br>`c:[type=="events", issuer=="AttestationService"] => add(type="boolProperties", value=JmesPath(c.value, "Events[?
EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` \|\| PcrIndex == `13` \|\| PcrIndex == `19` \|\| PcrIndex == `20`)].ProcessedData.EVENT_TRUSTBOUNDARY"));`<br>`c:[type=="boolProperties", issuer=="AttestationPolicy"] => add(type="bootDebuggingEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_BOOTDEBUGGING")));`<br>`c:[type=="bootDebuggingEnabledSet", issuer=="AttestationPolicy"] => issue(type="bootDebuggingDisabled", value=ContainsOnlyValue(c.value, false));`<br>`![type=="bootDebuggingDisabled", issuer=="AttestationPolicy"] => issue(type="bootDebuggingDisabled", value=false);` | +| Kernel debugging | Allows the user to connect a kernel debugger. Grants access to all system resources (less virtualization-based security [VBS] protected resources). | `// Kernel Debugging`<br>`c:[type=="events", issuer=="AttestationService"] => add(type="boolProperties", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` \|\| PcrIndex == `13` \|\| PcrIndex == `19` \|\| PcrIndex == `20`)].ProcessedData.EVENT_TRUSTBOUNDARY"));`<br>`c:[type=="boolProperties", issuer=="AttestationPolicy"] => add(type="osKernelDebuggingEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_OSKERNELDEBUG")));`<br>`c:[type=="osKernelDebuggingEnabledSet", issuer=="AttestationPolicy"] => issue(type="osKernelDebuggingDisabled", value=ContainsOnlyValue(c.value, false));`<br>`![type=="osKernelDebuggingDisabled", issuer=="AttestationPolicy"] => issue(type="osKernelDebuggingDisabled", value=false);` | +|Data Execution Prevention (DEP) policy | DEP policy is a set of hardware and software technologies that perform extra checks on memory to help prevent malicious code from running on a system. | `// DEP Policy`<br>`c:[type=="events", issuer=="AttestationService"] => add(type="boolProperties", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` \|\| PcrIndex == `13` \|\| PcrIndex == `19` \|\| PcrIndex == `20`)].ProcessedData.EVENT_TRUSTBOUNDARY"));`<br>`c:[type=="boolProperties", issuer=="AttestationPolicy"] => issue(type="depPolicy", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_DATAEXECUTIONPREVENTION.Value \| @[-1]")));`<br>`![type=="depPolicy"] => issue(type="depPolicy", value=0);` | +| Test and flight signing | Enables the user to run test-signed code. | `// Test Signing `<br>`c:[type=="events", issuer=="AttestationService"] => add(type="boolProperties", value=JmesPath(c.value, "Events[? 
EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` \|\| PcrIndex == `13` \|\| PcrIndex == `19` \|\| PcrIndex == `20`)].ProcessedData.EVENT_TRUSTBOUNDARY"));`<br>` c:[type=="boolProperties", issuer=="AttestationPolicy"] => add(type="testSigningEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_TESTSIGNING")));`<br>` c:[type=="testSigningEnabledSet", issuer=="AttestationPolicy"] => issue(type="testSigningDisabled", value=ContainsOnlyValue(c.value, false));`<br>` ![type=="testSigningDisabled", issuer=="AttestationPolicy"] => issue(type="testSigningDisabled", value=false);`<br>`//Flight Signingc:[type=="boolProperties", issuer=="AttestationPolicy"] => add(type="flightSigningEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "\[\*\].EVENT_FLIGHTSIGNING")));`<br>`c:[type=="flightSigningEnabledSet", issuer=="AttestationPolicy"] => issue(type="flightSigningNotEnabled", value=ContainsOnlyValue(c.value, false));`<br>`![type=="flightSigningNotEnabled", issuer=="AttestationPolicy"] => issue(type="flightSigningNotEnabled", value=false);` | +| Virtual Secure Mode/VBS | VBS uses the Windows hypervisor to create this virtual secure mode that's used to protect vital system and operating system resources and credentials. | `// VSM enabled `<br>` c:[type=="events", issuer=="AttestationService"] => add(type="srtmDrtmEventPcr", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` \|\| PcrIndex == `19`)].ProcessedData.EVENT_TRUSTBOUNDARY"));`<br>`c:[type=="srtmDrtmEventPcr", issuer=="AttestationPolicy"] => add(type="vsmEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "\[\*\].EVENT_VBS_VSM_REQUIRED")));`<br>`c:[type=="srtmDrtmEventPcr", issuer=="AttestationPolicy"] => add(type="vsmEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_VBS_MANDATORY_ENFORCEMENT")));`<br>`c:[type=="vsmEnabledSet", issuer=="AttestationPolicy"] => issue(type="vsmEnabled", value=ContainsOnlyValue(c.value, true));`<br>`![type=="vsmEnabled", issuer=="AttestationPolicy"] => issue(type="vsmEnabled", value=false);`<br>`c:[type=="vsmEnabled", issuer=="AttestationPolicy"] => issue(type="vbsEnabled", value=c.value);` | +| Hypervisor-protected code integrity (HVCI) | HVCI is a feature that validates the integrity of a system file each time it's loaded into memory.| `// HVCI`<br>`c:[type=="events", issuer=="AttestationService"] => add(type="srtmDrtmEventPcr", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` \|\| PcrIndex == `19`)].ProcessedData.EVENT_TRUSTBOUNDARY"));`<br>`c:[type=="srtmDrtmEventPcr", issuer=="AttestationPolicy"] => add(type="hvciEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_VBS_HVCI_POLICY \| @[?String == 'HypervisorEnforcedCodeIntegrityEnable'].Value")));`<br>`c:[type=="hvciEnabledSet", issuer=="AttestationPolicy"] => issue(type="hvciEnabled", value=ContainsOnlyValue(c.value, 1));`<br>`![type=="hvciEnabled", issuer=="AttestationPolicy"] => issue(type="hvciEnabled", value=false);` | +| Input-output memory management unit (IOMMU) | IOMMU translates virtual to physical memory addresses for Direct memory access-capable device peripherals. IOMMU protects sensitive memory regions. | `// IOMMU`<br>`c:[type=="events", issuer=="AttestationService"] => add(type="boolProperties", value=JmesPath(c.value, "Events[? 
EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` \|\| PcrIndex == `13` \|\| PcrIndex == `19` \|\| PcrIndex == `20`)].ProcessedData.EVENT_TRUSTBOUNDARY"));`<br>`c:[type=="boolProperties", issuer=="AttestationPolicy"] => add(type="iommuEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_VBS_IOMMU_REQUIRED")));`<br>`c:[type=="iommuEnabledSet", issuer=="AttestationPolicy"] => issue(type="iommuEnabled", value=ContainsOnlyValue(c.value, true));`<br>`![type=="iommuEnabled", issuer=="AttestationPolicy"] => issue(type="iommuEnabled", value=false);` | +| PCR value evaluation | PCRs contain measurements of components that are made during the boot. These measurements can be used to verify the components against golden or known measurements. | `//PCRS are only read-only and thus cannot be used with issue operation, but they can be used to validate expected/golden measurements.`<br>`c:[type == "pcrs", issuer=="AttestationService"] && c1:[type=="pcrMatchesExpectedValue", value==JsonToClaimValue(JmesPath(c.value, "PCRs[? Index == `0`].Digests.SHA1 \| @[0] == `\"KCk6Ow\"`"))] => issue(claim = c1);` | +| Boot Manager version | The security version number of the Boot Manager that was loaded during initial boot on the attested device. | `// Find the first EVENT_APPLICATION_SVN. That value is the Boot Manager SVN`<br>`// Find the first EV_SEPARATOR in PCR 12, 13, Or 14`<br>`c:[type=="events", issuer=="AttestationService"] => add(type="evSeparatorSeq", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_SEPARATOR' && (PcrIndex == `12` \|\| PcrIndex == `13` \|\| PcrIndex == `14`)] \| @[0].EventSeq"));`<br>`c:[type=="evSeparatorSeq", value != "null", issuer=="AttestationPolicy"] => add(type="beforeEvSepClause", value=AppendString(AppendString("Events[? EventSeq < `", c.value), "`"));`<br>`[type=="evSeparatorSeq", value=="null", issuer=="AttestationPolicy"] => add(type="beforeEvSepClause", value="Events[? `true` ");`<br>`// Find the first EVENT_APPLICATION_SVN. That value is the Boot Manager SVNc:[type=="beforeEvSepClause", issuer=="AttestationPolicy"] => add(type="bootMgrSvnSeqQuery", value=AppendString(c.value, " && EventTypeString == 'EV_EVENT_TAG' && PcrIndex == `12` && ProcessedData.EVENT_TRUSTBOUNDARY.EVENT_APPLICATION_SVN] \| @[0].EventSeq"));`<br>`c1:[type=="bootMgrSvnSeqQuery", issuer=="AttestationPolicy"] && c2:[type=="events", issuer=="AttestationService"] => add(type="bootMgrSvnSeq", value=JmesPath(c2.value, c1.value));`<br>`c:[type=="bootMgrSvnSeq", value!="null", issuer=="AttestationPolicy"] => add(type="bootMgrSvnQuery", value=AppendString(AppendString("Events[? EventSeq == `", c.value), "`].ProcessedData.EVENT_TRUSTBOUNDARY.EVENT_APPLICATION_SVN \| @[0]"));`<br>`c1:[type=="bootMgrSvnQuery", issuer=="AttestationPolicy"] && c2:[type=="events", issuer=="AttestationService"] => issue(type="bootMgrSvn", value=JsonToClaimValue(JmesPath(c2.value, c1.value)));` | +| Safe mode | Safe mode is a troubleshooting option for Windows that starts your computer in a limited state. Only the basic files and drivers necessary to run Windows are started. | `// Safe mode`<br>`c:[type=="events", issuer=="AttestationService"] => add(type="boolProperties", value=JmesPath(c.value, "Events[? 
EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` \|\| PcrIndex == `13` \|\| PcrIndex == `19` \|\| PcrIndex == `20`)].ProcessedData.EVENT_TRUSTBOUNDARY"));`<br>`c:[type=="boolProperties", issuer=="AttestationPolicy"] => add(type="safeModeEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_SAFEMODE")));`<br>`c:[type=="safeModeEnabledSet", issuer=="AttestationPolicy"] => issue(type="notSafeMode", value=ContainsOnlyValue(c.value, false));`<br>`![type=="notSafeMode", issuer=="AttestationPolicy"] => issue(type="notSafeMode", value=true);` | +| WinPE boot | Windows pre-installation Environment (Windows PE) is a minimal operating system with limited services that's used to prepare a computer for Windows installation, to copy disk images from a network file server, and to initiate Windows setup. | `// Win PE`<br>`c:[type=="events", issuer=="AttestationService"] => add(type="boolProperties", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` \|\| PcrIndex == `13` \|\| PcrIndex == `19` \|\| PcrIndex == `20`)].ProcessedData.EVENT_TRUSTBOUNDARY"));`<br>`c:[type=="boolProperties", issuer=="AttestationPolicy"] => add(type="winPEEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "\[\*\].EVENT_WINPE")));`<br>`c:[type=="winPEEnabledSet", issuer=="AttestationPolicy"] => issue(type="notWinPE", value=ContainsOnlyValue(c.value, false));`<br>`![type=="notWinPE", issuer=="AttestationPolicy"] => issue(type="notWinPE", value=true);` | +| Code integrity (CI) policy | Hash of CI policy that's controlling the security of the boot environment. | `// CI Policy`<br>`c :[type=="events", issuer=="AttestationService"] => issue(type="codeIntegrityPolicy", value=JsonToClaimValue(JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && PcrIndex == `13`].ProcessedData.EVENT_TRUSTBOUNDARY.EVENT_SI_POLICY[].RawData")));`| +| Secure Boot Configuration Policy Hash (SBCPHash) | SBCPHash is the fingerprint of the Custom SBCP that was loaded during boot in Windows devices, except PCs. | `// Secure Boot Custom Policy`<br>`c:[type=="events", issuer=="AttestationService"] => issue(type="secureBootCustomPolicy", value=JsonToClaimValue(JmesPath(c.value, "Events[? EventTypeString == 'EV_EFI_VARIABLE_DRIVER_CONFIG' && PcrIndex == `7` && ProcessedData.UnicodeName == 'CurrentPolicy' && ProcessedData.VariableGuid == '77FA9ABD-0359-4D32-BD60-28F4E78F784B'].ProcessedData.VariableData \| @[0]")));` | +| System Guard (DRTM Validation and SMM Levels) | Ensure System Guard has been validated during boot and corresponding System Management Mode Level | ` // Extract the DRTM state auth event. `<br>`// The rule attempts to find the valid DRTM state auth event by applying following conditions:`<br>`// 1. There is only one DRTM state auth event in the events log`<br>`// 2. The EVENT_DRTM_STATE_AUTH.SignatureValid field in the DRTM state auth event is set to true`<br><br>` c:[type=="events", issuer=="AttestationService"] => add(type="validDrtmStateAuthEvent", value=JmesPath(c.value, "Events[? 
EventTypeString == 'EV_EVENT_TAG' && PcrIndex == `20` && ProcessedData.EVENT_TRUSTBOUNDARY.EVENT_DRTM_STATE_AUTH.SignatureValid != `null`] \| length(@) == `1` && @[0] \| @.{EventSeq:EventSeq, SignatureValid:ProcessedData.EVENT_TRUSTBOUNDARY.EVENT_DRTM_STATE_AUTH.SignatureValid}"));`<br><br>` // Check if Signature is valid in extracted state auth events`<br>`c:[type=="validDrtmStateAuthEvent", issuer=="AttestationPolicy"] => issue(type="drtmMleValid", value=JsonToClaimValue(JmesPath(c.value, "SignatureValid")));`<br>`![type=="drtmMleValid", issuer=="AttestationPolicy"] => issue(type="drtmMleValid", value=false);`<br><br>`// Get the sequence number of the DRTM state auth event.`<br>`// The sequence number is used to ensure that the SMM event appears before the last DRTM state auth event.`<br>`[type=="drtmMleValid", value==true, issuer=="AttestationPolicy"] && c:[type=="validDrtmStateAuthEvent", issuer=="AttestationPolicy"] => add(type="validDrtmStateAuthEventSeq", value=JmesPath(c.value, "EventSeq"));`<br><br>` // Create query for SMM event`<br>`// The query is constructed to find the SMM level from the SMM level event that appears exactly once before the valid DRTM state auth event in the event log`<br>`[type=="drtmMleValid", value==true, issuer=="AttestationPolicy"] && c:[type=="validDrtmStateAuthEventSeq", issuer=="AttestationPolicy"] => add(type="smmQuery", value=AppendString(AppendString("Events[? EventTypeString == 'EV_EVENT_TAG' && PcrIndex == `20` && EventSeq <`", c.value), "`].ProcessedData.EVENT_DRTM_SMM \| length(@) == `1` && @[0] \| @.Value"));`<br><br>`// Extract SMM value`<br>`[type=="drtmMleValid", value==true, issuer=="AttestationPolicy"] &&`<br>` c1:[type=="smmQuery", issuer=="AttestationPolicy"] &&`<br>` c2:[type=="events", issuer=="AttestationService"] => issue(type="smmLevel", value=JsonToClaimValue(JmesPath(c2.value, c1.value)));`<br>` ` | +| Boot application subversion | The version of the Boot Manager that's running on the device. | `// Find the first EV_SEPARATOR in PCR 12, 13, Or 14, the ordering of the events is critical to ensure correctness.`<br>`c:[type=="events", issuer=="AttestationService"] => add(type="evSeparatorSeq", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_SEPARATOR' && (PcrIndex == `12` \|\| PcrIndex == `13` \|\| PcrIndex == `14`)] \| @[0].EventSeq"));`<br>`c:[type=="evSeparatorSeq", value != "null", issuer=="AttestationPolicy"] => add(type="beforeEvSepClause", value=AppendString(AppendString("Events[? EventSeq < `", c.value), "`"));`<br>`[type=="evSeparatorSeq", value=="null", issuer=="AttestationPolicy"] => add(type="beforeEvSepClause", value="Events[? 
`true` ");`<br>` // No restriction of EV_SEPARATOR in case it is not present// Find the first EVENT_TRANSFER_CONTROL with value 1 or 2 in PCR 12 that is before the EV_SEPARATORc1:[type=="beforeEvSepClause", issuer=="AttestationPolicy"] && c2:[type=="bootMgrSvnSeq", value != "null", issuer=="AttestationPolicy"] => add(type="beforeEvSepAfterBootMgrSvnClause", value=AppendString(AppendString(AppendString(c1.value, "&& EventSeq >= `"), c2.value), "`"));`<br>`c:[type=="beforeEvSepAfterBootMgrSvnClause", issuer=="AttestationPolicy"] => add(type="tranferControlQuery", value=AppendString(c.value, " && EventTypeString == 'EV_EVENT_TAG' && PcrIndex == `12`&& (ProcessedData.EVENT_TRUSTBOUNDARY.EVENT_TRANSFER_CONTROL.Value == `1` \|\| ProcessedData.EVENT_TRUSTBOUNDARY.EVENT_TRANSFER_CONTROL.Value == `2`)] \| @[0].EventSeq"));`<br>`c1:[type=="tranferControlQuery", issuer=="AttestationPolicy"] && c2:[type=="events", issuer=="AttestationService"] => add(type="tranferControlSeq", value=JmesPath(c2.value, c1.value));`<br>`// Find the first non-null EVENT_MODULE_SVN in PCR 13 after the transfer control.c:[type=="tranferControlSeq", value!="null", issuer=="AttestationPolicy"] => add(type="afterTransferCtrlClause", value=AppendString(AppendString(" && EventSeq > `", c.value), "`"));`<br>`c1:[type=="beforeEvSepClause", issuer=="AttestationPolicy"] && c2:[type=="afterTransferCtrlClause", issuer=="AttestationPolicy"] => add(type="moduleQuery", value=AppendString(AppendString(c1.value, c2.value), " && EventTypeString == 'EV_EVENT_TAG' && PcrIndex == `13` && ((ProcessedData.EVENT_TRUSTBOUNDARY.EVENT_LOADEDMODULE_AGGREGATION[].EVENT_MODULE_SVN \| @[0]) \|\| (ProcessedData.EVENT_LOADEDMODULE_AGGREGATION[].EVENT_MODULE_SVN \| @[0]))].EventSeq \| @[0]"));`<br>`c1:[type=="moduleQuery", issuer=="AttestationPolicy"] && c2:[type=="events", issuer=="AttestationService"] => add(type="moduleSeq", value=JmesPath(c2.value, c1.value));`<br>`// Find the first EVENT_APPLICATION_SVN after EV_EVENT_TAG in PCR 12. That value is Boot App SVNc:[type=="moduleSeq", value!="null", issuer=="AttestationPolicy"] => add(type="applicationSvnAfterModuleClause", value=AppendString(AppendString(" && EventSeq > `", c.value), "`"));`<br>`c1:[type=="beforeEvSepClause", issuer=="AttestationPolicy"] && c2:[type=="applicationSvnAfterModuleClause", issuer=="AttestationPolicy"] => add(type="bootAppSvnQuery", value=AppendString(AppendString(c1.value, c2.value), " && EventTypeString == 'EV_EVENT_TAG' && PcrIndex == `12`].ProcessedData.EVENT_TRUSTBOUNDARY.EVENT_APPLICATION_SVN \| @[0]"));`<br>`c1:[type=="bootAppSvnQuery", issuer=="AttestationPolicy"] && c2:[type=="events", issuer=="AttestationService"] => issue(type="bootAppSvn", value=JsonToClaimValue(JmesPath(c2.value, c1.value)));` | +| Boot revision list | Boot revision list used to direct the device to an enterprise honeypot to further monitor the device's activities. | `// Boot Rev List Info `<br>`c:[type=="events", issuer=="AttestationService"] => issue(type="bootRevListInfo", value=JsonToClaimValue(JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && PcrIndex == `13`].ProcessedData.EVENT_TRUSTBOUNDARY.EVENT_BOOT_REVOCATION_LIST.RawData \| @[0]")));` | ## Sample policies for TPM attestation using version 1.2 |
attestation | Tpm Attestation Concepts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/tpm-attestation-concepts.md | authorizationrules { issuancerules {-[type=="aikValidated", value==true]&& +[type=="aikValidated", value==true] && [type=="secureBootEnabled", value==true] => issue(type="PlatformAttested", value=true);-} +}; ``` The following example takes advantage of policy version 1.2 to verify details ab ``` version=1.2; -authorizationrules { +authorizationrules { => permit(); }; - issuancerules { // Verify if secure boot is enabled c:[type == "events", issuer=="AttestationService"] => add(type = "efiConfigVariables", value = JmesPath(c.value, "Events[?EventTypeString == 'EV_EFI_VARIABLE_DRIVER_CONFIG' && ProcessedData.VariableGuid == '8BE4DF61-93CA-11D2-AA0D-00E098032B8C']"));-c:[type=="efiConfigVariables", issuer="AttestationPolicy"]=> add(type = "secureBootEnabled", value = JsonToClaimValue(JmesPath(c.value, "[?ProcessedData.UnicodeName == 'SecureBoot'] | length(@) == `1` && @[0].ProcessedData.VariableData == 'AQ'"))); +c:[type=="efiConfigVariables", issuer=="AttestationPolicy"]=> add(type = "secureBootEnabled", value = JsonToClaimValue(JmesPath(c.value, "[?ProcessedData.UnicodeName == 'SecureBoot'] | length(@) == `1` && @[0].ProcessedData.VariableData == 'AQ'"))); ![type=="secureBootEnabled", issuer=="AttestationPolicy"] => add(type="secureBootEnabled", value=false); // HVCI-c:[type=="events", issuer=="AttestationService"] => add(type="srtmDrtmEventPcr", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == 12 || PcrIndex == 19)].ProcessedData.EVENT_TRUSTBOUNDARY")); +c:[type=="events", issuer=="AttestationService"] => add(type="srtmDrtmEventPcr", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` || PcrIndex == `19`)].ProcessedData.EVENT_TRUSTBOUNDARY")); c:[type=="srtmDrtmEventPcr", issuer=="AttestationPolicy"] => add(type="hvciEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_VBS_HVCI_POLICY | @[?String == 'HypervisorEnforcedCodeIntegrityEnable'].Value"))); c:[type=="hvciEnabledSet", issuer=="AttestationPolicy"] => issue(type="hvciEnabled", value=ContainsOnlyValue(c.value, 1)); ![type=="hvciEnabled", issuer=="AttestationPolicy"] => issue(type="hvciEnabled", value=false); -// System Guard Secure Launch - // Validating unwanted(malicious.sys) driver is not loaded-c:[type=="events", issuer=="AttestationService"] => add(type="boolProperties", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == 12 || PcrIndex == 13 || PcrIndex == 19 || PcrIndex == 20)].ProcessedData.EVENT_TRUSTBOUNDARY")); -c:[type=="boolProperties", issuer=="AttestationPolicy"] => issue(type="MaliciousDriverLoaded", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_LOADEDMODULE_AGGREGATION[] | [? EVENT_IMAGEVALIDATED == true && (equals_ignore_case(EVENT_FILEPATH, '\windows\system32\drivers\malicious.sys') || equals_ignore_case(EVENT_FILEPATH, '\windows\system32\drivers\wd\malicious.sys'))] | @ != null"))); +c:[type=="events", issuer=="AttestationService"] => add(type="boolProperties", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` || PcrIndex == `13` || PcrIndex == `19` || PcrIndex == `20`)].ProcessedData.EVENT_TRUSTBOUNDARY")); +c:[type=="boolProperties", issuer=="AttestationPolicy"] => issue(type="MaliciousDriverLoaded", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_LOADEDMODULE_AGGREGATION[] | [? 
EVENT_IMAGEVALIDATED == `true` && (equals_ignore_case(EVENT_FILEPATH, '\\windows\\system32\\drivers\\malicious.sys') || equals_ignore_case(EVENT_FILEPATH, '\\windows\\system32\\drivers\\wd\\malicious.sys'))] | @ != `null` "))); ![type=="MaliciousDriverLoaded", issuer=="AttestationPolicy"] => issue(type="MaliciousDriverLoaded", value=false); }; ```
+
+## Extending the protection from malicious boot attacks via Integrity Measurement Architecture (IMA) on Linux
+
+Linux systems follow a boot process similar to Windows, and with TPM attestation the protection profile can be extended beyond boot and into the kernel by using the Integrity Measurement Architecture (IMA). The IMA subsystem was designed to detect whether files have been accidentally or maliciously altered, both remotely and locally. It maintains a runtime measurement list and, if anchored in a hardware Trusted Platform Module (TPM), an aggregate integrity value over this list, which provides resiliency against software attacks. Recent enhancements in the IMA subsystem also allow non-file attributes to be measured and attested remotely. Azure Attestation supports remote attestation of these non-file measurements to provide a holistic view of system integrity.
+
+Enabling IMA with the following IMA policy enables measurement of non-file attributes while still enabling local file integrity attestation.
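As an illustrative sketch only (not necessarily the exact policy referenced above; the `KEY_CHECK` and `KEXEC_CMDLINE` hooks require Linux 5.6 and 5.3 or later, respectively), such an IMA policy could measure the kernel command line, kernel module loads, executed binaries, and keys added to the built-in trusted keyring, while excluding the virtual procfs file system (fsmagic 0x9fa0) from measurement:

```
dont_measure fsmagic=0x9fa0
measure func=KEXEC_CMDLINE
measure func=KEY_CHECK keyrings=.builtin_trusted_keys
measure func=MODULE_CHECK
measure func=BPRM_CHECK mask=MAY_EXEC
```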
+Using the following attestation policy, you can validate secure boot, the kernel signature, the kernel version, the kernel command line passed in by GRUB, and other key security attributes supported by IMA.
+
+```
+version = 1.2;
+
+configurationrules
+{
+};
+
+authorizationrules
+{
+ [type == "aikValidated", value==true]
+ => permit();
+};
+
+issuancerules {
+ // Retrieve all EFI Boot variables with event = 'EV_EFI_VARIABLE_BOOT'
+ c:[type == "events", issuer=="AttestationService"] => add(type ="efiBootVariables", value = JmesPath(c.value, "Events[?EventTypeString == 'EV_EFI_VARIABLE_BOOT']"));
+
+ // Retrieve all EFI Driver Config variables with event = 'EV_EFI_VARIABLE_DRIVER_CONFIG'
+ c:[type == "events", issuer=="AttestationService"] => add(type ="efiConfigVariables", value = JmesPath(c.value, "Events[?EventTypeString == 'EV_EFI_VARIABLE_DRIVER_CONFIG']"));
+
+ // Grab all IMA events
+ c:[type=="events", issuer=="AttestationService"] => add(type="imaMeasurementEvents", value=JmesPath(c.value, "Events[?EventTypeString == 'IMA_MEASUREMENT_EVENT']"));
+
+ // Look for "Boot Order" from EFI Boot Data
+ c:[type == "efiBootVariables", issuer=="AttestationPolicy"] => add(type = "bootOrderFound", value = JmesPath(c.value, "[?ProcessedData.UnicodeName == 'BootOrder'] | length(@) == `1` && @[0].PcrIndex == `1` && @[0].ProcessedData.VariableData"));
+ c:[type=="bootOrderFound", issuer=="AttestationPolicy"] => issue(type="bootOrder", value=JsonToClaimValue(c.value));
+ ![type=="bootOrderFound", issuer=="AttestationPolicy"] => issue(type="bootOrder", value=0);
+
+ // Look for "Secure Boot" from EFI Driver Configuration Data
+ c:[type == "efiConfigVariables", issuer=="AttestationPolicy"] => issue(type = "secureBootEnabled", value = JsonToClaimValue(JmesPath(c.value, "[?ProcessedData.UnicodeName == 'SecureBoot'] | length(@) == `1` && @[0].PcrIndex == `7` && @[0].ProcessedData.VariableData == 'AQ'")));
+ ![type=="secureBootEnabled", issuer=="AttestationPolicy"] => issue(type="secureBootEnabled", value=false);
+
+ // Look for "Platform Key" from EFI Boot Data
+ c:[type == "efiConfigVariables", issuer=="AttestationPolicy"] => add(type = "platformKeyFound", value = JmesPath(c.value, "[?ProcessedData.UnicodeName == 'PK'] | length(@) == `1` && @[0].PcrIndex == `7` && @[0].ProcessedData.VariableData"));
+ c:[type=="platformKeyFound", issuer=="AttestationPolicy"] => issue(type="platformKey", value=JsonToClaimValue(c.value));
+ ![type=="platformKeyFound", issuer=="AttestationPolicy"] => issue(type="platformKey", value=0);
+
+ // Look for "Key Exchange Key" from EFI Driver Configuration Data
+ c:[type == "efiConfigVariables", issuer=="AttestationPolicy"] => add(type = "keyExchangeKeyFound", value = JmesPath(c.value, "[?ProcessedData.UnicodeName == 'KEK'] | length(@) == `1` && @[0].PcrIndex == `7` && @[0].ProcessedData.VariableData"));
+ c:[type=="keyExchangeKeyFound", issuer=="AttestationPolicy"] => issue(type="keyExchangeKey", value=JsonToClaimValue(c.value));
+ ![type=="keyExchangeKeyFound", issuer=="AttestationPolicy"] => issue(type="keyExchangeKey", value=0);
+
+ // Look for "Key Database" from EFI Driver Configuration Data
+ c:[type == "efiConfigVariables", issuer=="AttestationPolicy"] => add(type = "keyDatabaseFound", value = JmesPath(c.value, "[?ProcessedData.UnicodeName == 'db'] | length(@) == `1` && @[0].PcrIndex == `7` && @[0].ProcessedData.VariableData"));
+ c:[type=="keyDatabaseFound", issuer=="AttestationPolicy"] => issue(type="keyDatabase", value=JsonToClaimValue(c.value));
+ ![type=="keyDatabaseFound", issuer=="AttestationPolicy"] => issue(type="keyDatabase", value=0);
+
+ // Look for "Forbidden Signatures" from EFI Driver Configuration Data
+ c:[type == "efiConfigVariables", issuer=="AttestationPolicy"] => add(type = "forbiddenSignaturesFound", value = JmesPath(c.value, "[?ProcessedData.UnicodeName == 'dbx'] | length(@) == `1` && @[0].PcrIndex == `7` && @[0].ProcessedData.VariableData"));
+ c:[type=="forbiddenSignaturesFound", issuer=="AttestationPolicy"] => issue(type="forbiddenSignatures", value=JsonToClaimValue(c.value));
+ ![type=="forbiddenSignaturesFound", issuer=="AttestationPolicy"] => issue(type="forbiddenSignatures", value=0);
+
+ // Look for "Kernel Version" in IMA Measurement events
+ c:[type=="imaMeasurementEvents", issuer=="AttestationPolicy"] => add(type="kernelVersionsFound", value=JmesPath(c.value, "[].ProcessedData.KernelVersion"));
+ c:[type=="kernelVersionsFound", issuer=="AttestationPolicy"] => issue(type="kernelVersions", value=JsonToClaimValue(c.value));
+ ![type=="kernelVersionsFound", issuer=="AttestationPolicy"] => issue(type="kernelVersions", value=0);
+
+ // Look for "Built-In Trusted Keys" in IMA Measurement events
+ c:[type=="imaMeasurementEvents", issuer=="AttestationPolicy"] => add(type="builtintrustedkeysFound", value=JmesPath(c.value, "[? ProcessedData.Keyring == '.builtin_trusted_keys'].ProcessedData.CertificateSubject"));
+ c:[type=="builtintrustedkeysFound", issuer=="AttestationPolicy"] => issue(type="builtintrustedkeys", value=JsonToClaimValue(c.value));
+ ![type=="builtintrustedkeysFound", issuer=="AttestationPolicy"] => issue(type="builtintrustedkeys", value=0);
+};
+
+```
+
+Note: Support for non-file based measurements is only available from Linux kernel version 5.15.
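To inspect locally what IMA is measuring (the entries that reach Azure Attestation as `IMA_MEASUREMENT_EVENT` events), you can read the runtime measurement list from securityfs. A minimal C sketch, assuming securityfs is mounted at `/sys/kernel/security` and the process runs with root privileges:

```c
#include <stdio.h>

int main(void)
{
    /* Each line of the runtime measurement list is one IMA measurement:
       PCR index, template hash, template name, and the measured data. */
    FILE *f = fopen("/sys/kernel/security/ima/ascii_runtime_measurements", "r");
    if (f == NULL) {
        perror("fopen");
        return 1;
    }

    char line[4096];
    while (fgets(line, sizeof line, f) != NULL) {
        fputs(line, stdout);
    }

    fclose(f);
    return 0;
}
```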
+## TPM Key attestation support
+
+Numerous applications rely on foundational credential management of keys and certificates for protection against credential theft, and one of the main ways of ensuring credential security is to rely on key storage providers that provide additional protection from malware and attacks. Windows implements various cryptographic providers, which can be either software based or hardware based.
+
+The two most important ones are:
+
+* Microsoft Software Key Storage Provider: The standard provider, which stores keys in software and supports CNG (Cryptography API: Next Generation)
+
+* Microsoft Platform Crypto Provider: A hardware-based provider, which stores keys on a TPM (Trusted Platform Module) and supports CNG as well (see the sketch after this list)
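For illustration only (this sketch is not from the article itself; the key name `MyTpmKey` is a placeholder), creating a TPM-backed, non-exportable RSA key pair in the Microsoft Platform Crypto Provider through CNG looks roughly like this:

```c
#include <windows.h>
#include <ncrypt.h>

#pragma comment(lib, "ncrypt.lib")

int main(void)
{
    NCRYPT_PROV_HANDLE hProv = 0;
    NCRYPT_KEY_HANDLE hKey = 0;

    /* Open the TPM-backed platform provider instead of the
       software key storage provider. */
    SECURITY_STATUS st = NCryptOpenStorageProvider(&hProv, MS_PLATFORM_CRYPTO_PROVIDER, 0);
    if (st != ERROR_SUCCESS) {
        return 1;
    }

    /* Create and finalize a persisted RSA key pair. Keys created in the
       platform provider are generated inside the TPM and, because no
       export flag is set here, are not exportable. */
    st = NCryptCreatePersistedKey(hProv, &hKey, NCRYPT_RSA_ALGORITHM, L"MyTpmKey", 0, 0);
    if (st == ERROR_SUCCESS) {
        st = NCryptFinalizeKey(hKey, 0);
    }

    if (hKey) {
        NCryptFreeObject(hKey);
    }
    NCryptFreeObject(hProv);
    return (st == ERROR_SUCCESS) ? 0 : 1;
}
```

A key created this way is generated and held inside the TPM, which is what the key attestation policy below verifies through the `tpm_certify` properties.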
issue(type="requestKeyCertifyAuthPolicy", value="AQIDBA"); ++ c:[type=="x-ms-tpm-other-keys", issuer=="AttestationService"] => add(type="otherKeysCertify", value=JmesPath(c.value, "[*].info.tpm_certify")); + c:[type=="otherKeysCertify", issuer=="AttestationPolicy"] => add(type="otherKeysCertifyNameAlgs", value=JsonToClaimValue(JmesPath(c.value, "[*].name_alg"))); + c:[type=="otherKeysCertifyNameAlgs", issuer=="AttestationPolicy", value==11] => issue(type="otherKeysCertifyNameAlgs", value=11); ++ c:[type=="otherKeysCertify", issuer=="AttestationPolicy"] => add(type="otherKeysCertifyObjAttr", value=JsonToClaimValue(JmesPath(c.value, "[*].obj_attr"))); + c:[type=="otherKeysCertifyObjAttr", issuer=="AttestationPolicy", value==50] => issue(type="otherKeysCertifyObjAttr", value=50); ++ c:[type=="otherKeysCertify", issuer=="AttestationPolicy"] => add(type="otherKeysCertifyAuthPolicy", value=JsonToClaimValue(JmesPath(c.value, "[*].auth_policy"))); + c:[type=="otherKeysCertifyAuthPolicy", issuer=="AttestationPolicy", value=="AQIDBA"] => issue(type="otherKeysCertifyAuthPolicy", value="AQIDBA"); +}; ++``` + ## Next steps +- [Try out TPM attestation](azure-tpm-vbs-attestation-usage.md) - [Device Health Attestation on Windows and interacting with Azure Attestation](/windows/client-management/mdm/healthattestation-csp#windows-11-device-health-attestation) - [Learn more about claim rule grammar](claim-rule-grammar.md) - [Attestation policy claim rule functions](claim-rule-functions.md) |
attestation | Tpm Attestation Sample Policies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/tpm-attestation-sample-policies.md | + + Title: Examples of an Azure TPM Attestation policy +description: Examples of Azure Attestation policy for TPM endpoint. ++++ Last updated : 10/12/2022+++++# Examples of an attestation policy for TPM endpoint
+
+Attestation policy is used to process the attestation evidence and determine whether Azure Attestation will issue an attestation token. Attestation token generation can be controlled with custom policies. Below are some examples of attestation policies.
+
+## Sample policy for TPM using Policy version 1.0
+
+```
+version=1.0;
+
+authorizationrules {
+ => permit();
+};
+
+issuancerules
+{
+[type=="aikValidated", value==true] &&
+[type=="secureBootEnabled", value==true] &&
+[type=="bootDebuggingDisabled", value==true] &&
+[type=="vbsEnabled", value==true] &&
+[type=="notWinPE", value==true] &&
+[type=="notSafeMode", value==true] => issue(type="PlatformAttested", value=true);
+};
+```
+
+This is a simple TPM attestation policy that can be used to verify minimal aspects of the boot.
+
+## Sample policy for TPM using Policy version 1.2
+
+The policy uses policy version 1.2 to restrict attestation calls. The issuancerules section looks at various properties measured during boot.
+
+```
+version=1.2;
+
+configurationrules{
+};
+
+authorizationrules {
+ => permit();
+};
+
+issuancerules{
+
+c:[type == "aikValidated", issuer=="AttestationService"] => issue(type="aikValidated", value=c.value);
+
+// SecureBoot enabled
+c:[type == "events", issuer=="AttestationService"] => add(type = "efiConfigVariables", value = JmesPath(c.value, "Events[?EventTypeString == 'EV_EFI_VARIABLE_DRIVER_CONFIG' && ProcessedData.VariableGuid == '8BE4DF61-93CA-11D2-AA0D-00E098032B8C']"));
+c:[type == "efiConfigVariables", issuer=="AttestationPolicy"] => issue(type = "SecureBootEnabled", value = JsonToClaimValue(JmesPath(c.value, "[?ProcessedData.UnicodeName == 'SecureBoot'] | length(@) == `1` && @[0].ProcessedData.VariableData == 'AQ'")));
+![type=="SecureBootEnabled", issuer=="AttestationPolicy"] => issue(type="SecureBootEnabled", value=false);
+
+// Retrieve bool properties Code integrity
+c:[type=="events", issuer=="AttestationService"] => add(type="boolProperties", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` || PcrIndex == `13` || PcrIndex == `19` || PcrIndex == `20`)].ProcessedData.EVENT_TRUSTBOUNDARY"));
+c:[type=="boolProperties", issuer=="AttestationPolicy"] => add(type="codeIntegrityEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_CODEINTEGRITY")));
+c:[type=="codeIntegrityEnabledSet", issuer=="AttestationPolicy"] => issue(type="CodeIntegrityEnabled", value=ContainsOnlyValue(c.value, true));
+![type=="CodeIntegrityEnabled", issuer=="AttestationPolicy"] => issue(type="CodeIntegrityEnabled", value=false);
+
+// Bitlocker Boot Status, The first non zero measurement or zero.
+c:[type=="events", issuer=="AttestationService"] => add(type="srtmDrtmEventPcr", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` || PcrIndex == `19`)].ProcessedData.EVENT_TRUSTBOUNDARY"));
+c:[type=="srtmDrtmEventPcr", issuer=="AttestationPolicy"] => add(type="BitlockerStatus", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_BITLOCKER_UNLOCK | @[?
Value != `0`].Value | @[0]"))); +[type=="BitlockerStatus", issuer=="AttestationPolicy"] => issue(type="BitlockerStatus", value=true); +![type=="BitlockerStatus", issuer=="AttestationPolicy"] => issue(type="BitlockerStatus", value=false); ++// Elam Driver (windows defender) Loaded +c:[type=="boolProperties", issuer=="AttestationPolicy"] => add(type="elamDriverLoaded", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_LOADEDMODULE_AGGREGATION[] | [? EVENT_IMAGEVALIDATED == `true` && (equals_ignore_case(EVENT_FILEPATH, '\\windows\\system32\\drivers\\wdboot.sys') || equals_ignore_case(EVENT_FILEPATH, '\\windows\\system32\\drivers\\wd\\wdboot.sys'))] | @ != `null`"))); +[type=="elamDriverLoaded", issuer=="AttestationPolicy"] => issue(type="ELAMDriverLoaded", value=true); +![type=="elamDriverLoaded", issuer=="AttestationPolicy"] => issue(type="ELAMDriverLoaded", value=false); ++}; ++``` +### Attestation policy to authorize only those TPMs that match known PCR hashes. ++``` +version=1.2; ++authorizationrules +{ + c:[type == "pcrs", issuer=="AttestationService"] => add(type="pcr0Validated", value=JsonToClaimValue(JmesPath(c.value, "PCRs[? Index == `0`].Digests.SHA256 | @[0] =='4c833b1c361fceffd8dc0f81eec76081b71e1a0eb2193caed0b6e1c7735ec33e' "))); + c:[type == "pcrs", issuer=="AttestationService"] => add(type="pcr1Validated", value=JsonToClaimValue(JmesPath(c.value, "PCRs[? Index == `1`].Digests.SHA256 | @[0] =='8c25e3be6ad6f5bd33c9ae40d5d5461e88c1a7366df0c9ee5c7e5ff40d3d1d0e' "))); + c:[type == "pcrs", issuer=="AttestationService"] => add(type="pcr2Validated", value=JsonToClaimValue(JmesPath(c.value, "PCRs[? Index == `2`].Digests.SHA256 | @[0] =='3d458cfe55cc03ea1f443f1562beec8df51c75e14a9fcf9a7234a13f198e7969' "))); + c:[type == "pcrs", issuer=="AttestationService"] => add(type="pcr3Validated", value=JsonToClaimValue(JmesPath(c.value, "PCRs[? Index == `3`].Digests.SHA256 | @[0] =='3d458cfe55cc03ea1f443f1562beec8df51c75e14a9fcf9a7234a13f198e7969' "))); + + [type=="pcr0Validated", value==true] && + [type=="pcr1Validated", value==true] && + [type=="pcr2Validated", value==true] && + [type=="pcr3Validated", value==true] => permit(); +}; ++issuancerules +{ +}; +``` ++### Attestation policy to validate System Guard is enabled as expected and has been validated for its state. ++``` +version = 1.2; ++authorizationrules +{ + => permit(); +}; ++issuancerules +{ + // Extract the DRTM state auth event + // The rule attempts to find the valid DRTM state auth event by applying following conditions: + // 1. There is only one DRTM state auth event in the events log + // 2. The EVENT_DRTM_STATE_AUTH.SignatureValid field in the DRTM state auth event is set to true + c:[type=="events", issuer=="AttestationService"] => add(type="validDrtmStateAuthEvent", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && PcrIndex == `20` && ProcessedData.EVENT_TRUSTBOUNDARY.EVENT_DRTM_STATE_AUTH.SignatureValid != `null`] | length(@) == `1` && @[0] | @.{EventSeq:EventSeq, SignatureValid:ProcessedData.EVENT_TRUSTBOUNDARY.EVENT_DRTM_STATE_AUTH.SignatureValid}")); ++ // Check if Signature is valid in extracted state auth events + c:[type=="validDrtmStateAuthEvent", issuer=="AttestationPolicy"] => issue(type="drtmMleValid", value=JsonToClaimValue(JmesPath(c.value, "SignatureValid"))); + ![type=="drtmMleValid", issuer=="AttestationPolicy"] => issue(type="drtmMleValid", value=false); ++ // Get the sequence number of the DRTM state auth event. 
+ // The sequence number is used to ensure that the SMM event appears before the last DRTM state auth event. + [type=="drtmMleValid", value==true, issuer=="AttestationPolicy"] && + c:[type=="validDrtmStateAuthEvent", issuer=="AttestationPolicy"] => add(type="validDrtmStateAuthEventSeq", value=JmesPath(c.value, "EventSeq")); ++ // Create query for SMM event + // The query is constructed to find the SMM level from the SMM level event that appears exactly once before the valid DRTM state auth event in the event log + [type=="drtmMleValid", value==true, issuer=="AttestationPolicy"] && + c:[type=="validDrtmStateAuthEventSeq", issuer=="AttestationPolicy"] => add(type="smmQuery", value=AppendString(AppendString("Events[? EventTypeString == 'EV_EVENT_TAG' && PcrIndex == `20` && EventSeq < `", c.value), "`].ProcessedData.EVENT_DRTM_SMM | length(@) == `1` && @[0] | @.Value")); ++ // Extract SMM value + [type=="drtmMleValid", value==true, issuer=="AttestationPolicy"] && + c1:[type=="smmQuery", issuer=="AttestationPolicy"] && + c2:[type=="events", issuer=="AttestationService"] => issue(type="smmLevel", value=JsonToClaimValue(JmesPath(c2.value, c1.value))); +}; +``` +++### Attestation policy to validate VBS enclave for its validity and identity. ++``` +version=1.2; ++authorizationrules { + [type=="vsmReportPresent", value==true] && + [type=="enclaveAuthorId", value=="AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA"] && + [type=="enclaveImageId", value=="AQEAAAAAAAAAAAAAAAAAAA"] && + [type=="enclaveSvn", value>=0] => permit(); +}; ++issuancerules +{ +}; +``` ++### Attestation policy to issue all incoming claims produced by the service. ++``` +version = 1.2; ++configurationrules +{ +}; ++authorizationrules +{ + => permit(); +}; ++issuancerules +{ + c:[type=="bootEvents", issuer=="AttestationService"] => issue(type="outputBootEvents", value=c.value); + c:[type=="events", issuer=="AttestationService"] => issue(type="outputEvents", value=c.value); +}; +``` ++### Attestation policy to produce some critical security evaluation claims for Windows. ++``` +version=1.2; ++authorizationrules { + => permit(); +}; ++issuancerules{ ++// SecureBoot enabled +c:[type == "events", issuer=="AttestationService"] => add(type = "efiConfigVariables", value = JmesPath(c.value, "Events[?EventTypeString == 'EV_EFI_VARIABLE_DRIVER_CONFIG' && ProcessedData.VariableGuid == '8BE4DF61-93CA-11D2-AA0D-00E098032B8C']")); c:[type == "efiConfigVariables", issuer=="AttestationPolicy"]=> issue(type = "secureBootEnabled", value = JsonToClaimValue(JmesPath(c.value, "[?ProcessedData.UnicodeName == 'SecureBoot'] | length(@) == `1` && @[0].ProcessedData.VariableData == 'AQ'"))); ![type=="secureBootEnabled", issuer=="AttestationPolicy"] => issue(type="secureBootEnabled", value=false); ++// Boot debugging +c:[type=="boolProperties", issuer=="AttestationPolicy"] => add(type="bootDebuggingEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_BOOTDEBUGGING"))); c:[type=="bootDebuggingEnabledSet", issuer=="AttestationPolicy"] => issue(type="bootDebuggingDisabled", value=ContainsOnlyValue(c.value, false)); ![type=="bootDebuggingDisabled", issuer=="AttestationPolicy"] => issue(type="bootDebuggingDisabled", value=false); ++// Virtualization Based Security enabled +c:[type=="events", issuer=="AttestationService"] => add(type="srtmDrtmEventPcr", value=JmesPath(c.value, "Events[? 
EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == `12` || PcrIndex == `19`)].ProcessedData.EVENT_TRUSTBOUNDARY")); c:[type=="srtmDrtmEventPcr", issuer=="AttestationPolicy"] => add(type="vbsEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_VBS_VSM_REQUIRED"))); c:[type=="srtmDrtmEventPcr", issuer=="AttestationPolicy"] => add(type="vbsEnabledSet", value=JsonToClaimValue(JmesPath(c.value, "[*].EVENT_VBS_MANDATORY_ENFORCEMENT"))); c:[type=="vbsEnabledSet", issuer=="AttestationPolicy"] => issue(type="vbsEnabled", value=ContainsOnlyValue(c.value, true)); ![type=="vbsEnabled", issuer=="AttestationPolicy"] => issue(type="vbsEnabled", value=false); c:[type=="vbsEnabled", issuer=="AttestationPolicy"] => issue(type="vbsEnabled", value=c.value); ++// System Guard and SMM value +c:[type=="events", issuer=="AttestationService"] => add(type="validDrtmStateAuthEvent", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' && PcrIndex == '20' && ProcessedData.EVENT_TRUSTBOUNDARY.EVENT_DRTM_STATE_AUTH.SignatureValid !=null] | length(@) == '1' && @[0] | @.{EventSeq:EventSeq, SignatureValid:ProcessedData.EVENT_TRUSTBOUNDARY.EVENT_DRTM_STATE_AUTH.SignatureValid}")); ++// Check if Signature is valid in extracted state auth events +c:[type=="validDrtmStateAuthEvent", issuer=="AttestationPolicy"] => issue(type="drtmMleValid", value=JsonToClaimValue(JmesPath(c.value, "SignatureValid"))); +![type=="drtmMleValid", issuer=="AttestationPolicy"] => issue(type="drtmMleValid", value=false); ++// Get the sequence number of the DRTM state auth event. +// The sequence number is used to ensure that the SMM event appears before the last DRTM state auth event. +[type=="drtmMleValid", value==true, issuer=="AttestationPolicy"] && c:[type=="validDrtmStateAuthEvent", issuer=="AttestationPolicy"] => add(type="validDrtmStateAuthEventSeq", value=JmesPath(c.value, "EventSeq")); ++// Create query for SMM event +// The query is constructed to find the SMM level from the SMM level event that appears exactly once before the valid DRTM state auth event in the event log +[type=="drtmMleValid", value==true, issuer=="AttestationPolicy"] && c:[type=="validDrtmStateAuthEventSeq", issuer=="AttestationPolicy"] => add(type="smmQuery", value=AppendString(AppendString("Events[? 
EventTypeString == 'EV_EVENT_TAG' && PcrIndex == `20` && EventSeq < `", c.value), "`].ProcessedData.EVENT_DRTM_SMM | length(@) == `1` && @[0] | @.Value")); ++// Extract SMM value +[type=="drtmMleValid", value==true, issuer=="AttestationPolicy"] && +c1:[type=="smmQuery", issuer=="AttestationPolicy"] && +c2:[type=="events", issuer=="AttestationService"] => issue(type="smmLevel", value=JsonToClaimValue(JmesPath(c2.value, c1.value))); ++}; +``` ++### Attestation policy to validate boot-related firmware and early boot driver signers on Linux ++``` +version = 1.2; ++configurationrules +{ +}; ++authorizationrules +{ + [type == "aikValidated", value==true] + => permit(); +}; ++issuancerules { + // Retrieve all EFI Boot variables with event = 'EV_EFI_VARIABLE_BOOT' + c:[type == "events", issuer=="AttestationService"] => add(type ="efiBootVariables", value = JmesPath(c.value, "Events[?EventTypeString == 'EV_EFI_VARIABLE_BOOT']")); ++ // Retrieve all EFI Driver Config variables with event = 'EV_EFI_VARIABLE_DRIVER_CONFIG' + c:[type == "events", issuer=="AttestationService"] => add(type ="efiConfigVariables", value = JmesPath(c.value, "Events[?EventTypeString == 'EV_EFI_VARIABLE_DRIVER_CONFIG']")); ++ // Grab all IMA events + c:[type=="events", issuer=="AttestationService"] => add(type="imaMeasurementEvents", value=JmesPath(c.value, "Events[?EventTypeString == 'IMA_MEASUREMENT_EVENT']")); ++ // Look for "Boot Order" from EFI Boot Data + c:[type == "efiBootVariables", issuer=="AttestationPolicy"] => add(type = "bootOrderFound", value = JmesPath(c.value, "[?ProcessedData.UnicodeName == 'BootOrder'] | length(@) == `1` && @[0].PcrIndex == `1` && @[0].ProcessedData.VariableData")); + c:[type=="bootOrderFound", issuer=="AttestationPolicy"] => issue(type="bootOrder", value=JsonToClaimValue(c.value)); + ![type=="bootOrderFound", issuer=="AttestationPolicy"] => issue(type="bootOrder", value=0); ++ // Look for "Secure Boot" from EFI Driver Configuration Data + c:[type == "efiConfigVariables", issuer=="AttestationPolicy"] => issue(type = "secureBootEnabled", value = JsonToClaimValue(JmesPath(c.value, "[?ProcessedData.UnicodeName == 'SecureBoot'] | length(@) == `1` && @[0].PcrIndex == `7` && @[0].ProcessedData.VariableData == 'AQ'"))); + ![type=="secureBootEnabled", issuer=="AttestationPolicy"] => issue(type="secureBootEnabled", value=false); ++ // Look for "Platform Key" from EFI Boot Data + c:[type == "efiConfigVariables", issuer=="AttestationPolicy"] => add(type = "platformKeyFound", value = JmesPath(c.value, "[?ProcessedData.UnicodeName == 'PK'] | length(@) == `1` && @[0].PcrIndex == `7` && @[0].ProcessedData.VariableData")); + c:[type=="platformKeyFound", issuer=="AttestationPolicy"] => issue(type="platformKey", value=JsonToClaimValue(c.value)); + ![type=="platformKeyFound", issuer=="AttestationPolicy"] => issue(type="platformKey", value=0); + + // Look for "Key Exchange Key" from EFI Driver Configuration Data + c:[type == "efiConfigVariables", issuer=="AttestationPolicy"] => add(type = "keyExchangeKeyFound", value = JmesPath(c.value, "[?ProcessedData.UnicodeName == 'KEK'] | length(@) == `1` && @[0].PcrIndex == `7` && @[0].ProcessedData.VariableData")); + c:[type=="keyExchangeKeyFound", issuer=="AttestationPolicy"] => issue(type="keyExchangeKey", value=JsonToClaimValue(c.value)); + ![type=="keyExchangeKeyFound", issuer=="AttestationPolicy"] => issue(type="keyExchangeKey", value=0); ++ // Look for "Key Database" from EFI Driver Configuration Data + c:[type == "efiConfigVariables", issuer=="AttestationPolicy"] => 
add(type = "keyDatabaseFound", value = JmesPath(c.value, "[?ProcessedData.UnicodeName == 'db'] | length(@) == `1` && @[0].PcrIndex == `7` && @[0].ProcessedData.VariableData")); + c:[type=="keyDatabaseFound", issuer=="AttestationPolicy"] => issue(type="keyDatabase", value=JsonToClaimValue(c.value)); + ![type=="keyDatabaseFound", issuer=="AttestationPolicy"] => issue(type="keyDatabase", value=0); ++ // Look for "Forbidden Signatures" from EFI Driver Configuration Data + c:[type == "efiConfigVariables", issuer=="AttestationPolicy"] => add(type = "forbiddenSignaturesFound", value = JmesPath(c.value, "[?ProcessedData.UnicodeName == 'dbx'] | length(@) == `1` && @[0].PcrIndex == `7` && @[0].ProcessedData.VariableData")); + c:[type=="forbiddenSignaturesFound", issuer=="AttestationPolicy"] => issue(type="forbiddenSignatures", value=JsonToClaimValue(c.value)); + ![type=="forbiddenSignaturesFound", issuer=="AttestationPolicy"] => issue(type="forbiddenSignatures", value=0); ++ // Look for "Kernel Version" in IMA Measurement events + c:[type=="imaMeasurementEvents", issuer=="AttestationPolicy"] => add(type="kernelVersionsFound", value=JmesPath(c.value, "[].ProcessedData.KernelVersion")); + c:[type=="kernelVersionsFound", issuer=="AttestationPolicy"] => issue(type="kernelVersions", value=JsonToClaimValue(c.value)); + ![type=="kernelVersionsFound", issuer=="AttestationPolicy"] => issue(type="kernelVersions", value=0); ++ // Look for "Built-In Trusted Keys" in IMA Measurement events + c:[type=="imaMeasurementEvents", issuer=="AttestationPolicy"] => add(type="builtintrustedkeysFound", value=JmesPath(c.value, "[? ProcessedData.Keyring == '.builtin_trusted_keys'].ProcessedData.CertificateSubject")); + c:[type=="builtintrustedkeysFound", issuer=="AttestationPolicy"] => issue(type="builtintrustedkeys", value=JsonToClaimValue(c.value)); + ![type=="builtintrustedkeysFound", issuer=="AttestationPolicy"] => issue(type="builtintrustedkeys", value=0); +}; ++``` +### Attestation policy to issue the list of drivers loaded during boot. ++``` +version = 1.2; ++configurationrules +{ +}; ++authorizationrules +{ + => permit(); +}; ++issuancerules { ++c:[type=="events", issuer=="AttestationService"] => issue(type="alldriverloads", value=JmesPath(c.value, "Events[? EventTypeString == 'EV_EVENT_TAG' ].ProcessedData.EVENT_TRUSTBOUNDARY.EVENT_LOADEDMODULE_AGGREGATION[].EVENT_FILEPATH")); ++c:[type=="events", issuer=="AttestationService"] => issue(type="DriverLoadPolicy", value=JmesPath(c.value, "events[? EventTypeString == 'EV_EVENT_TAG' && (PcrIndex == '13')].ProcessedData.EVENT_TRUSTBOUNDARY.EVENT_DRIVER_LOAD_POLICY.String")); ++}; ++``` ++### Attestation policy for Key attestation, validating keys and properties of the key. 
++``` +version=1.2; ++authorizationrules +{ + // Key Attest Policy + // -- Validating key types + c:[type=="x-ms-tpm-request-key", issuer=="AttestationService"] => add(type="requestKeyType", value=JsonToClaimValue(JmesPath(c.value, "jwk.kty"))); + c:[type=="requestKeyType", issuer=="AttestationPolicy", value=="RSA"] => issue(type="requestKeyType", value="RSA"); ++ // -- Validating tpm_certify attributes + c:[type=="x-ms-tpm-request-key", issuer=="AttestationService"] => add(type="requestKeyCertify", value=JmesPath(c.value, "info.tpm_certify")); + c:[type=="requestKeyCertify", issuer=="AttestationPolicy"] => add(type="requestKeyCertifyObjAttr", value=JsonToClaimValue(JmesPath(c.value, "obj_attr"))); + c:[type=="requestKeyCertifyObjAttr", issuer=="AttestationPolicy", value==50] => issue(type="requestKeyCertifyObjAttrVerified", value=true); + + c:[type=="requestKeyCertifyObjAttrVerified", issuer=="AttestationPolicy" , value==true] => permit(); ++}; ++issuancerules +{ + +}; +``` |
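For readers decoding the `value==50` check in the key attestation policy above: 50 is the decimal form of the TPMA_OBJECT bit combination 0x32 (the same combination annotated in the protocol examples as fixedTPM | fixedParent | sensitiveDataOrigin). A quick sketch of the arithmetic (Python; bit positions taken from the TPM 2.0 structures specification):

```python
# TPMA_OBJECT attribute bits (TPM 2.0, Part 2: Structures)
FIXED_TPM             = 0x00000002  # key cannot be duplicated off this TPM
FIXED_PARENT          = 0x00000010  # key cannot be re-parented
SENSITIVE_DATA_ORIGIN = 0x00000020  # private portion was generated inside the TPM

obj_attr = FIXED_TPM | FIXED_PARENT | SENSITIVE_DATA_ORIGIN
print(obj_attr, hex(obj_attr))  # 50 0x32 -- the value the policy's value==50 check expects
```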
attestation | Virtualization Based Security Protocol | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/virtualization-based-security-protocol.md | Title: Virtualization-based Security (VBS) protocol for Azure Attestation + Title: Virtualization-based security (VBS) protocol for Azure Attestation description: VBS attestation protocol VBS enclaves require a TPM to provide the measurement to validate the security f ## Protocol messages +The protocol has two message exchanges: +* Init message +* Request message + ### Init message+Message to establish context for the request message. #### Direction Azure Attestation -> Client **service_context** (BASE64URL(OCTETS)): Opaque context created by the service. -### Request message +## Request message +Payload containing the data to be attested by the attestation service. ++Note: Support for IMA measurement logs and keys has been added to the request message; see the Request message v2 section. ++## Request message v1 #### Direction BASE64URL(JWS Payload) || '.' || BASE64URL(JWS Signature) -##### JWS Protected Header ``` { BASE64URL(JWS Signature) } ``` -##### JWS Payload +##### JWS payload JWS payload can be of type basic or VBS. Basic is used when attestation evidence does not include VBS data. Azure Attestation -> Client **report** (JWT): The attestation report in JSON Web Token (JWT) format (RFC 7519). +## Request message v2 +++``` +{ + "request": "<JWS>" +} +``` ++**request** (JWS): Request consists of a JSON Web Signature (JWS) structure. The JWS Protected Header and JWS Payload are shown below. As in any JWS structure, the final value consists of: +BASE64URL(UTF8(JWS Protected Header)) || '.' || +BASE64URL(JWS Payload) || '.' || +BASE64URL(JWS Signature) ++##### JWS protected header +``` +{ + "alg": "PS256", + "typ": "attReqV2" + // no "kid" parameter as the key specified by request_key MUST sign this JWS to prove possession. +} +``` ++##### JWS payload ++JWS payload can be of type basic or vsm. Basic is used when attestation evidence does not include VSM data. ++Basic example: ++``` +{ + "att_type": "basic", + "att_data": { + "rp_id": "<URL>", + "rp_data": "<BASE64URL(RPCUSTOMDATA)>", + "challenge": "<BASE64URL(CHALLENGE)>", + "tpm_att_data": { + "current_attestation": { + "logs": [ + { + "type": "TCG", + "log": "<BASE64URL(CURRENT_LOG1)>" + }, + { + "type": "TCG", + "log": "<BASE64URL(CURRENT_LOG2)>" + }, + { + "type": "TCG", + "log": "<BASE64URL(CURRENT_LOG3)>" + } + ], + "aik_cert": "<BASE64URL(AIKCERTIFICATE)>", + // aik_pub is represented as a JSON Web Key (JWK) object (RFC 7517). + "aik_pub": { + "kty": "RSA", + "n": "<Base64urlUInt(MODULUS)>", + "e": "<Base64urlUInt(EXPONENT)>" + }, + "pcrs": [ + { + "algorithm": 4, // TPM_ALG_SHA1 + "values": [ + { + "index": 0, + "digest": "<BASE64URL(DIGEST)>" + }, + { + "index": 5, + "digest": "<BASE64URL(DIGEST)>" + } + ] + }, + { + "algorithm": 11, // TPM_ALG_SHA256 + "values": [ + { + "index": 2, + "digest": "<BASE64URL(DIGEST)>" + }, + { + "index": 1, + "digest": "<BASE64URL(DIGEST)>" + } + ] + } + ], + "quote": "<BASE64URL(TPMS_ATTEST)>", + "signature": "<BASE64URL(TPMT_SIGNATURE)>" + }, + "boot_attestation": { + "logs": [ + { + "type": "TCG", + "log": "<BASE64URL(BOOT_LOG1)>" + }, + { + "type": "TCG", + "log": "<BASE64URL(BOOT_LOG2)>" + } + ], + "aik_cert": "<BASE64URL(AIKCERTIFICATE)>", + // aik_pub is represented as a JSON Web Key (JWK) object (RFC 7517). 
+ "aik_pub": { + "kty": "RSA", + "n": "<Base64urlUInt(MODULUS)>", + "e": "<Base64urlUInt(EXPONENT)>" + }, + "pcrs": [ + { + "algorithm": 4, // TPM_ALG_SHA1 + "values": [ + { + "index": 0, + "digest": "<BASE64URL(DIGEST)>" + }, + { + "index": 5, + "digest": "<BASE64URL(DIGEST)>" + } + ] + }, + { + "algorithm": 11, // TPM_ALG_SHA256 + "values": [ + { + "index": 2, + "digest": "<BASE64URL(DIGEST)>" + }, + { + "index": 1, + "digest": "<BASE64URL(DIGEST)>" + } + ] + } + ], + "quote": "<BASE64URL(TPMS_ATTEST)>", + "signature": "<BASE64URL(TPMT_SIGNATURE)>" + } + }, + "request_key": { + "jwk": { + "kty": "RSA", + "n": "<Base64urlUInt(MODULUS)>", + "e": "<Base64urlUInt(EXPONENT)>" + }, + "info": { + "tpm_quote": { + "hash_alg": "sha-256" + } + } + }, + "other_keys": [ + { + "jwk": { + "kty": "RSA", + "n": "<Base64urlUInt(MODULUS)>", + "e": "<Base64urlUInt(EXPONENT)>" + }, + "info": { + "tpm_certify": { + "public": "<BASE64URL(TPMT_PUBLIC)>", + "certification": "<BASE64URL(TPMS_ATTEST)>", + "signature": "<BASE64URL(TPMT_SIGNATURE)>" + } + } + }, + { + "jwk": { + "kty": "RSA", + "n": "<Base64urlUInt(MODULUS)>", + "e": "<Base64urlUInt(EXPONENT)>" + } + } + ], + "custom_claims": [ + { + "name": "<name>", + "value": "<value>", + "value_type": "<value_type>" + }, + { + "name": "<name>", + "value": "<value>", + "value_type": "<value_type>" + } + ], + "service_context": "<BASE64URL(SERVICECONTEXT)>" + } +} +``` ++TPM + VBS enclave example: +``` +{ + "att_type": "vbs", + "att_data": { + "report_signed": { + "rp_id": "<URL>", + "rp_data": "<BASE64URL(RPCUSTOMDATA)>", + "challenge": "<BASE64URL(CHALLENGE)>", + "tpm_att_data": { + "current_attestation": { + "logs": [ + { + "type": "TCG", + "log": "<BASE64URL(CURRENT_LOG1)>" + }, + { + "type": "TCG", + "log": "<BASE64URL(CURRENT_LOG2)>" + }, + { + "type": "TCG", + "log": "<BASE64URL(CURRENT_LOG3)>" + } + ], + "aik_cert": "<BASE64URL(AIKCERTIFICATE)>", + // aik_pub is represented as a JSON Web Key (JWK) object (RFC 7517). + "aik_pub": { + "kty": "RSA", + "n": "<Base64urlUInt(MODULUS)>", + "e": "<Base64urlUInt(EXPONENT)>" + }, + "pcrs": [ + { + "algorithm": 4, // TPM_ALG_SHA1 + "values": [ + { + "index": 0, + "digest": "<BASE64URL(DIGEST)>" + }, + { + "index": 5, + "digest": "<BASE64URL(DIGEST)>" + } + ] + }, + { + "algorithm": 11, // TPM_ALG_SHA256 + "values": [ + { + "index": 2, + "digest": "<BASE64URL(DIGEST)>" + }, + { + "index": 1, + "digest": "<BASE64URL(DIGEST)>" + } + ] + } + ], + "quote": "<BASE64URL(TPMS_ATTEST)>", + "signature": "<BASE64URL(TPMT_SIGNATURE)>" + }, + "boot_attestation": { + "logs": [ + { + "type": "TCG", + "log": "<BASE64URL(BOOT_LOG1)>" + }, + { + "type": "TCG", + "log": "<BASE64URL(BOOT_LOG2)>" + } + ], + "aik_cert": "<BASE64URL(AIKCERTIFICATE)>", + // aik_pub is represented as a JSON Web Key (JWK) object (RFC 7517). 
+ "aik_pub": { + "kty": "RSA", + "n": "<Base64urlUInt(MODULUS)>", + "e": "<Base64urlUInt(EXPONENT)>" + }, + "pcrs": [ + { + "algorithm": 4, // TPM_ALG_SHA1 + "values": [ + { + "index": 0, + "digest": "<BASE64URL(DIGEST)>" + }, + { + "index": 5, + "digest": "<BASE64URL(DIGEST)>" + } + ] + }, + { + "algorithm": 11, // TPM_ALG_SHA256 + "values": [ + { + "index": 2, + "digest": "<BASE64URL(DIGEST)>" + }, + { + "index": 1, + "digest": "<BASE64URL(DIGEST)>" + } + ] + } + ], + "quote": "<BASE64URL(TPMS_ATTEST)>", + "signature": "<BASE64URL(TPMT_SIGNATURE)>" + } + }, + "request_key": { + "jwk": { + "kty": "RSA", + "n": "<Base64urlUInt(MODULUS)>", + "e": "<Base64urlUInt(EXPONENT)>" + }, + "info": { + "tpm_quote": { + "hash_alg": "sha-256" + } + } + }, + "other_keys": [ + { + "jwk": { + "kty": "RSA", + "n": "<Base64urlUInt(MODULUS)>", + "e": "<Base64urlUInt(EXPONENT)>" + }, + "info": { + "tpm_certify": { + "public": "<BASE64URL(TPMT_PUBLIC)>", + "certification": "<BASE64URL(TPMS_ATTEST)>", + "signature": "<BASE64URL(TPMT_SIGNATURE)>" + } + } + }, + { + "jwk": { + "kty": "RSA", + "n": "<Base64urlUInt(MODULUS)>", + "e": "<Base64urlUInt(EXPONENT)>" + } + } + ], + "custom_claims": [ + { + "name": "<name>", + "value": "<value>", + "value_type": "<value_type>" + }, + { + "name": "<name>", + "value": "<value>", + "value_type": "<value_type>" + } + ], + "service_context": "<BASE64URL(SERVICECONTEXT)>" + }, + "vsm_report": { + "enclave": { + "report": "<BASE64URL(REPORT)>" + } + } + } +} +``` ++**rp_id** (StringOrURI): Relying party identifier. Used by the service in the computation of the machine ID claim. ++**rp_data** (BASE64URL(OCTETS)): Opaque data passed by the relying party. This is normally used by the relying party as a nonce to guarantee freshness of the report. ++**challenge** (BASE64URL(OCTETS)): Random value issued by the service. ++- ***current_attestation*** (Object): Contains logs and TPM quote for the current state of the system (either boot or resume). The nonce received from the service must be passed to the TPM2_Quote command in the 'qualifyingData' parameter. ++- ***boot_attestation*** (Object): This is optional and contains logs and the TPM quote saved before the system hibernated and resumed. boot_attestation info must be associated with the same cold boot cycle (i.e. the system was only hibernated and resumed between them). ++- ***logs*** (Array(Object)): Array of logs. Each element of the array contains a log and the array must be in the order used for measurements. ++++++- ***aik_cert*** (BASE64URL(OCTETS)): The X.509 certificate representing the AIK. ++- ***aik_pub*** (JWK): The public part of the AIK represented as a JSON Web Key (JWK) object (RFC 7517). ++- ***pcrs*** (Array(Object)): Contains the set quoted. Each element of the array represents a PCR bank and the array must be in the order used to create the quote. A PCR bank is defined by its algorithm and its values (only the values quoted should be in the list). ++++++++++++++**vsm_report** (VSM Report Object): The VSM attestation report. See the VSM REPORT OBJECT section. ++**request_key** (Key object): Key used to sign the request. If a TPM is present (request contains TPM quote), request_key must either be bound to the TPM via quote or be resident in the TPM (see KEY OBJECT). ++**other_keys** (Array(Key object)): Array of keys to be sent to the service. Maximum of 2 keys. ++**custom_claims** (Array(Object)): Array of custom enclave claims sent to the service that can be evaluated by the policy. 
++- ***name*** (String): Name of the claim. This name will be appended to a URL determined by the Attestation Service (to avoid conflicts) and the concatenated string becomes the type of the claim that can be used in the policy. ++- ***value*** (String): Value of the claim. ++- ***value_type*** (String): Data type of the claim's value. ++**service_context** (BASE64URL(OCTETS)): Opaque, encrypted context created by the service which includes, among others, the challenge and an expiration time for that challenge. ++## Key object ++**jwk** (Object): The public part of the key represented as a JSON Web Key (JWK) object (RFC 7517). ++**info** (Object): Extra information about the key. ++• No extra information: (the info object can be empty or missing from the request) ++• Key bound to the TPM via quote: +- ***tpm_quote*** (Object): Data for the TPM quote binding method. +- ***hash_alg*** (String): The algorithm used to create the hash passed to the TPM2_Quote command in the 'qualifyingData' parameter. The hash is computed by HASH[UTF8(jwk) || 0x00 || <OCTETS(service challenge)>]. Note: UTF8(jwk) must be the exact string that will be sent on the wire as the service will compute the hash using the exact string received in the request without modifications. ++>> Note: This binding method cannot be used for keys in the other_keys array. ++• Key certified to be resident in the TPM: ++- ***tpm_certify*** (Object): Data for the TPM certification binding method. +- ***public*** (BASE64URL(OCTETS)): TPMT_PUBLIC structure representing the public area of the key in the TPM. ++- ***certification*** (BASE64URL(OCTETS)): TPMS_ATTEST returned by the TPM2_Certify command. The challenge received from the service must be passed to the TPM2_Certify command in the 'qualifyingData' parameter. The AIK provided in the request must be used to certify the key. ++- ***signature*** (BASE64URL(OCTETS)): TPMT_SIGNATURE returned by the TPM2_Certify command. The challenge received from the service must be passed to the TPM2_Certify command in the 'qualifyingData' parameter. The AIK provided in the request must be used to certify the key. ++>> Note: When this binding method is used for the request_key, the 'qualifyingData' parameter value passed to the TPM2_Quote command is simply the challenge received from the service. ++Examples: ++Key not bound to the TPM: +``` +{ + "jwk": { + "kty": "RSA", + "n": "<Base64urlUInt(MODULUS)>", + "e": "<Base64urlUInt(EXPONENT)>" + } +} +``` ++Key bound to the TPM via quote (either resident in a VBS enclave or not): +``` +{ + "jwk": { + "kty": "RSA", + "n": "<Base64urlUInt(MODULUS)>", + "e": "<Base64urlUInt(EXPONENT)>" + }, + "info": { + "tpm_quote": { + "hash_alg": "sha-256" + } + } +} +``` ++Key certified to be resident in the TPM: +``` +{ + "jwk": { + "kty": "RSA", + "n": "<Base64urlUInt(MODULUS)>", + "e": "<Base64urlUInt(EXPONENT)>" + }, + "info": { + "tpm_certify": { + "public": "<BASE64URL(TPMT_PUBLIC)>", + "certification": "<BASE64URL(TPMS_ATTEST)>", + "signature": "<BASE64URL(TPMT_SIGNATURE)>" + } + } +} +``` ++## Policy key object ++The policy key object is the version of the key object used as input claims in the policy. It is processed by the service in order to make it more readable and easier to evaluate by policy rules. ++• Key not bound to the TPM: +Same as the respective key object. 
+Example: +``` +{ + "jwk": { + "kty": "RSA", + "n": "<Base64urlUInt(MODULUS)>", + "e": "<Base64urlUInt(EXPONENT)>" + } +} +``` +• Key bound to the TPM via quote (either resident in a VBS enclave or not): +Same as the respective key object. +Example: +``` +{ + "jwk": { + "kty": "RSA", + "n": "<Base64urlUInt(MODULUS)>", + "e": "<Base64urlUInt(EXPONENT)>" + }, + "info": { + "tpm_quote": { + "hash_alg": "sha-256" + } + } +} +``` ++• Key certified to be resident in the TPM: ++***jwk*** (Object): Same as the respective key object. +***info.tpm_certify*** (Object): +- ***name_alg*** (Integer): UINT16 value representing a hash algorithm defined by the TPM_ALG_ID constants. +- ***obj_attr*** (Integer): UINT32 value representing the attributes of the key object defined by TPMA_OBJECT. +- ***auth_policy*** (BASE64URL(OCTETS)): Optional policy for using this key object. ++Example: +``` +{ + "jwk": { + "kty": "RSA", + "n": "<Base64urlUInt(MODULUS)>", + "e": "<Base64urlUInt(EXPONENT)>" + }, + "info": { + "tpm_certify": { + "name_alg": 11, // 0xB (TPM_ALG_SHA256) + "obj_attr": 50, // 0x32 (fixedTPM | fixedParent | sensitiveDataOrigin) + "auth_policy": "<BASE64URL(AUTH_POLICY)>" + } + } +} +``` ++## VBS report object ++### Enclave attestation: +***enclave*** (Object): Data for VSM enclave attestation. +- ***report*** (BASE64URL(OCTETS)): The VSM enclave attestation report as returned by function EnclaveGetAttestationReport. The EnclaveData parameter must be the SHA-512 hash of the value of report_signed (including the opening and closing braces). The hash function input is UTF8(report_signed). ++Examples: ++Enclave attestation: +``` +{ + "enclave": { + "report": "<BASE64URL(REPORT)>" + } +} +``` ++## Report message ++#### Direction +Attestation Service -> Client ++#### Payload +``` +{ + "report": "<JWT>" +} +``` ++***report*** (JWT): The attestation report in JSON Web Token (JWT) format (RFC 7519). ++ ## Next steps - [Azure Attestation workflow](workflow.md) |
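Supplementing the Key object section above: the protocol states that, for the `tpm_quote` binding, the quote's 'qualifyingData' is HASH[UTF8(jwk) || 0x00 || OCTETS(service challenge)]. A minimal sketch of that computation (Python; `sha-256` as in the examples, and the JWK string must be byte-for-byte identical to what is sent on the wire):

```python
import base64
import hashlib

def quote_qualifying_data(jwk_on_wire: str, challenge_b64url: str) -> bytes:
    # Decode the BASE64URL challenge issued by the service (restore any stripped padding).
    padding = "=" * (-len(challenge_b64url) % 4)
    challenge = base64.urlsafe_b64decode(challenge_b64url + padding)
    # HASH[ UTF8(jwk) || 0x00 || OCTETS(challenge) ] with hash_alg = sha-256.
    return hashlib.sha256(jwk_on_wire.encode("utf-8") + b"\x00" + challenge).digest()
```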
automation | Automation Availability Zones | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-availability-zones.md | In the event that a zone is down, there's no action required by you to recover from a zone failure. ## Supported regions with availability zones See [Regions and Availability Zones in Azure](../availability-zones/az-overview.md) for the Azure regions that have availability zones. -Automation accounts currently support the following regions in preview: +Automation accounts currently support the following regions: - China North 3 - Qatar Central |
automation | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/whats-new.md | This page is updated monthly, so revisit it regularly. If you're looking for ite ## October 2022 +### Public preview of PowerShell 7.2 and Python 3.10 ++Azure Automation now supports runbooks in the latest runtime versions - PowerShell 7.2 and Python 3.10 - in public preview. This enables creation and execution of runbooks for orchestration of management tasks. These new runtimes are currently supported only for cloud jobs in five regions: West Central US, East US, South Africa North, North Europe, and Australia Southeast. [Learn more](automation-runbook-types.md). + ### Guidance for Disaster Recovery of Azure Automation account -Azure Automation now supports you to build your own disaster recovery strategy to handle a region-wide or zone-wide failure. [Learn more](https://learn.microsoft.com/azure/automation/automation-disaster-recovery). +Build your own disaster recovery strategy to handle a region-wide or zone-wide failure. [Learn more](https://learn.microsoft.com/azure/automation/automation-disaster-recovery). ## September 2022 |
azure-arc | Custom Locations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/custom-locations.md | Title: "Create and manage custom locations on Azure Arc-enabled Kubernetes" Previously updated : 10/12/2022 Last updated : 11/01/2022 description: "Use custom locations to deploy Azure PaaS services on Azure Arc-enabled Kubernetes clusters" Optional parameters: To delete a custom location, use the following command: ```azurecli-az customlocation delete -n <customLocationName> -g <resourceGroupName> --namespace <name of namespace> --host-resource-id <connectedClusterId> --cluster-extension-ids <extensionIds> +az customlocation delete -n <customLocationName> -g <resourceGroupName> ``` +Required parameters: ++| Parameter name | Description | +|-|| +| `--name, -n` | Name of the custom location | +| `--resource-group, -g` | Resource group of the custom location | + ## Troubleshooting If custom location creation fails with the error 'Unknown proxy error occurred', it may be due to network policies configured to disallow pod-to-pod internal communication. |
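As a concrete illustration of the delete command described above (hypothetical resource names), you can delete a custom location and then confirm it's gone:

```azurecli
az customlocation delete -n my-custom-location -g my-resource-group
az customlocation list -g my-resource-group -o table
```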
azure-arc | Tutorial Use Gitops Flux2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-use-gitops-flux2.md | To manage GitOps through the Azure CLI or the Azure portal, you need the followi > For new AKS clusters created with `az aks create`, the cluster will be MSI-based by default. For already created SPN-based clusters that need to be converted to MSI, run `az aks update -g $RESOURCE_GROUP -n $CLUSTER_NAME --enable-managed-identity`. For more information, refer to [managed identity docs](../../aks/use-managed-identity.md). * Read and write permissions on the `Microsoft.ContainerService/managedClusters` resource type.-* Registration of your subscription with the `AKS-ExtensionManager` feature flag. Use the following command: -- ```console - az feature register --namespace Microsoft.ContainerService --name AKS-ExtensionManager - ``` ### Common to both cluster types |
azure-arc | Ssh Arc Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/ssh-arc-overview.md | allowing existing management tools to have a greater impact on Azure Arc-enabled SSH access to Arc-enabled servers provides the following key benefits: - No public IP address or open SSH ports required - Access to Windows and Linux machines+ - Ability to log in as a local user or an [Azure user (Linux only)](../../active-directory/devices/howto-vm-sign-in-azure-ad-linux.md) - Support for other OpenSSH based tooling with config file support ## Prerequisites SSH access to Arc-enabled servers is currently supported in the following region - Ubuntu Server: Ubuntu Server 16.04 to Ubuntu Server 20.04 ## Getting started-### Register the HybridConnectivity resource provider ++### Install local command line tool +This functionality is currently packaged in an Azure CLI extension and an Azure PowerShell module. +#### [Install Azure CLI extension](#tab/azure-cli) ++```az extension add --name ssh``` + > [!NOTE]-> This is a one-time operation that needs to be performed on each subscription. +> The Azure CLI extension version must be greater than 1.1.0. -Check if the HybridConnectivity resource provider (RP) has been registered: +#### [Install Azure PowerShell module](#tab/azure-powershell) -```az provider show -n Microsoft.HybridConnectivity``` +```Install-Module -Name AzPreview -Scope CurrentUser -Repository PSGallery -Force``` -If the RP has not been registered, run the following: +### Enable functionality on your Arc-enabled server +In order to use the SSH connect feature, you must enable connections on the hybrid agent. -```az provider register -n Microsoft.HybridConnectivity``` +> [!NOTE] +> The following actions must be completed in an elevated terminal session. -This operation can take 2-5 minutes to complete. Before moving on, check that the RP has been registered. +View your current incoming connections: -### Install az CLI extension -This functionality is currently package in an az CLI extension. -In order to install this extension, run: +```azcmagent config list``` -```az extension add --name ssh``` +If you have existing ports, you'll need to include them in the following command. -If you already have the extension installed, it can be updated by running: +To add access to SSH connections, run the following: -```az extension update --name ssh``` +```azcmagent config set incomingconnections.ports 22<,other open ports,...>``` ++If you're using a non-default port for your SSH connection, replace port 22 with your desired port in the previous command. > [!NOTE]-> The Azure CLI extension version must be greater than 1.1.0. +> The following steps will not need to be run for most users. ++### Register the HybridConnectivity resource provider +> [!NOTE] +> This is a one-time operation that needs to be performed on each subscription. ++Check if the HybridConnectivity resource provider (RP) has been registered: ++```az provider show -n Microsoft.HybridConnectivity``` ++If the RP hasn't been registered, run the following: ++```az provider register -n Microsoft.HybridConnectivity``` ++This operation can take 2-5 minutes to complete. Before moving on, check that the RP has been registered. 
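One way to confirm the registration state from the same shell before continuing (the `registrationState` property should read `Registered`):

```azurecli
az provider show -n Microsoft.HybridConnectivity --query registrationState -o tsv
```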
### Create default connectivity endpoint > [!NOTE] Validate endpoint creation: az rest --method get --uri https://management.azure.com/subscriptions/<subscription>/resourceGroups/<resourcegroup>/providers/Microsoft.HybridCompute/machines/<arc enabled server name>/providers/Microsoft.HybridConnectivity/endpoints/default?api-version=2021-10-06-preview ``` -### Enable functionality on your Arc-enabled server -In order to use the SSH connect feature, you must enable connections on the hybrid agent. --> [!NOTE] -> The following actions must be completed in an elevated terminal session. --View your current incoming connections: --```azcmagent config list``` --If you have existing ports, you will need to include them in the following command. --To add access to SSH connections, run the following: --```azcmagent config set incomingconnections.ports 22<,other open ports,...>``` --> [!NOTE] -> If you are using a non-default port for your SSH connection, replace port 22 with your desired port in the previous command. - ## Examples-To view examples of using the ```az ssh arc``` command, view the az CLI documentation page for [az ssh](/cli/azure/ssh). +To view examples, see the Azure CLI documentation page for [az ssh](/cli/azure/ssh) or the Azure PowerShell documentation page for [Az.Ssh](/powershell/module/az.ssh). |
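For orientation, a hypothetical connection attempt might look like the following (placeholder names; see the linked `az ssh` reference for the authoritative parameter list):

```azurecli
az ssh arc --resource-group my-rg --name my-arc-server --local-user myuser
```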
azure-functions | Functions App Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-app-settings.md | The [WEBSITE_CONTENTAZUREFILECONNECTIONSTRING](#website_contentazurefileconnecti If validation is skipped and either the connection string or content share are not valid, the app will be unable to start properly and will only serve HTTP 500 errors. +## WEBSITE\_SLOT\_NAME ++Read-only. Name of the current deployment slot. The name of the production slot is `Production`. ++|Key|Sample value| +||| +|WEBSITE_SLOT_NAME|`Production`| + ## WEBSITE\_TIME\_ZONE Allows you to set the timezone for your function app. |
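Both settings above surface as environment variables at runtime. For example, a function that needs slot-aware behavior could read `WEBSITE_SLOT_NAME` directly (a minimal sketch in Python; the fallback value is an assumption for local runs where the variable may be absent):

```python
import os

# WEBSITE_SLOT_NAME is read-only and set by the platform; default to "Production"
# for local/dev scenarios where it isn't defined.
slot_name = os.environ.get("WEBSITE_SLOT_NAME", "Production")
is_production_slot = slot_name.lower() == "production"
```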
azure-maps | How To Dataset Geojson | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-dataset-geojson.md | + + Title: How to create a dataset using a GeoJSON package +description: Learn how to create a dataset using a GeoJSON package. ++ Last updated : 10/31/2021++++++# Create a dataset using a GeoJSON package (Preview) ++Azure Maps Creator enables users to import their indoor map data in GeoJSON format with [Facility Ontology 2.0][Facility Ontology], which can then be used to create a [dataset][dataset-concept]. ++> [!NOTE] +> This article explains how to create a dataset from a GeoJSON package. For information on additional steps required to complete an indoor map, see [Next steps](#next-steps). ++## Prerequisites ++- Basic understanding of [Creator for indoor maps](creator-indoor-maps.md). +- Basic understanding of [Facility Ontology 2.0][Facility Ontology]. +- [Azure Maps account][Azure Maps account]. +- [Azure Maps Creator resource][Creator resource]. +- [Subscription key][Subscription key]. +- Zip package containing all required GeoJSON files. If you don't have GeoJSON + files, you can download the [Contoso building sample][Contoso building sample]. ++>[!IMPORTANT] +> +> - This article uses the `us.atlas.microsoft.com` geographical URL. If your Creator service wasn't created in the United States, you must use a different geographical URL. For more information, see [Access to Creator Services](how-to-manage-creator.md#access-to-creator-services). +> - In the URL examples in this article you will need to replace `{Your-Azure-Maps-Primary-Subscription-key}` with your primary subscription key. ++## Create dataset using the GeoJSON package ++For more information on the GeoJSON package, see the [GeoJSON zip package requirements](#geojson-zip-package-requirements) section. ++### Upload the GeoJSON package ++Use the [Data Upload API](/rest/api/maps/data-v2/upload) to upload the GeoJSON package to your Azure Maps Creator account. ++The Data Upload API is a long-running transaction that implements the pattern defined in [Creator Long-Running Operation API V2](creator-long-running-operation-v2.md). ++To upload the GeoJSON package: ++1. Execute the following HTTP POST request that uses the [Data Upload API](/rest/api/maps/data-v2/upload): ++ ```http + https://us.atlas.microsoft.com/mapData?api-version=2.0&dataFormat=zip&subscription-key={Your-Azure-Maps-Primary-Subscription-key} + ``` ++ 1. Set `Content-Type` in the **Header** to `application/zip`. ++1. Copy the value of the `Operation-Location` key in the response header. The `Operation-Location` key is also known as the `status URL` and is required to check the status of the upload, which is explained in the next section. ++### Check the GeoJSON package upload status ++To check the status of the GeoJSON package and retrieve its unique identifier (`udid`): ++1. Execute the following HTTP GET request that uses the status URL you copied as the last step in the previous section of this article. The request should look like the following URL: ++```http +https://us.atlas.microsoft.com/mapData/operations/{operationId}?api-version=2.0&subscription-key={Your-Azure-Maps-Primary-Subscription-key} +``` ++1. Copy the value of the `Resource-Location` key in the response header, which is the `resource location URL`. The `resource location URL` contains the unique identifier (`udid`) of the GeoJSON package resource. 
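Putting the upload and status check together, a minimal sketch in bash/curl (the file name `ContosoBuilding.zip` is hypothetical, header parsing is simplified, and appending the subscription key to the status URL follows the pattern shown above):

```bash
KEY="<Your-Azure-Maps-Primary-Subscription-key>"

# 1. POST the zipped GeoJSON package; the Operation-Location response header is the status URL.
STATUS_URL=$(curl -si -X POST \
  "https://us.atlas.microsoft.com/mapData?api-version=2.0&dataFormat=zip&subscription-key=${KEY}" \
  -H "Content-Type: application/zip" \
  --data-binary @ContosoBuilding.zip \
  | grep -i '^operation-location:' | cut -d' ' -f2 | tr -d '\r')

# 2. Poll the status URL; on success the Resource-Location header contains the udid.
curl -si "${STATUS_URL}&subscription-key=${KEY}" | grep -i '^resource-location:'
```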
++### Create a dataset +<!-- +A dataset is a collection of map features, such as buildings, levels, and rooms. To create a dataset from your GeoJSON, use the new [Dataset Create API][Dataset Create 2022-09-01-preview]. The Dataset Create API takes the `udid` you got in the previous section and returns the `datasetId` of the new dataset. +--> +A dataset is a collection of map features, such as buildings, levels, and rooms. To create a dataset from your GeoJSON, use the new create dataset API. The create dataset API takes the `udid` you got in the previous section and returns the `datasetId` of the new dataset. ++> [!IMPORTANT] +> This is different from the [previous version][Dataset Create] in that it doesn't require a `conversionId` from a converted Drawing package. ++To create a dataset: ++1. Enter the following URL to the dataset service. The request should look like the following URL (replace {udid} with the `udid` obtained in [Check the GeoJSON package upload status](#check-the-geojson-package-upload-status) section): ++<!--1. Enter the following URL to the [Dataset service][Dataset Create 2022-09-01-preview]. The request should look like the following URL (replace {udid} with the `udid` obtained in [Check the GeoJSON package upload status](#check-the-geojson-package-upload-status) section):--> ++ ```http + https://us.atlas.microsoft.com/datasets?api-version=2022-09-01-preview&udid={udid}&subscription-key={Your-Azure-Maps-Primary-Subscription-key} + ``` ++1. Copy the value of the `Operation-Location` key in the response header. The `Operation-Location` key is also known as the `status URL` and is required to check the status of the dataset creation process and to get the `datasetId`, which is required to create a tileset. ++### Check the dataset creation status ++To check the status of the dataset creation process and retrieve the `datasetId`: ++1. Enter the status URL you copied in [Create a dataset](#create-a-dataset). The request should look like the following URL: ++ ```http + https://us.atlas.microsoft.com/datasets/operations/{operationId}?api-version=2022-09-01-preview&subscription-key={Your-Azure-Maps-Primary-Subscription-key} + ``` ++1. In the Header of the HTTP response, copy the value of the unique identifier contained in the `Resource-Location` key. ++ > `https://us.atlas.microsoft.com/datasets/**c9c15957-646c-13f2-611a-1ea7adc75174**?api-version=2022-09-01-preview` ++See [Next steps](#next-steps) for links to articles to help you complete your indoor map. ++## Add data to an existing dataset ++<!-- +Data can be added to an existing dataset by providing the `datasetId` parameter to the [dataset create API][Dataset Create 2022-09-01-preview] along with the unique identifier of the data you wish to add. The unique identifier can be either a `udid` or `conversionId`. This creates a new dataset consisting of the data (facilities) from both the existing dataset and the new data being imported. Once the new dataset has been created successfully, the old dataset can be deleted. +--> ++Data can be added to an existing dataset by providing the `datasetId` parameter to the create dataset API along with the unique identifier of the data you wish to add. The unique identifier can be either a `udid` or `conversionId`. This creates a new dataset consisting of the data (facilities) from both the existing dataset and the new data being imported. Once the new dataset has been created successfully, the old dataset can be deleted. 
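For example, appending another GeoJSON package by its `udid` might look like the following hypothetical request, mirroring the `conversionId` form shown in the next section:

```http
https://us.atlas.microsoft.com/datasets?api-version=2022-09-01-preview&udid={udid}&datasetId={datasetId}&subscription-key={Your-Azure-Maps-Primary-Subscription-key}
```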
++One thing to consider when adding to an existing dataset is how the feature IDs are created. If a dataset is created from a converted drawing package, the feature IDs are generated automatically. When a dataset is created from a GeoJSON package, feature IDs must be provided in the GeoJSON file. When appending to an existing dataset, the original dataset drives the way feature IDs are created. If the original dataset was created using a `udid`, it uses the IDs from the GeoJSON, and will continue to do so with all GeoJSON packages appended to that dataset in the future. If the dataset was created using a `conversionId`, IDs will be internally generated, and will continue to be internally generated with all GeoJSON packages appended to that dataset in the future. ++### Add to dataset created from a GeoJSON source ++If your original dataset was created from a GeoJSON source and you wish to add another facility created from a drawing package, you can append it to your existing dataset by referencing its `conversionId`, as demonstrated by this HTTP POST request: ++```http +https://us.atlas.microsoft.com/datasets?api-version=2022-09-01-preview&conversionId={conversionId}&outputOntology=facility-2.0&datasetId={datasetId} +``` ++| Identifier | Description | +|--|-| +| conversionId | The ID returned when converting your drawing package. For more information, see [Convert a Drawing package][conversion]. | +| datasetId | The dataset ID returned when creating the original dataset from a GeoJSON package. | ++<!--For more information, see [][].--> ++## GeoJSON zip package requirements ++The GeoJSON zip package consists of one or more [RFC 7946][RFC 7946] compliant GeoJSON files, one for each feature class, all in the root directory (subdirectories aren't supported), compressed with standard Zip compression and named using the `.ZIP` extension. ++Each feature class file must match its definition in the [Facility ontology 2.0][Facility ontology] and each feature must have a globally unique identifier. ++Feature IDs can only contain alpha-numeric (a-z, A-Z, 0-9), hyphen (-), dot (.) and underscore (_) characters. ++> [!TIP] +> If you want to be certain you have a globally unique identifier (GUID), consider creating it by running a GUID-generating tool such as the Guidgen.exe command-line program (available with [Visual Studio][Visual Studio]). Guidgen.exe never produces the same number twice, no matter how many times it is run or how many different machines it runs on. ++### Facility ontology 2.0 validations in the Dataset ++[Facility ontology][Facility ontology] defines how Azure Maps Creator internally stores facility data, divided into feature classes, in a Creator dataset. When importing a GeoJSON package, anytime a feature is added or modified, a series of validations run. This includes referential integrity checks as well as geometry and attribute validations. These validations are described in more detail below. ++- The maximum number of features that can be imported into a dataset at a time is 150,000. +- The facility area can be between 4 and 4,000 sq km. +- The top-level element is [facility][facility], which defines each building in the file *facility.geojson*. +- Each facility has one or more levels, which are defined in the file *levels.geojson*. + - Each level must be inside the facility. +- Each [level][level] contains [units][unit], [structures][structure], [verticalPenetrations][verticalPenetration] and [openings][opening]. 
All of the items defined in the level must be fully contained within the level geometry. + - `unit` can consist of an array of items such as hallways, offices, and courtyards, which are defined by [area][areaElement], [line][lineElement] or [point][pointElement] elements. Units are defined in the file *unit.geojson*. + - All `unit` elements must be fully contained within their level and intersect with their children. + - `structure` defines physical, non-overlapping areas that can't be navigated through, such as a wall. Structures are defined in the file *structure.geojson*. + - `verticalPenetration` represents a method of navigating vertically between levels, such as stairs and elevators, and is defined in the file *verticalPenetration.geojson*. + - verticalPenetrations can't intersect with other verticalPenetrations on the same level. + - `opening` defines a traversable boundary between two units, or between a `unit` and a `verticalPenetration`, and is defined in the file *opening.geojson*. + - Openings can't intersect with other openings on the same level. + - Every `opening` must be associated with at least one `verticalPenetration` or `unit`. ++## Next steps ++> [!div class="nextstepaction"] +> [Create a tileset](tutorial-creator-indoor-maps.md#create-a-tileset) ++> [!div class="nextstepaction"] +> [Query datasets with WFS API](tutorial-creator-wfs.md) ++> [!div class="nextstepaction"] +> [Create a feature stateset](tutorial-creator-feature-stateset.md) ++[Contoso building sample]: https://github.com/Azure-Samples/am-creator-indoor-data-examples +[unit]: creator-facility-ontology.md?pivots=facility-ontology-v2#unit +[structure]: creator-facility-ontology.md?pivots=facility-ontology-v2#structure +[level]: creator-facility-ontology.md?pivots=facility-ontology-v2#level +[facility]: creator-facility-ontology.md?pivots=facility-ontology-v2#facility +[verticalPenetration]: creator-facility-ontology.md?pivots=facility-ontology-v2#verticalpenetration +[opening]: creator-facility-ontology.md?pivots=facility-ontology-v2#opening +[areaElement]: creator-facility-ontology.md?pivots=facility-ontology-v2#areaelement +[lineElement]: creator-facility-ontology.md?pivots=facility-ontology-v2#lineelement +[pointElement]: creator-facility-ontology.md?pivots=facility-ontology-v2#pointelement ++[conversion]: tutorial-creator-indoor-maps.md#convert-a-drawing-package +[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account +[Creator resource]: how-to-manage-creator.md +[Subscription key]: quick-demo-map-app.md#get-the-primary-key-for-your-account +[Facility Ontology]: creator-facility-ontology.md?pivots=facility-ontology-v2 +[RFC 7946]: https://www.rfc-editor.org/rfc/rfc7946.html +[dataset-concept]: creator-indoor-maps.md#datasets +<!--[Dataset Create 2022-09-01-preview]: /rest/api/maps/v20220901preview/dataset/create--> +[Visual Studio]: https://visualstudio.microsoft.com/downloads/ |
azure-maps | Tutorial Creator Indoor Maps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-creator-indoor-maps.md | In the next tutorials in the Creator series you'll learn to: > * Create a feature stateset that can be used to set the states of features in your dataset. > * Update the state of a given map feature. +> [!TIP] +> You can also create a dataset from a GeoJSON package. For more information, see [Create a dataset using a GeoJson package (Preview)](how-to-dataset-geojson.md). + ## Prerequisites 1. [Make an Azure Maps account](quick-demo-map-app.md#create-an-azure-maps-account). |
azure-monitor | Diagnostics Extension Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/diagnostics-extension-overview.md | The Log Analytics agent in Azure Monitor can also be used to collect monitoring The key differences to consider are: -- Azure Diagnostics extension can be used only with Azure virtual machines. The Log Analytics agent can be used with virtual machines in Azure, other clouds, and on-premises.-- Azure Diagnostics extension sends data to Azure Storage, [Azure Monitor Metrics](../essentials/data-platform-metrics.md) (Windows only), and Azure Event Hubs. The Log Analytics agent collects data to [Azure Monitor Logs](../logs/data-platform-logs.md).-- The Log Analytics agent is required for [solutions](../monitor-reference.md#insights-and-curated-visualizations), [VM insights](../vm/vminsights-overview.md), and other services such as [Microsoft Defender for Cloud](../../security-center/index.yml).+- Azure Diagnostics extension can be used only with Azure virtual machines. The Log Analytics agent can be used with virtual machines in Azure, other clouds, and on-premises. +- Azure Diagnostics extension sends data to Azure Storage, [Azure Monitor Metrics](../essentials/data-platform-metrics.md) (Windows only), and Azure Event Hubs. The Log Analytics agent collects data to [Azure Monitor Logs](../logs/data-platform-logs.md). +- The Log Analytics agent is required for retired [solutions](../insights/solutions.md), [VM insights](../vm/vminsights-overview.md), and other services such as [Microsoft Defender for Cloud](../../security-center/index.yml). ## Costs |
azure-monitor | Log Analytics Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/log-analytics-agent.md | Use the Log Analytics agent if you need to: * Use [VM insights](../vm/vminsights-overview.md), which allows you to monitor your machines at scale and monitor their processes and dependencies on other resources and external processes. * Manage the security of your machines by using [Microsoft Defender for Cloud](../../security-center/security-center-introduction.md) or [Microsoft Sentinel](../../sentinel/overview.md). * Use [Azure Automation Update Management](../../automation/update-management/overview.md), [Azure Automation State Configuration](../../automation/automation-dsc-overview.md), or [Azure Automation Change Tracking and Inventory](../../automation/change-tracking/overview.md) to deliver comprehensive management of your Azure and non-Azure machines.-* Use different [solutions](../monitor-reference.md#insights-and-curated-visualizations) to monitor a particular service or application. +* Use different [solutions](../insights/solutions.md) to monitor a particular service or application. Limitations of the Log Analytics agent: |
azure-monitor | Itsmc Connections Cherwell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-connections-cherwell.md | - Title: Connect Cherwell with IT Service Management Connector -description: This article provides information about how to Cherwell with the IT Service Management Connector (ITSMC) in Azure Monitor to centrally monitor and manage the ITSM work items. - Previously updated : 2/23/2022----# Connect Cherwell with IT Service Management Connector --This article provides information about how to configure the connection between your Cherwell instance and the IT Service Management Connector (ITSMC) in Log Analytics to centrally manage your work items. --> [!NOTE] -> As of 1-Oct-2020 Cherwell ITSM integration with Azure Alert will no longer be enabled for new customers. New ITSM Connections will not be supported. -> Existing ITSM connections will be supported. --The following sections provide details about how to connect your Cherwell product to ITSMC in Azure. --## Prerequisites --Ensure the following prerequisites are met: --- ITSMC installed. More information: [Adding the IT Service Management Connector Solution](./itsmc-definition.md#install-it-service-management-connector).-- Client ID generated. More information: [Generate client ID for Cherwell](#generate-client-id-for-cherwell).-- User role: Administrator.--## Connection procedure --Use the following procedure to create a Cherwell connection: --1. In Azure portal, go to **All Resources** and look for **ServiceDesk(YourWorkspaceName)** --2. Under **WORKSPACE DATA SOURCES** click **ITSM Connections**. -  --3. At the top of the right pane, click **Add**. --4. Provide the information as described in the following table, and click **OK** to create the connection. --> [!NOTE] -> All these parameters are mandatory. --| **Field** | **Description** | -| | | -| **Connection Name** | Type a name for the Cherwell instance that you want to connect to ITSMC. You use this name later when you configure work items in this ITSM/ view detailed log analytics. | -| **Partner type** | Select **Cherwell.** | -| **Username** | Type the Cherwell user name that can connect to ITSMC. | -| **Password** | Type the password associated with this user name. **Note:** User name and password are used for generating authentication tokens only, and are not stored anywhere within the ITSMC service.| -| **Server URL** | Type the URL of your Cherwell instance that you want to connect to ITSMC. | -| **Client ID** | Type the client ID for authenticating this connection, which you generated in your Cherwell instance. | -| **Data Sync Scope** | Select the Cherwell work items that you want to sync through ITSMC. These work items are imported into log analytics. **Options:** Incidents, Change Requests. | -| **Sync Data** | Type the number of past days that you want the data from. **Maximum limit**: 120 days. | -| **Create new configuration item in ITSM solution** | Select this option if you want to create the configuration items in the ITSM product. When selected, ITSMC creates the affected CIs as configuration items (in case of non-existing CIs) in the supported ITSM system. **Default**: disabled. 
| -- --**When successfully connected, and synced**: --- Selected work items from this Cherwell instance are imported into Azure **Log Analytics.** You can view the summary of these work items on the IT Service Management Connector tile.--- You can create incidents from Log Analytics alerts or from log records, or from Azure alerts in this Cherwell instance.--Learn more: [Create ITSM work items from Azure alerts](./itsmc-definition.md#create-itsm-work-items-from-azure-alerts). --### Generate client ID for Cherwell --To generate the client ID/key for Cherwell, use the following procedure: --1. Log in to your Cherwell instance as admin. -2. Click **Security** > **Edit REST API client settings**. -3. Select **Create new client** > **client secret**. --  --## Next steps --* [ITSM Connector Overview](itsmc-overview.md) -* [Create ITSM work items from Azure alerts](./itsmc-definition.md#create-itsm-work-items-from-azure-alerts) -* [Troubleshooting problems in ITSM Connector](./itsmc-resync-servicenow.md) |
azure-monitor | Itsmc Connections Provance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-connections-provance.md | - Title: Connect Provance with IT Service Management Connector -description: This article provides information about how to Provance with the IT Service Management Connector (ITSMC) in Azure Monitor to centrally monitor and manage the ITSM work items. - Previously updated : 2/23/2022-----# Connect Provance with IT Service Management Connector --This article provides information about how to configure the connection between your Provance instance and the IT Service Management Connector (ITSMC) in Log Analytics to centrally manage your work items. --> [!NOTE] -> As of 1-Oct-2020 Provance ITSM integration with Azure Alert will no longer be enabled for new customers. New ITSM Connections will not be supported. -> Existing ITSM connections will be supported. --The following sections provide details about how to connect your Provance product to ITSMC in Azure. --## Prerequisites --Ensure the following prerequisites are met: --- ITSMC installed. More information: [Adding the IT Service Management Connector Solution](./itsmc-definition.md#install-it-service-management-connector).-- Provance App should be registered with Azure AD - and client ID is made available. For detailed information, see [how to configure active directory authentication](../../app-service/configure-authentication-provider-aad.md).--- User role: Administrator.--## Connection procedure --Use the following procedure to create a Provance connection: --1. In Azure portal, go to **All Resources** and look for **ServiceDesk(YourWorkspaceName)** --2. Under **WORKSPACE DATA SOURCES** click **ITSM Connections**. -  --3. At the top of the right pane, click **Add**. --4. Provide the information as described in the following table, and click **OK** to create the connection. --> [!NOTE] -> All these parameters are mandatory. --| **Field** | **Description** | -| | | -| **Connection Name** | Type a name for the Provance instance that you want to connect with ITSMC. You use this name later when you configure work items in this ITSM/ view detailed log analytics. | -| **Partner type** | Select **Provance**. | -| **Username** | Type the user name that can connect to ITSMC. | -| **Password** | Type the password associated with this user name. **Note:** User name and password are used for generating authentication tokens only, and are not stored anywhere within the ITSMC service.| -| **Server URL** | Type the URL of your Provance instance that you want to connect to ITSMC. | -| **Client ID** | Type the client ID for authenticating this connection, which you generated in your Provance instance. More information on client ID, see [how to configure active directory authentication](../../app-service/configure-authentication-provider-aad.md). | -| **Data Sync Scope** | Select the Provance work items that you want to sync to Azure Log Analytics, through ITSMC. These work items are imported into log analytics. **Options:** Incidents, Change Requests.| -| **Sync Data** | Type the number of past days that you want the data from. **Maximum limit**: 120 days. | -| **Create new configuration item in ITSM solution** | Select this option if you want to create the configuration items in the ITSM product. When selected, ITSMC creates the affected CIs as configuration items (in case of non-existing CIs) in the supported ITSM system. 
**Default**: disabled.| -- --**When successfully connected, and synced**: --- Selected work items from this Provance instance are imported into Azure **Log Analytics.** You can view the summary of these work items on the IT Service Management Connector tile.--- You can create incidents from Log Analytics alerts or from log records, or from Azure alerts in this Provance instance.--## Next steps --* [ITSM Connector Overview](itsmc-overview.md) -* [Create ITSM work items from Azure alerts](./itsmc-definition.md#create-itsm-work-items-from-azure-alerts) -* [Troubleshooting problems in ITSM Connector](./itsmc-resync-servicenow.md) |
azure-monitor | Itsmc Connections | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-connections.md | - Title: IT Service Management Connector in Azure Monitor -description: This article provides information about how to connect your ITSM products or services with the IT Service Management Connector (ITSMC) in Azure Monitor to centrally monitor and manage the ITSM work items. - Previously updated : 2/23/2022----# Connect ITSM products/services with IT Service Management Connector -This article provides information about how to configure the connection between your ITSM product or service and the IT Service Management Connector (ITSMC) in Log Analytics to centrally manage your work items. For more information about ITSMC, see [Overview](./itsmc-overview.md). --To set up your ITSM environment: -1. Connect to your ITSM. -- - For ServiceNow ITSM, see [the ServiceNow connection instructions](./itsmc-connections-servicenow.md). - - For SCSM, see [the System Center Service Manager connection instructions](/azure/azure-monitor/alerts/itsmc-connections). -- >[!NOTE] - > As of March 1, 2022, System Center ITSM integrations with Azure alerts is no longer enabled for new customers. New System Center ITSM Connections are not supported. - > Existing ITSM connections are supported. -2. (Optional) Set up the IP Ranges. In order to list the ITSM IP addresses in order to allow ITSM connections from partners ITSM tools, we recommend the to list the whole public IP range of Azure region where their LogAnalytics workspace belongs. [details here](https://www.microsoft.com/en-us/download/details.aspx?id=56519) -For regions EUS/WEU/EUS2/WUS2/US South Central the customer can list ActionGroup network tag only. --## Next steps --* [Troubleshooting problems in ITSM Connector](./itsmc-resync-servicenow.md) |
azure-monitor | Api Filtering Sampling | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/api-filtering-sampling.md | ASP.NET **Core/Worker service apps** > [!NOTE] > Adding a processor by using `ApplicationInsights.config` or `TelemetryConfiguration.Active` isn't valid for ASP.NET Core applications or if you're using the Microsoft.ApplicationInsights.WorkerService SDK. -For apps written by using [ASP.NET Core](asp-net-core.md#adding-telemetry-processors) or [WorkerService](worker-service.md#adding-telemetry-processors), adding a new telemetry processor is done by using the `AddApplicationInsightsTelemetryProcessor` extension method on `IServiceCollection`, as shown. This method is called in the `ConfigureServices` method of your `Startup.cs` class. +For apps written by using [ASP.NET Core](asp-net-core.md#adding-telemetry-processors) or [WorkerService](worker-service.md#add-telemetry-processors), adding a new telemetry processor is done by using the `AddApplicationInsightsTelemetryProcessor` extension method on `IServiceCollection`, as shown. This method is called in the `ConfigureServices` method of your `Startup.cs` class. ```csharp public void ConfigureServices(IServiceCollection services) ASP.NET **Core/Worker service apps: Load your initializer** > [!NOTE] > Adding an initializer by using `ApplicationInsights.config` or `TelemetryConfiguration.Active` isn't valid for ASP.NET Core applications or if you're using the Microsoft.ApplicationInsights.WorkerService SDK. -For apps written using [ASP.NET Core](asp-net-core.md#adding-telemetryinitializers) or [WorkerService](worker-service.md#adding-telemetryinitializers), adding a new telemetry initializer is done by adding it to the Dependency Injection container, as shown. Accomplish this step in the `Startup.ConfigureServices` method. +For apps written using [ASP.NET Core](asp-net-core.md#adding-telemetryinitializers) or [WorkerService](worker-service.md#add-telemetry-initializers), adding a new telemetry initializer is done by adding it to the Dependency Injection container, as shown. Accomplish this step in the `Startup.ConfigureServices` method. ```csharp using Microsoft.ApplicationInsights.Extensibility; |
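The processor and initializer snippets in this row are truncated in the digest. A minimal self-contained sketch of both registration patterns, wired up in `Startup.ConfigureServices`, follows; the filter threshold, role name, and class names are illustrative assumptions rather than SDK requirements:

```csharp
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.ApplicationInsights.Extensibility;
using Microsoft.Extensions.DependencyInjection;

// Example processor: drops successful, fast dependency calls to cut telemetry volume.
public class FastDependencyFilter : ITelemetryProcessor
{
    private readonly ITelemetryProcessor _next;

    public FastDependencyFilter(ITelemetryProcessor next) => _next = next;

    public void Process(ITelemetry item)
    {
        if (item is DependencyTelemetry dependency &&
            dependency.Success == true &&
            dependency.Duration.TotalMilliseconds < 100) // illustrative threshold
        {
            return; // drop the item by not passing it on
        }

        _next.Process(item);
    }
}

// Example initializer: enriches every telemetry item with a cloud role name.
public class CloudRoleNameInitializer : ITelemetryInitializer
{
    public void Initialize(ITelemetry telemetry) =>
        telemetry.Context.Cloud.RoleName = "my-web-frontend"; // hypothetical name
}

public void ConfigureServices(IServiceCollection services)
{
    services.AddApplicationInsightsTelemetry();
    services.AddApplicationInsightsTelemetryProcessor<FastDependencyFilter>();
    services.AddSingleton<ITelemetryInitializer, CloudRoleNameInitializer>();
}
```

A processor can drop or modify items in the pipeline (note the `_next.Process` call), while an initializer can only enrich items, which is why the two are registered through different extension points.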
azure-monitor | Azure Web Apps Net Core | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps-net-core.md | Title: Monitor Azure app services performance .NET Core | Microsoft Docs -description: Application performance monitoring for Azure app services using ASP.NET Core. Chart load and response time, dependency information, and set alerts on performance. + Title: Monitor Azure App Service performance in .NET Core | Microsoft Docs +description: Application performance monitoring for Azure App Service using ASP.NET Core. Chart load and response time, dependency information, and set alerts on performance. Last updated 08/05/2021 ms.devlang: csharp-# Application Monitoring for Azure App Service and ASP.NET Core +# Application monitoring for Azure App Service and ASP.NET Core -Enabling monitoring on your ASP.NET Core based web applications running on [Azure App Services](../../app-service/index.yml) is now easier than ever. Whereas previously you needed to manually instrument your app, the latest extension/agent is now built into the App Service image by default. This article will walk you through enabling Azure Monitor application Insights monitoring as well as provide preliminary guidance for automating the process for large-scale deployments. +Enabling monitoring on your ASP.NET Core-based web applications running on [Azure App Service](../../app-service/index.yml) is now easier than ever. Previously, you needed to manually instrument your app. Now, the latest extension/agent is built into the App Service image by default. This article walks you through enabling Azure Monitor Application Insights monitoring. It also provides preliminary guidance for automating the process for large-scale deployments. [!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)] For a complete list of supported auto-instrumentation scenarios, see [Supported # [Windows](#tab/Windows) > [!IMPORTANT]-> Only .NET Core [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) is supported for auto-instrumentation on Windows. +> Only .NET Core [Long Term Support](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) is supported for auto-instrumentation on Windows. -> [!NOTE] -> Auto-instrumentation used to be known as "codeless attach" before October 2021. +[Trim self-contained deployments](/dotnet/core/deploying/trimming/trim-self-contained) is *not supported*. Use [manual instrumentation](./asp-net-core.md) via code instead. -[Trim self-contained deployments](/dotnet/core/deploying/trimming/trim-self-contained) is **not supported**. Use [manual instrumentation](./asp-net-core.md) via code instead. +> [!NOTE] +> Auto-instrumentation used to be known as "codeless attach" before October 2021. -See the [enable monitoring section](#enable-monitoring) below to begin setting up Application Insights with your App Service resource. +See the following [Enable monitoring](#enable-monitoring) section to begin setting up Application Insights with your App Service resource. # [Linux](#tab/Linux) > [!IMPORTANT] > Only ASP.NET Core 6.0 is supported for auto-instrumentation on Linux. -> [!NOTE] -> Linux auto-instrumentation App Services portal enablement is in Public Preview. These preview versions are provided without a service level agreement. Certain features might not be supported or might have constrained capabilities. 
+[Trim self-contained deployments](/dotnet/core/deploying/trimming/trim-self-contained) is *not supported*. Use [manual instrumentation](./asp-net-core.md) via code instead. -[Trim self-contained deployments](/dotnet/core/deploying/trimming/trim-self-contained) is **not supported**. Use [manual instrumentation](./asp-net-core.md) via code instead. +> [!NOTE] +> Linux auto-instrumentation App Service portal enablement is in public preview. These preview versions are provided without a service level agreement. Certain features might not be supported or might have constrained capabilities. -See the [enable monitoring section](#enable-monitoring) below to begin setting up Application Insights with your App Service resource. +See the following [Enable monitoring](#enable-monitoring) section to begin setting up Application Insights with your App Service resource. - -### Enable monitoring +### Enable monitoring -1. **Select Application Insights** in the Azure control panel for your app service, then select **Enable**. +1. Select **Application Insights** in the left pane for your app service. Then select **Enable**. - :::image type="content"source="./media/azure-web-apps/enable.png" alt-text=" Screenshot of Application Insights tab with enable selected."::: + :::image type="content"source="./media/azure-web-apps/enable.png" alt-text=" Screenshot that shows the Application Insights tab with Enable selected."::: -2. Choose to create a new resource, or select an existing Application Insights resource for this application. +1. Create a new resource or select an existing Application Insights resource for this application. > [!NOTE]- > When you click **OK** to create the new resource you will be prompted to **Apply monitoring settings**. Selecting **Continue** will link your new Application Insights resource to your app service, doing so will also **trigger a restart of your app service**. -- :::image type="content"source="./media/azure-web-apps/change-resource.png" alt-text="Screenshot of Change your resource dropdown."::: + > When you select **OK** to create a new resource, you're prompted to **Apply monitoring settings**. Selecting **Continue** links your new Application Insights resource to your app service. Your app service then restarts. -2. After specifying which resource to use, you can choose how you want Application Insights to collect data per platform for your application. ASP.NET Core offers **Recommended collection** or **Disabled**. + :::image type="content"source="./media/azure-web-apps/change-resource.png" alt-text="Screenshot that shows the Change your resource dropdown."::: - :::image type="content"source="./media/azure-web-apps-net-core/instrument-net-core.png" alt-text=" Screenshot of instrument your application section."::: +1. After you specify which resource to use, you can choose how you want Application Insights to collect data per platform for your application. ASP.NET Core collection options are **Recommended** or **Disabled**. + :::image type="content"source="./media/azure-web-apps-net-core/instrument-net-core.png" alt-text=" Screenshot that shows instrumenting your application section."::: ## Enable client-side monitoring -Client-side monitoring is **enabled by default** for ASP.NET Core apps with **Recommended collection**, regardless of whether the app setting 'APPINSIGHTS_JAVASCRIPT_ENABLED' is present. 
--If for some reason you would like to disable client-side monitoring: +Client-side monitoring is enabled by default for ASP.NET Core apps with **Recommended** collection, regardless of whether the app setting `APPINSIGHTS_JAVASCRIPT_ENABLED` is present. -* **Settings** **>** **Configuration** - * Under Application settings, create a **new application setting**: +If you want to disable client-side monitoring: - name: `APPINSIGHTS_JAVASCRIPT_ENABLED` +1. Select **Settings** > **Configuration**. +1. Under **Application settings**, create a **New application setting** with the following information: - Value: `false` -- * **Save** the settings and **Restart** your app. + - **Name**: `APPINSIGHTS_JAVASCRIPT_ENABLED` + - **Value**: `false` +1. **Save** the settings. Restart your app. ## Automate monitoring -To enable telemetry collection with Application Insights, only the Application settings need to be set: -+To enable telemetry collection with Application Insights, only the application settings must be set. ### Application settings definitions To enable telemetry collection with Application Insights, only the Application s |--|:|-:| |ApplicationInsightsAgent_EXTENSION_VERSION | Main extension, which controls runtime monitoring. | `~2` for Windows or `~3` for Linux | |XDT_MicrosoftApplicationInsights_Mode | In default mode, only essential features are enabled to ensure optimal performance. | `disabled` or `recommended`. |-|XDT_MicrosoftApplicationInsights_PreemptSdk | For ASP.NET Core apps only. Enables Interop (interoperation) with Application Insights SDK. Loads the extension side-by-side with the SDK and uses it to send telemetry (disables the Application Insights SDK). |`1`| -+|XDT_MicrosoftApplicationInsights_PreemptSdk | For ASP.NET Core apps only. Enables Interop (interoperation) with the Application Insights SDK. Loads the extension side by side with the SDK and uses it to send telemetry. (Disables the Application Insights SDK.) |`1`| [!INCLUDE [azure-web-apps-arm-automation](../../../includes/azure-monitor-app-insights-azure-web-apps-arm-automation.md)] +## Upgrade monitoring extension/agent - .NET -## Upgrade monitoring extension/agent - .NET +To upgrade the monitoring extension/agent, follow the steps in the next sections. ### Upgrade from versions 2.8.9 and up Upgrading from version 2.8.9 happens automatically, without any extra actions. T To check which version of the extension you're running, go to `https://yoursitename.scm.azurewebsites.net/ApplicationInsights`. ### Upgrade from versions 1.0.0 - 2.6.5 -Starting with version 2.8.9 the pre-installed site extension is used. If you're using an earlier version, you can update via one of two ways: --* [Upgrade by enabling via the portal](#enable-auto-instrumentation-monitoring). (Even if you have the Application Insights extension for Azure App Service installed, the UI shows only **Enable** button. Behind the scenes, the old private site extension will be removed.) +Starting with version 2.8.9, the preinstalled site extension is used. If you're using an earlier version, you can update via one of two ways: +* [Upgrade by enabling via the portal](#enable-auto-instrumentation-monitoring): Even if you have the Application Insights extension for App Service installed, the UI shows only the **Enable** button. Behind the scenes, the old private site extension will be removed. * [Upgrade through PowerShell](#enable-through-powershell): - 1. Set the application settings to enable the pre-installed site extension ApplicationInsightsAgent. 
See [Enable through PowerShell](#enable-through-powershell). - 2. Manually remove the private site extension named Application Insights extension for Azure App Service. + 1. Set the application settings to enable the preinstalled site extension `ApplicationInsightsAgent`. For more information, see [Enable through PowerShell](#enable-through-powershell). + 1. Manually remove the private site extension named **Application Insights extension for Azure App Service**. -If the upgrade is done from a version prior to 2.5.1, check that the ApplicationInsigths dlls are removed from the application bin folder [see troubleshooting steps](#troubleshooting). +If the upgrade is done from a version prior to 2.5.1, check that the `ApplicationInsights` DLLs are removed from the application bin folder. For more information, see [Troubleshooting steps](#troubleshooting). ## Troubleshooting > [!NOTE]-> When you create a web app with the `ASP.NET Core` runtimes in Azure App Services it deploys a single static HTML page as a starter website. It is **not** recommended to troubleshoot an issue with default template. Deploy an application before troubleshooting an issue. +> When you create a web app with the `ASP.NET Core` runtimes in App Service, it deploys a single static HTML page as a starter website. We *do not* recommend that you troubleshoot an issue with the default template. Deploy an application before you troubleshoot an issue. -Below is our step-by-step troubleshooting guide for extension/agent based monitoring for ASP.NET Core based applications running on Azure App Services. +What follows is our step-by-step troubleshooting guide for extension/agent-based monitoring for ASP.NET Core-based applications running on App Service. # [Windows](#tab/windows) -1. Check that `ApplicationInsightsAgent_EXTENSION_VERSION` app setting is set to a value of "~2". -2. Browse to `https://yoursitename.scm.azurewebsites.net/ApplicationInsights`. +1. Check that the `ApplicationInsightsAgent_EXTENSION_VERSION` app setting is set to a value of `~2`. +1. Browse to `https://yoursitename.scm.azurewebsites.net/ApplicationInsights`. - :::image type="content"source="./media/azure-web-apps/app-insights-sdk-status.png" alt-text="Screenshot of the link above results page."border ="false"::: + :::image type="content"source="./media/azure-web-apps/app-insights-sdk-status.png" alt-text="Screenshot that shows the link above the results page."border ="false"::: - - Confirm that the `Application Insights Extension Status` is `Pre-Installed Site Extension, version 2.8.x.xxxx, is running.` + - Confirm that **Application Insights Extension Status** is `Pre-Installed Site Extension, version 2.8.x.xxxx, is running.` - If it isn't running, follow the [enable Application Insights monitoring instructions](#enable-auto-instrumentation-monitoring). + If it isn't running, follow the instructions in the section [Enable Application Insights monitoring](#enable-auto-instrumentation-monitoring). - - Confirm that the status source exists and looks like: `Status source D:\home\LogFiles\ApplicationInsights\status\status_RD0003FF0317B6_4248_1.json` + - Confirm that the status source exists and looks like `Status source D:\home\LogFiles\ApplicationInsights\status\status_RD0003FF0317B6_4248_1.json`. - If a similar value isn't present, it means the application isn't currently running or isn't supported. 
To ensure that the application is running, try manually visiting the application url/application endpoints, which will allow the runtime information to become available. + If a similar value isn't present, it means the application isn't currently running or isn't supported. To ensure that the application is running, try manually visiting the application URL/application endpoints, which will allow the runtime information to become available. - - Confirm that `IKeyExists` is `true`. If it's `false`, add `APPINSIGHTS_INSTRUMENTATIONKEY` and `APPLICATIONINSIGHTS_CONNECTION_STRING` with your ikey GUID to your application settings. + - Confirm that **IKeyExists** is `True`. If it's `False`, add `APPINSIGHTS_INSTRUMENTATIONKEY` and `APPLICATIONINSIGHTS_CONNECTION_STRING` with your ikey GUID to your application settings. - - If your application refers to any Application Insights packages, enabling the App Service integration may not take effect and the data may not appear in Application Insights. An example would be if you've previously instrumented, or attempted to instrument, your app with the [ASP.NET Core SDK](./asp-net-core.md). To fix the issue, in portal turn on "Interop with Application Insights SDK" and you'll start seeing the data in Application Insights. - - + - If your application refers to any Application Insights packages, enabling the App Service integration might not take effect and the data might not appear in Application Insights. An example would be if you've previously instrumented, or attempted to instrument, your app with the [ASP.NET Core SDK](./asp-net-core.md). To fix the issue, in the portal, turn on **Interop with Application Insights SDK**. You'll start seeing the data in Application Insights. + > [!IMPORTANT]- > This functionality is in preview + > This functionality is in preview. - :::image type="content"source="./media/azure-web-apps-net-core/interop.png" alt-text=" Screenshot of interop setting enabled."::: + :::image type="content"source="./media/azure-web-apps-net-core/interop.png" alt-text=" Screenshot that shows the interop setting enabled."::: - The data is now going to be sent using codeless approach even if Application Insights SDK was originally used or attempted to be used. + The data will now be sent by using a codeless approach, even if the Application Insights SDK was originally used or attempted to be used. > [!IMPORTANT]- > If the application used Application Insights SDK to send any telemetry, such telemetry will be disabled ΓÇô in other words, custom telemetry - if any, such as for example any Track*() methods, and any custom settings, such as sampling, will be disabled. + > If the application used the Application Insights SDK to send any telemetry, the telemetry will be disabled. In other words, custom telemetry (for example, any `Track*()` methods) and custom settings (such as sampling) will be disabled. # [Linux](#tab/linux) -1. Check that `ApplicationInsightsAgent_EXTENSION_VERSION` app setting is set to a value of "~2" -1. Browse to https:// your site name .scm.azurewebsites.net/ApplicationInsights +1. Check that the `ApplicationInsightsAgent_EXTENSION_VERSION` app setting is set to a value of `~2`. +1. Browse to `https://your site name.scm.azurewebsites.net/ApplicationInsights`. 1. Within this site, confirm:- * The status source exists and looks like: `Status source /var/log/applicationinsights/status_abcde1234567_89_0.json` - * `Auto-Instrumentation enabled successfully`, is displayed. 
If a similar value isn't present, it means the application isn't running or isn't supported. To ensure that the application is running, try manually visiting the application url/application endpoints, which will allow the runtime information to become available. - * `IKeyExists` is `true`. If it's `false`, add `APPINSIGHTS_INSTRUMENTATIONKEY` and `APPLICATIONINSIGHTS_CONNECTION_STRING` with your ikey GUID to your application settings. + * The status source exists and looks like `Status source /var/log/applicationinsights/status_abcde1234567_89_0.json`. + * The value `Auto-Instrumentation enabled successfully` is displayed. If a similar value isn't present, it means the application isn't running or isn't supported. To ensure that the application is running, try manually visiting the application URL/application endpoints, which will allow the runtime information to become available. + * **IKeyExists** is `True`. If it's `False`, add `APPINSIGHTS_INSTRUMENTATIONKEY` and `APPLICATIONINSIGHTS_CONNECTION_STRING` with your ikey GUID to your application settings. + :::image type="content" source="media/azure-web-apps-net-core/auto-instrumentation-status.png" alt-text="Screenshot that shows the auto-instrumentation status webpage." lightbox="media/azure-web-apps-net-core/auto-instrumentation-status.png"::: - ### Default website deployed with web apps doesn't support automatic client-side monitoring -When you create a web app with the `ASP.NET Core` runtimes in Azure App Services, it deploys a single static HTML page as a starter website. The static webpage also loads an ASP.NET managed web part in IIS. This behavior allows for testing codeless server-side monitoring, but doesn't support automatic client-side monitoring. +When you create a web app with the ASP.NET Core runtimes in App Service, it deploys a single static HTML page as a starter website. The static webpage also loads an ASP.NET-managed web part in IIS. This behavior allows for testing codeless server-side monitoring but doesn't support automatic client-side monitoring. -If you wish to test out codeless server and client-side monitoring for ASP.NET Core in an Azure App Services web app, we recommend following the official guides for [creating a ASP.NET Core web app](../../app-service/quickstart-dotnetcore.md). Then use the instructions in the current article to enable monitoring. +If you want to test out codeless server and client-side monitoring for ASP.NET Core in an App Service web app, we recommend that you follow the official guides for [creating an ASP.NET Core web app](../../app-service/quickstart-dotnetcore.md). Then use the instructions in the current article to enable monitoring. [!INCLUDE [azure-web-apps-troubleshoot](../../../includes/azure-monitor-app-insights-azure-web-apps-troubleshoot.md)] If you wish to test out codeless server and client-side monitoring for ASP.NET C ### PHP and WordPress aren't supported -PHP and WordPress sites aren't supported. There's currently no officially supported SDK/agent for server-side monitoring of these workloads. However, manually instrumenting client-side transactions on a PHP or WordPress site by adding the client-side JavaScript to your web pages can be accomplished by using the [JavaScript SDK](./javascript.md). +PHP and WordPress sites aren't supported. There's currently no officially supported SDK/agent for server-side monitoring of these workloads. 
To manually instrument client-side transactions on a PHP or WordPress site by adding the client-side JavaScript to your webpages, use the [JavaScript SDK](./javascript.md). -The table below provides a more detailed explanation of what these values mean, their underlying causes, and recommended fixes: +The following table provides an explanation of what these values mean, their underlying causes, and recommended fixes. -|Problem Value |Explanation |Fix | +|Problem value |Explanation |Fix | |- |-||-| `AppAlreadyInstrumented:true` | This value indicates that the extension detected that some aspect of the SDK is already present in the Application, and will back-off. It can be due to a reference to `Microsoft.ApplicationInsights.AspNetCore`, or `Microsoft.ApplicationInsights` | Remove the references. Some of these references are added by default from certain Visual Studio templates, and older versions of Visual Studio reference `Microsoft.ApplicationInsights`. | -|`AppAlreadyInstrumented:true` | This value can also be caused by the presence of Microsoft.ApplicationsInsights dll in the app folder from a previous deployment. | Clean the app folder to ensure that these dlls are removed. Check both your local app's bin directory, and the wwwroot directory on the App Service. (To check the wwwroot directory of your App Service web app: Advanced Tools (Kudu) > Debug console > CMD > home\site\wwwroot). | -|`IKeyExists:false`|This value indicates that the instrumentation key isn't present in the AppSetting, `APPINSIGHTS_INSTRUMENTATIONKEY`. Possible causes: The values may have been accidentally removed, forgot to set the values in automation script, etc. | Make sure the setting is present in the App Service application settings. | +| `AppAlreadyInstrumented:true` | This value indicates that the extension detected that some aspect of the SDK is already present in the application and will back off. It can be because of a reference to `Microsoft.ApplicationInsights.AspNetCore` or `Microsoft.ApplicationInsights`. | Remove the references. Some of these references are added by default from certain Visual Studio templates. Older versions of Visual Studio reference `Microsoft.ApplicationInsights`. | +|`AppAlreadyInstrumented:true` | This value can also be caused by the presence of the `Microsoft.ApplicationInsights` DLL in the app folder from a previous deployment. | Clean the app folder to ensure that these DLLs are removed. Check both your local app's bin directory and the *wwwroot* directory on the App Service. (To check the wwwroot directory of your App Service web app, select **Advanced Tools (Kudu)** > **Debug console** > **CMD** > **home\site\wwwroot**). | +|`IKeyExists:false`|This value indicates that the instrumentation key isn't present in the app setting `APPINSIGHTS_INSTRUMENTATIONKEY`. Possible causes include accidentally removing the values or forgetting to set the values in an automation script. | Make sure the setting is present in the App Service application settings. | ## Release notes -For the latest updates and bug fixes, [consult the release notes](web-app-extension-release-notes.md). +For the latest updates and bug fixes, see the [Release notes](web-app-extension-release-notes.md). ## Next steps-* [Run the profiler on your live app](./profiler.md). ++* [Run the Profiler on your live app](./profiler.md). * [Monitor Azure Functions with Application Insights](monitor-functions.md). 
* [Enable Azure diagnostics](../agents/diagnostics-extension-to-application-insights.md) to be sent to Application Insights. * [Monitor service health metrics](../data-platform.md) to make sure your service is available and responsive. * [Receive alert notifications](../alerts/alerts-overview.md) whenever operational events happen or metrics cross a threshold.-* Use [Application Insights for JavaScript apps and web pages](javascript.md) to get client telemetry from the browsers that visit a web page. +* Use [Application Insights for JavaScript apps and webpages](javascript.md) to get client telemetry from the browsers that visit a webpage. * [Set up Availability web tests](monitor-web-app-availability.md) to be alerted if your site is down. |
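This row repeatedly points to manual (SDK-based) instrumentation as the fallback when auto-instrumentation isn't supported, such as for trimmed self-contained deployments. A rough sketch of that path in an ASP.NET Core 6+ app, assuming the `Microsoft.ApplicationInsights.AspNetCore` NuGet package and a placeholder connection string:

```csharp
using Microsoft.ApplicationInsights.AspNetCore.Extensions;

var builder = WebApplication.CreateBuilder(args);

// Manual (SDK-based) instrumentation, an alternative to the built-in
// App Service extension/agent when auto-instrumentation isn't supported.
builder.Services.AddApplicationInsightsTelemetry(new ApplicationInsightsServiceOptions
{
    // Placeholder; in practice this usually comes from configuration or the
    // APPLICATIONINSIGHTS_CONNECTION_STRING app setting.
    ConnectionString = "<your-connection-string>"
});

var app = builder.Build();
app.MapGet("/", () => "Hello from a manually instrumented app");
app.Run();
```

If you take this route, remember the interop caveat above: when the SDK is present, the codeless attach either backs off or must be switched to interop mode.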
azure-monitor | Data Model | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-model.md | Title: Azure Application Insights Telemetry Data Model | Microsoft Docs -description: Application Insights data model overview + Title: Application Insights telemetry data model | Microsoft Docs +description: This article presents an overview of the Application Insights telemetry data model. documentationcenter: .net -[Azure Application Insights](./app-insights-overview.md) sends telemetry from your web application to the Azure portal, so that you can analyze the performance and usage of your application. The telemetry model is standardized so that it is possible to create platform and language-independent monitoring. +[Application Insights](./app-insights-overview.md) sends telemetry from your web application to the Azure portal so that you can analyze the performance and usage of your application. The telemetry model is standardized, so it's possible to create platform and language-independent monitoring. -Data collected by Application Insights models this typical application execution pattern: +Data collected by Application Insights models this typical application execution pattern. - + -The following types of telemetry are used to monitor the execution of your app. The following three types are typically automatically collected by the Application Insights SDK from the web application framework: +The following types of telemetry are used to monitor the execution of your app. Three types are automatically collected by the Application Insights SDK from the web application framework: -* [**Request**](data-model-request-telemetry.md) - Generated to log a request received by your app. For example, the Application Insights web SDK automatically generates a Request telemetry item for each HTTP request that your web app receives. +* [Request](data-model-request-telemetry.md): Generated to log a request received by your app. For example, the Application Insights web SDK automatically generates a Request telemetry item for each HTTP request that your web app receives. - An **Operation** is the threads of execution that processes a request. You can also [write code](./api-custom-events-metrics.md#trackrequest) to monitor other types of operation, such as a "wake up" in a web job or function that periodically processes data. Each operation has an ID. This ID that can be used to [group](./correlation.md) all telemetry generated while your app is processing the request. Each operation either succeeds or fails, and has a duration of time. -* [**Exception**](data-model-exception-telemetry.md) - Typically represents an exception that causes an operation to fail. -* [**Dependency**](data-model-dependency-telemetry.md) - Represents a call from your app to an external service or storage such as a REST API or SQL. In ASP.NET, dependency calls to SQL are defined by `System.Data`. Calls to HTTP endpoints are defined by `System.Net`. + An *operation* is made up of the threads of execution that process a request. You can also [write code](./api-custom-events-metrics.md#trackrequest) to monitor other types of operation, such as a "wake up" in a web job or function that periodically processes data. Each operation has an ID. The ID can be used to [group](./correlation.md) all telemetry generated while your app is processing the request. Each operation either succeeds or fails and has a duration of time. 
+* [Exception](data-model-exception-telemetry.md): Typically represents an exception that causes an operation to fail. +* [Dependency](data-model-dependency-telemetry.md): Represents a call from your app to an external service or storage, such as a REST API or SQL. In ASP.NET, dependency calls to SQL are defined by `System.Data`. Calls to HTTP endpoints are defined by `System.Net`. -Application Insights provides three additional data types for custom telemetry: +Application Insights provides three data types for custom telemetry: -* [Trace](data-model-trace-telemetry.md) - used either directly, or through an adapter to implement diagnostics logging using an instrumentation framework that is familiar to you, such as `Log4Net` or `System.Diagnostics`. -* [Event](data-model-event-telemetry.md) - typically used to capture user interaction with your service, to analyze usage patterns. -* [Metric](data-model-metric-telemetry.md) - used to report periodic scalar measurements. +* [Trace](data-model-trace-telemetry.md): Used either directly or through an adapter to implement diagnostics logging by using an instrumentation framework that's familiar to you, such as `Log4Net` or `System.Diagnostics`. +* [Event](data-model-event-telemetry.md): Typically used to capture user interaction with your service to analyze usage patterns. +* [Metric](data-model-metric-telemetry.md): Used to report periodic scalar measurements. -Every telemetry item can define the [context information](data-model-context.md) like application version or user session id. Context is a set of strongly typed fields that unblocks certain scenarios. When application version is properly initialized, Application Insights can detect new patterns in application behavior correlated with redeployment. Session id can be used to calculate the outage or an issue impact on users. Calculating distinct count of session id values for certain failed dependency, error trace or critical exception gives a good understanding of an impact. +Every telemetry item can define the [context information](data-model-context.md) like application version or user session ID. Context is a set of strongly typed fields that unblocks certain scenarios. When application version is properly initialized, Application Insights can detect new patterns in application behavior correlated with redeployment. -Application Insights telemetry model defines a way to [correlate](./correlation.md) telemetry to the operation of which itΓÇÖs a part. For example, a request can make a SQL Database calls and recorded diagnostics info. You can set the correlation context for those telemetry items that tie it back to the request telemetry. +You can use session ID to calculate an outage or an issue impact on users. Calculating the distinct count of session ID values for a specific failed dependency, error trace, or critical exception gives you a good understanding of an impact. ++The Application Insights telemetry model defines a way to [correlate](./correlation.md) telemetry to the operation of which it's a part. For example, a request can make a SQL Database call and record diagnostics information. You can set the correlation context for those telemetry items that tie it back to the request telemetry. ## Schema improvements -Application Insights data model is a simple and basic yet powerful way to model your application telemetry. We strive to keep the model simple and slim to support essential scenarios and allow to extend the schema for advanced use. 
+The Application Insights data model is a basic yet powerful way to model your application telemetry. We strive to keep the model simple and slim to support essential scenarios and allow the schema to be extended for advanced use. -[To report data model or schema problems and suggestions use our GitHub repository](https://github.com/microsoft/ApplicationInsights-dotnet/issues/new/choose). +To report data model or schema problems and suggestions, use our [GitHub repository](https://github.com/microsoft/ApplicationInsights-dotnet/issues/new/choose). ## Next steps -- [Write custom telemetry](./api-custom-events-metrics.md)+- [Write custom telemetry](./api-custom-events-metrics.md). - Learn how to [extend and filter telemetry](./api-filtering-sampling.md).-- Use [sampling](./sampling.md) to minimize amount of telemetry based on data model.+- Use [sampling](./sampling.md) to minimize the amount of telemetry based on data model. - Check out [platforms](./platforms.md) supported by Application Insights.- |
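To make the three custom telemetry types and the context fields concrete, here's a small sketch using the .NET `TelemetryClient`; the connection string, version, and session ID are placeholder values:

```csharp
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.Extensibility;

var configuration = TelemetryConfiguration.CreateDefault();
configuration.ConnectionString = "<your-connection-string>"; // placeholder

var client = new TelemetryClient(configuration);

// Context fields enable the correlation scenarios described above.
client.Context.Component.Version = "1.2.3";   // application version (example value)
client.Context.Session.Id = "session-42";     // user session ID (example value)

// The three custom telemetry types:
client.TrackTrace("Order pipeline started");          // Trace
client.TrackEvent("OrderSubmitted");                  // Event
client.GetMetric("OrderQueueLength").TrackValue(17);  // Metric

client.Flush(); // make sure buffered telemetry is sent before shutdown
```

Request, dependency, and exception telemetry have `Track*()` counterparts too, but in most web apps those are collected automatically by the SDK.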
azure-monitor | Eventcounters | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/eventcounters.md | To get a list of well-known counters published by the .NET Runtime, see [Availab ## Customizing counters to be collected -The following example shows how to add/remove counters. This customization would be done in the `ConfigureServices` method of your application after Application Insights telemetry collection is enabled using either `AddApplicationInsightsTelemetry()` or `AddApplicationInsightsWorkerService()`. The following is example code from an ASP.NET Core application. For other types of applications, see [this](worker-service.md#configuring-or-removing-default-telemetrymodules) document. +The following example shows how to add/remove counters. This customization would be done in the `ConfigureServices` method of your application after Application Insights telemetry collection is enabled using either `AddApplicationInsightsTelemetry()` or `AddApplicationInsightsWorkerService()`. The following is example code from an ASP.NET Core application. For other types of applications, see [this](worker-service.md#configure-or-remove-default-telemetry-modules) document. ```csharp using Microsoft.ApplicationInsights.Extensibility.EventCounterCollector; |
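The code block in this row is truncated in the digest. A fuller sketch of the customization it describes might look like the following; the specific counters (`System.Runtime`/`gen-0-gc-count` and `Microsoft.AspNetCore.Hosting`/`requests-per-second`) are examples of well-known .NET counters, not a required configuration:

```csharp
using Microsoft.ApplicationInsights.Extensibility.EventCounterCollector;
using Microsoft.Extensions.DependencyInjection;

public void ConfigureServices(IServiceCollection services)
{
    services.AddApplicationInsightsTelemetry();

    services.ConfigureTelemetryModule<EventCounterCollectionModule>((module, options) =>
    {
        // Start from a clean slate, then opt in to specific counters.
        module.Counters.Clear();
        module.Counters.Add(
            new EventCounterCollectionRequest("System.Runtime", "gen-0-gc-count"));
        module.Counters.Add(
            new EventCounterCollectionRequest("Microsoft.AspNetCore.Hosting", "requests-per-second"));
    });
}
```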
azure-monitor | Export Telemetry | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/export-telemetry.md | Title: Continuous export of telemetry from Application Insights | Microsoft Docs -description: Export diagnostic and usage data to storage in Microsoft Azure, and download it from there. +description: Export diagnostic and usage data to storage in Azure and download it from there. Last updated 10/24/2022 -Want to keep your telemetry for longer than the standard retention period? Or process it in some specialized way? Continuous Export is ideal for this purpose. The events you see in the Application Insights portal can be exported to storage in Microsoft Azure in JSON format. From there, you can download your data and write whatever code you need to process it. +Do you want to keep your telemetry for longer than the standard retention period? Or do you want to process it in some specialized way? Continuous export is ideal for this purpose. The events you see in the Application Insights portal can be exported to storage in Azure in JSON format. From there, you can download your data and write whatever code you need to process it. > [!IMPORTANT] > * On February 29, 2024, continuous export will be deprecated as part of the classic Application Insights deprecation.-> * When [migrating to a workspace-based Application Insights resource](convert-classic-resource.md), you must use [diagnostic settings](#diagnostic-settings-based-export) for exporting telemetry. All [workspace-based Application Insights resources](./create-workspace-resource.md) must use [diagnostic settings](./create-workspace-resource.md#export-telemetry). -> * Diagnostic settings export may increase costs. ([more information](export-telemetry.md#diagnostic-settings-based-export)) +> * When you [migrate to a workspace-based Application Insights resource](convert-classic-resource.md), you must use [diagnostic settings](#diagnostic-settings-based-export) for exporting telemetry. All [workspace-based Application Insights resources](./create-workspace-resource.md) must use [diagnostic settings](./create-workspace-resource.md#export-telemetry). +> * Diagnostic settings export might increase costs. For more information, see [Diagnostic settings-based export](export-telemetry.md#diagnostic-settings-based-export). Before you set up continuous export, there are some alternatives you might want to consider: -* The Export button at the top of a metrics or search tab lets you transfer tables and charts to an Excel spreadsheet. +* The **Export** button at the top of a metrics or search tab lets you transfer tables and charts to an Excel spreadsheet. +* [Log Analytics](../logs/log-query-overview.md) provides a powerful query language for telemetry. It can also export results. +* If you're looking to [explore your data in Power BI](../logs/log-powerbi.md), you can do that without using continuous export if you've [migrated to a workspace-based resource](convert-classic-resource.md). +* The [Data Access REST API](https://dev.applicationinsights.io/) lets you access your telemetry programmatically. +* You can also access setup for [continuous export via PowerShell](/powershell/module/az.applicationinsights/new-azapplicationinsightscontinuousexport). -* [Analytics](../logs/log-query-overview.md) provides a powerful query language for telemetry. It can also export results. 
-* If you're looking to [explore your data in Power BI](../logs/log-powerbi.md), you can do that without using Continuous Export if you've [migrated to a workspace-based resource](convert-classic-resource.md). -* The [Data access REST API](https://dev.applicationinsights.io/) lets you access your telemetry programmatically. -* You can also access setup [continuous export via PowerShell](/powershell/module/az.applicationinsights/new-azapplicationinsightscontinuousexport). +After continuous export copies your data to storage, where it can stay as long as you like, it's still available in Application Insights for the usual [retention period](./data-retention-privacy.md). -After continuous export copies your data to storage, where it may stay as long as you like, it's still available in Application Insights for the usual [retention period](./data-retention-privacy.md). +## Supported regions -## Supported Regions --Continuous Export is supported in the following regions: +Continuous export is supported in the following regions: * Southeast Asia * Canada Central Continuous Export is supported in the following regions: * Japan West > [!NOTE]-> Continuous Export will continue to work for Applications in **East US** and **West Europe** if the export was configured before February 23, 2021. New Continuous Export rules cannot be configured on any application in **East US** or **West Europe**, regardless of when the application was created. --## Continuous Export advanced storage configuration +> Continuous export will continue to work for applications in East US and West Europe if the export was configured before February 23, 2021. New continuous export rules can't be configured on any application in East US or West Europe, no matter when the application was created. -Continuous Export **does not support** the following Azure storage features/configurations: +## Continuous export advanced storage configuration -* Use of [VNET/Azure Storage firewalls](../../storage/common/storage-network-security.md) with Azure Blob storage. +Continuous export *doesn't support* the following Azure Storage features or configurations: +* Use of [Azure Virtual Network/Azure Storage firewalls](../../storage/common/storage-network-security.md) with Azure Blob Storage. * [Azure Data Lake Storage Gen2](../../storage/blobs/data-lake-storage-introduction.md). -## <a name="setup"></a> Create a Continuous Export +## <a name="setup"></a> Create a continuous export > [!NOTE]-> An application cannot export more than 3TB of data per day. If more than 3TB per day is exported, the export will be disabled. To export without a limit use [diagnostic settings based export](#diagnostic-settings-based-export). +> An application can't export more than 3 TB of data per day. If more than 3 TB per day is exported, the export will be disabled. To export without a limit, use [diagnostic settings-based export](#diagnostic-settings-based-export). -1. In the Application Insights resource for your app under configure on the left, open Continuous Export and choose **Add**: +1. In the Application Insights resource for your app under **Configure** on the left, open **Continuous export** and select **Add**. -2. Choose the telemetry data types you want to export. +1. Choose the telemetry data types you want to export. -3. Create or select an [Azure storage account](../../storage/common/storage-introduction.md) where you want to store the data. 
For more information on storage pricing options, visit the [official pricing page](https://azure.microsoft.com/pricing/details/storage/). +1. Create or select an [Azure Storage account](../../storage/common/storage-introduction.md) where you want to store the data. For more information on storage pricing options, see the [Pricing page](https://azure.microsoft.com/pricing/details/storage/). - Select Add, Export Destination, Storage account, and then either create a new store or choose an existing store. + Select **Add** > **Export destination** > **Storage account**. Then either create a new store or choose an existing store. > [!Warning]- > By default, the storage location will be set to the same geographical region as your Application Insights resource. If you store in a different region, you may incur transfer charges. + > By default, the storage location will be set to the same geographical region as your Application Insights resource. If you store in a different region, you might incur transfer charges. -4. Create or select a container in the storage. +1. Create or select a container in the storage. > [!NOTE]-> Once you've created your export, newly ingested data will begin to flow to Azure Blob storage. Continuous export will only transmit new telemetry that is created/ingested after continuous export was enabled. Any data that existed prior to enabling continuous export will not be exported, and there is no supported way to retroactively export previously created data using continuous export. +> After you've created your export, newly ingested data will begin to flow to Azure Blob Storage. Continuous export only transmits new telemetry that's created or ingested after continuous export was enabled. Any data that existed prior to enabling continuous export won't be exported. There's no supported way to retroactively export previously created data by using continuous export. There can be a delay of about an hour before data appears in the storage. -Once the first export is complete, you'll find the following structure in your Azure Blob storage container: (This structure will vary depending on the data you're collecting.) +After the first export is finished, you'll find the following structure in your Blob Storage container. (This structure varies depending on the data you're collecting.) |Name | Description | |:-|:| | [Availability](export-data-model.md#availability) | Reports [availability web tests](./monitor-web-app-availability.md). |-| [Event](export-data-model.md#events) | Custom events generated by [TrackEvent()](./api-custom-events-metrics.md#trackevent). +| [Event](export-data-model.md#events) | Custom events generated by [TrackEvent()](./api-custom-events-metrics.md#trackevent). | [Exceptions](export-data-model.md#exceptions) |Reports [exceptions](./asp-net-exceptions.md) in the server and in the browser. | [Messages](export-data-model.md#trace-messages) | Sent by [TrackTrace](./api-custom-events-metrics.md#tracktrace), and by the [logging adapters](./asp-net-trace-logs.md). | [Metrics](export-data-model.md#metrics) | Generated by metric API calls. | [PerformanceCounters](export-data-model.md) | Performance Counters collected by Application Insights. | [Requests](export-data-model.md#requests)| Sent by [TrackRequest](./api-custom-events-metrics.md#trackrequest). The standard modules use requests to report server response time, measured at the server.| -### To edit continuous export +### Edit continuous export -Select continuous export and select the storage account to edit. 
+Select **Continuous export** and select the storage account to edit. -### To stop continuous export +### Stop continuous export -To stop the export, select Disable. When you select Enable again, the export will restart with new data. You won't get the data that arrived in the portal while export was disabled. +To stop the export, select **Disable**. When you select **Enable** again, the export restarts with new data. You won't get the data that arrived in the portal while export was disabled. To stop the export permanently, delete it. Doing so doesn't delete your data from storage. ### Can't add or change an export?-* To add or change exports, you need Owner, Contributor, or Application Insights Contributor access rights. [Learn about roles][roles]. ++To add or change exports, you need Owner, Contributor, or Application Insights Contributor access rights. [Learn about roles][roles]. ## <a name="analyze"></a> What events do you get? The exported data is the raw telemetry we receive from your application with added location data from the client IP address. Other calculated metrics aren't included. For example, we don't export average C The data also includes the results of any [availability web tests](./monitor-web-app-availability.md) that you have set up. > [!NOTE]-> **Sampling.** If your application sends a lot of data, the sampling feature may operate and send only a fraction of the generated telemetry. [Learn more about sampling.](./sampling.md) -> +> If your application sends a lot of data, the sampling feature might operate and send only a fraction of the generated telemetry. [Learn more about sampling.](./sampling.md) > ## <a name="get"></a> Inspect the data-You can inspect the storage directly in the portal. Select home in the leftmost menu, at the top where it says "Azure services" select **Storage accounts**, select the storage account name, on the overview page select **Blobs** under services, and finally select the container name. +You can inspect the storage directly in the portal. Select **Home** on the leftmost menu. At the top where it says **Azure services**, select **Storage accounts**. Select the storage account name, and on the **Overview** page select **Services** > **Blobs**. Finally, select the container name. -To inspect Azure storage in Visual Studio, open **View**, **Cloud Explorer**. (If you don't have that menu command, you need to install the Azure SDK: Open the **New Project** dialog, expand Visual C#/Cloud and choose **Get Microsoft Azure SDK for .NET**.) +To inspect Azure Storage in Visual Studio, select **View** > **Cloud Explorer**. If you don't have that menu command, you need to install the Azure SDK. Open the **New Project** dialog, expand **Visual C#/Cloud**, and select **Get Microsoft Azure SDK for .NET**. -When you open your blob store, you'll see a container with a set of blob files. The URI of each file derived from your Application Insights resource name, its instrumentation key, telemetry-type/date/time. (The resource name is all lowercase, and the instrumentation key omits dashes.) +When you open your blob store, you'll see a container with a set of blob files. You'll see the URI of each file derived from your Application Insights resource name, its instrumentation key, and telemetry type, date, and time. The resource name is all lowercase, and the instrumentation key omits dashes. 
- + [!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)] -The date and time are UTC and are when the telemetry was deposited in the store - not the time it was generated. So if you write code to download the data, it can move linearly through the data. +The date and time are UTC and are when the telemetry was deposited in the store, not the time it was generated. For this reason, if you write code to download the data, it can move linearly through the data. Here's the form of the path: Here's the form of the path: $"{applicationName}_{instrumentationKey}/{type}/{blobDeliveryTimeUtc:yyyy-MM-dd}/{ blobDeliveryTimeUtc:HH}/{blobId}_{blobCreationTimeUtc:yyyyMMdd_HHmmss}.blob" ``` -Where +Where: -* `blobCreationTimeUtc` is the time when blob was created in the internal staging storage -* `blobDeliveryTimeUtc` is the time when blob is copied to the export destination storage +* `blobCreationTimeUtc` is the time when the blob was created in the internal staging storage. +* `blobDeliveryTimeUtc` is the time when the blob is copied to the export destination storage. ## <a name="format"></a> Data format-* Each blob is a text file that contains multiple '\n'-separated rows. It contains the telemetry processed over a time period of roughly half a minute. -* Each row represents a telemetry data point such as a request or page view. -* Each row is an unformatted JSON document. If you want to view the rows, open the blob in Visual Studio and choose **Edit** > **Advanced** > **Format File**: -  +The data is formatted so that: ++* Each blob is a text file that contains multiple `\n`-separated rows. It contains the telemetry processed over a time period of roughly half a minute. +* Each row represents a telemetry data point, such as a request or page view. +* Each row is an unformatted JSON document. If you want to view the rows, open the blob in Visual Studio and select **Edit** > **Advanced** > **Format File**. ++  Time durations are in ticks, where 10 000 ticks = 1 ms. For example, these values show a time of 1 ms to send a request from the browser, 3 ms to receive it, and 1.8 s to process the page in the browser: Time durations are in ticks, where 10 000 ticks = 1 ms. For example, these value "clientProcess": {"value": 17970000.0} ``` -[Detailed data model reference for the property types and values.](export-data-model.md) +For a detailed data model reference for the property types and values, see [Application Insights export data model](export-data-model.md). -## Processing the data -On a small scale, you can write some code to pull apart your data, read it into a spreadsheet, and so on. For example: +## Process the data +On a small scale, you can write some code to pull apart your data and read it into a spreadsheet. For example: ```csharp private IEnumerable<T> DeserializeMany<T>(string folderName) private IEnumerable<T> DeserializeMany<T>(string folderName) } ``` -For a larger code sample, see [using a worker role][exportasa]. +For a larger code sample, see [Using a worker role][exportasa]. ## <a name="delete"></a>Delete your old data-You're responsible for managing your storage capacity and deleting the old data if necessary. +You're responsible for managing your storage capacity and deleting old data, if necessary. -## If you regenerate your storage key... +## Regenerate your storage key If you change the key to your storage, continuous export will stop working. You'll see a notification in your Azure account. 
-Open the Continuous Export tab and edit your export. Edit the Export Destination, but just leave the same storage selected. Select OK to confirm. +Select the **Continuous Export** tab and edit your export. Edit the **Export Destination** value, but leave the same storage selected. Select **OK** to confirm. -The continuous export will restart. +Continuous export will restart. ## Export samples +For export samples, see: + * [Export to SQL using Stream Analytics][exportasa] * [Stream Analytics sample 2](../../stream-analytics/app-insights-export-stream-analytics.md) -On larger scales, consider [HDInsight](https://azure.microsoft.com/services/hdinsight/) - Hadoop clusters in the cloud. HDInsight provides various technologies for managing and analyzing big data. You can use it to process data that has been exported from Application Insights. +On larger scales, consider [HDInsight](https://azure.microsoft.com/services/hdinsight/) Hadoop clusters in the cloud. HDInsight provides various technologies for managing and analyzing big data. You can use it to process data that's been exported from Application Insights. ## Q & A-* *But all I want is a one-time download of a chart.* - Yes, you can do that. At the top of the tab, select **Export Data**. -* *I set up an export, but there's no data in my store.* +This section provides answers to common questions. ++### Can I get a one-time download of a chart? ++You can do that. At the top of the tab, select **Export Data**. ++### I set up an export, but why is there no data in my store? - Did Application Insights receive any telemetry from your app since you set up the export? You'll only receive new data. -* *I tried to set up an export, but was denied access* +Did Application Insights receive any telemetry from your app since you set up the export? You'll only receive new data. ### I tried to set up an export, but why was I denied access? - If the account is owned by your organization, you have to be a member of the owners or contributors groups. -* *Can I export straight to my own on-premises store?* +If the account is owned by your organization, you have to be a member of the Owners or Contributors groups. - No, sorry. Our export engine currently only works with Azure storage at this time. -* *Is there any limit to the amount of data you put in my store?* ### Can I export straight to my own on-premises store? - No. We'll keep pushing data in until you delete the export. We'll stop if we hit the outer limits for blob storage, but that's huge. It's up to you to control how much storage you use. -* *How many blobs should I see in the storage?* +No. Our export engine currently works only with Azure Storage. - * For every data type you selected to export, a new blob is created every minute (if data is available). - * In addition, for applications with high traffic, extra partition units are allocated. In this case, each unit creates a blob every minute. -* *I regenerated the key to my storage or changed the name of the container, and now the export doesn't work.* ### Is there any limit to the amount of data you put in my store? - Edit the export and open the export destination tab. Leave the same storage selected as before, and select OK to confirm. Export will restart. If the change was within the past few days, you won't lose data. -* *Can I pause the export?* +No. We'll keep pushing data in until you delete the export. We'll stop if we hit the outer limits for Blob Storage, but that limit is huge. 
It's up to you to control how much storage you use. ++### How many blobs should I see in the storage? ++ * For every data type you selected to export, a new blob is created every minute, if data is available. + * For applications with high traffic, extra partition units are allocated. In this case, each unit creates a blob every minute. ++### I regenerated the key to my storage, or changed the name of the container, but why doesn't the export work? ++Edit the export and select the **Export destination** tab. Leave the same storage selected as before, and select **OK** to confirm. Export will restart. If the change was within the past few days, you won't lose data. ++### Can I pause the export? ++Yes. Select **Disable**. ## Code samples * [Stream Analytics sample](../../stream-analytics/app-insights-export-stream-analytics.md)-* [Export to SQL using Stream Analytics][exportasa] -* [Detailed data model reference for the property types and values.](export-data-model.md) +* [Export to SQL by using Stream Analytics][exportasa] +* [Detailed data model reference for property types and values](export-data-model.md) -## Diagnostic settings based export +## Diagnostic settings-based export -Diagnostic settings export is preferred because it provides extra features. +Diagnostic settings export is preferred because it provides extra features: > [!div class="checklist"]- > * Azure storage accounts with virtual networks, firewalls, and private links - > * Export to Event Hubs + > * Azure Storage accounts with virtual networks, firewalls, and private links. + > * Export to Azure Event Hubs. Diagnostic settings export further differs from continuous export in the following ways: * Updated schema. * Telemetry data is sent as it arrives instead of in batched uploads.+ > [!IMPORTANT]- > Additional costs may be incurred due to an increase in calls to the destination, such as a storage account. + > Extra costs might be incurred because of an increase in calls to the destination, such as a storage account. To migrate to diagnostic settings export: 1. Disable current continuous export.-2. [Migrate application to workspace-based](convert-classic-resource.md). -3. [Enable diagnostic settings export](create-workspace-resource.md#export-telemetry). Select **Diagnostic settings > add diagnostic setting** from within your Application Insights resource. +1. [Migrate application to workspace based](convert-classic-resource.md). +1. [Enable diagnostic settings export](create-workspace-resource.md#export-telemetry). Select **Diagnostic settings** > **Add diagnostic setting** from within your Application Insights resource. > [!CAUTION]-> If you want to store diagnostic logs in a Log Analytics workspace, there are two things to consider to avoid seeing duplicate data in Application Insights: +> If you want to store diagnostic logs in a Log Analytics workspace, there are two points to consider to avoid seeing duplicate data in Application Insights: +> > * The destination can't be the same Log Analytics workspace that your Application Insights resource is based on.-> * The Application Insights user can't have access to both workspaces. This can be done by setting the Log Analytics [Access control mode](../logs/log-analytics-workspace-overview.md#permissions) to **Requires workspace permissions** and ensuring through [Azure role-based access control (Azure RBAC)](./resources-roles-access-control.md) that the user only has access to the Log Analytics workspace the Application Insights resource is based on. 
-> -> These steps are necessary because Application Insights accesses telemetry across Application Insight resources (including Log Analytics workspaces) to provide complete end-to-end transaction operations and accurate application maps. Because diagnostic logs use the same table names, duplicate telemetry can be displayed if the user has access to multiple resources containing the same data. +> * The Application Insights user can't have access to both workspaces. Set the Log Analytics [access control mode](../logs/log-analytics-workspace-overview.md#permissions) to **Requires workspace permissions**. Through [Azure role-based access control](./resources-roles-access-control.md), ensure the user only has access to the Log Analytics workspace the Application Insights resource is based on. +> +> These steps are necessary because Application Insights accesses telemetry across Application Insights resources, including Log Analytics workspaces, to provide complete end-to-end transaction operations and accurate application maps. Because diagnostic logs use the same table names, duplicate telemetry can be displayed if the user has access to multiple resources that contain the same data. <!--Link references--> [exportasa]: ../../stream-analytics/app-insights-export-sql-stream-analytics.md-[roles]: ./resources-roles-access-control.md +[roles]: ./resources-roles-access-control.md |
azure-monitor | Ip Collection | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/ip-collection.md | Title: Azure Application Insights IP address collection | Microsoft Docs + Title: Application Insights IP address collection | Microsoft Docs description: Understand how Application Insights handles IP addresses and geolocation. Last updated 09/23/2020 This article explains how geolocation lookup and IP address handling work in App ## Default behavior -By default, IP addresses are temporarily collected but not stored in Application Insights. The basic process is as follows: +By default, IP addresses are temporarily collected but not stored in Application Insights. This process follows some basic steps. -When telemetry is sent to Azure, Application Insights uses the IP address to do a geolocation lookup. Application Insights uses the results of this lookup to populate the fields `client_City`, `client_StateOrProvince`, and `client_CountryOrRegion`. The address is then discarded, and `0.0.0.0` is written to the `client_IP` field. +When telemetry is sent to Azure, Application Insights uses the IP address to do a geolocation lookup. Application Insights uses the results of this lookup to populate the fields `client_City`, `client_StateOrProvince`, and `client_CountryOrRegion`. The address is then discarded, and `0.0.0.0` is written to the `client_IP` field. -Geolocation data can be removed in the following ways. +To remove geolocation data, see the following articles: * [Remove the client IP initializer](../app/configuration-with-applicationinsights-config.md) * [Use a custom initializer](../app/api-filtering-sampling.md) The telemetry types are: -* Browser telemetry: Application Insights collects the sender's IP address. The ingestion endpoint calculates the IP address. -* Server telemetry: The Application Insights telemetry module temporarily collects the client IP address. The IP address isn't collected locally when the `X-Forwarded-For` header is set. When the incoming list of IP address has more than one item, the last IP address is used to populate geolocation fields. +* **Browser telemetry**: Application Insights collects the sender's IP address. The ingestion endpoint calculates the IP address. +* **Server telemetry**: The Application Insights telemetry module temporarily collects the client IP address. The IP address isn't collected locally when the `X-Forwarded-For` header is set. When the incoming IP address list has more than one item, the last IP address is used to populate geolocation fields. -This behavior is by design to help avoid unnecessary collection of personal data and ip address location information. Whenever possible, we recommend avoiding the collection of personal data. +This behavior is by design to help avoid unnecessary collection of personal data and IP address location information. Whenever possible, we recommend avoiding the collection of personal data. > [!NOTE]-> Although the default is to not collect IP addresses, you can override this behavior. We recommend verifying that the collection doesn't break any compliance requirements or local regulations. +> Although the default is to not collect IP addresses, you can override this behavior. We recommend verifying that the collection doesn't break any compliance requirements or local regulations. >-> To learn more about handling personal data in Application Insights, consult the [guidance for personal data](../logs/personal-data-mgmt.md). 
+> To learn more about handling personal data in Application Insights, see [Guidance for personal data](../logs/personal-data-mgmt.md). -While not collecting ip addresses will also not collect city and other geolocation attributes are populated by our pipeline by using the IP address, you can also mask IP collection at the source. This can be done by either removing the client IP initializer [Configuration with Applications Insights Configuration](configuration-with-applicationinsights-config.md), or providing your own custom initializer. For more information, see [API Filtering example.](api-filtering-sampling.md). +When IP addresses aren't collected, the city and other geolocation attributes that our pipeline populates from the IP address also aren't collected. You can mask IP collection at the source. There are two ways to do it. You can: +* Remove the client IP initializer. For more information, see [Configuration with Application Insights Configuration](configuration-with-applicationinsights-config.md). +* Provide your own custom initializer. For more information, see an [API filtering example](api-filtering-sampling.md). ## Storage of IP address data -To enable IP collection and storage, the `DisableIpMasking` property of the Application Insights component must be set to `true`. You can set this property through Azure Resource Manager templates or by calling the REST API. +To enable IP collection and storage, the `DisableIpMasking` property of the Application Insights component must be set to `true`. You can set this property through Azure Resource Manager templates (ARM templates) or by calling the REST API. -### Azure Resource Manager template +### ARM template ```json { To enable IP collection and storage, the `DisableIpMasking` property of the Appl ### Portal -If you only need to modify the behavior for a single Application Insights resource, use the Azure portal. +If you need to modify the behavior for only a single Application Insights resource, use the Azure portal. -1. Go your Application Insights resource, and then select **Automation** > **Export Template**. +1. Go to your Application Insights resource, and then select **Automation** > **Export template**. -2. Select **Deploy**. +1. Select **Deploy**. -  +  -3. Select **Edit Template**. +1. Select **Edit template**. -  +  > [!NOTE]- > If you experience the following error (as shown in the screenshot), you can resolve it: "The resource group is in a location that is not supported by one or more resources in the template. Please choose a different resource group." Temporarily select a different resource group from the dropdown list and then re-select your original resource group. + > If you experience the error shown in the preceding screenshot, you can resolve it. It states: "The resource group is in a location that is not supported by one or more resources in the template. Please choose a different resource group." Temporarily select a different resource group from the dropdown list and then re-select your original resource group. -4. In the JSON template locate `properties` inside `resources`, add a comma to the last JSON field, and then add the following new line: `"DisableIpMasking": true`. Then select **Save**. +1. In the JSON template, locate `properties` inside `resources`. Add a comma to the last JSON field, and then add the following new line: `"DisableIpMasking": true`. Then select **Save**.  -5. Select **Review + create** > **Create**. +1. Select **Review + create** > **Create**. 
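For reference, after the preceding steps the component resource in the exported template might look like this minimal sketch (the resource name, API version, and other property values are placeholders; `DisableIpMasking` is the only setting being added):

```json
{
  "type": "microsoft.insights/components",
  "apiVersion": "2020-02-02",
  "name": "[parameters('components_contoso_name')]",
  "location": "[resourceGroup().location]",
  "properties": {
    "Application_Type": "web",
    "RetentionInDays": 90,
    "DisableIpMasking": true
  }
}
```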
> [!NOTE] > If you see "Your deployment failed," look through your deployment details for the one with the type `microsoft.insights/components` and check the status. If that one succeeds, the changes made to `DisableIpMasking` were deployed. -6. After the deployment is complete, new telemetry data will be recorded. +1. After the deployment is complete, new telemetry data will be recorded. If you select and edit the template again, you'll see only the default template without the newly added property. If you aren't seeing IP address data and want to confirm that `"DisableIpMasking": true` is set, run the following PowerShell commands: If you only need to modify the behavior for a single Application Insights resour $AppInsights.Properties ``` - A list of properties is returned as a result. One of the properties should read `DisableIpMasking: true`. If you run the PowerShell commands before deploying the new property with Azure Resource Manager, the property won't exist. + A list of properties is returned as a result. One of the properties should read `DisableIpMasking: true`. If you run the PowerShell commands before you deploy the new property with Azure Resource Manager, the property won't exist. ### REST API -The [REST API](/rest/api/azure/) payload to make the same modifications is as follows: +The following [REST API](/rest/api/azure/) payload makes the same modifications: ``` PATCH https://management.azure.com/subscriptions/<sub-id>/resourceGroups/<rg-name>/providers/microsoft.insights/components/<resource-name>?api-version=2018-05-01-preview HTTP/1.1 Content-Length: 54 ## Telemetry initializer -If you need a more flexible alternative than `DisableIpMasking`, you can use a [telemetry initializer](./api-filtering-sampling.md#addmodify-properties-itelemetryinitializer) to copy all or part of the IP address to a custom field. +If you need a more flexible alternative than `DisableIpMasking`, you can use a [telemetry initializer](./api-filtering-sampling.md#addmodify-properties-itelemetryinitializer) to copy all or part of the IP address to a custom field. # [.NET](#tab/net) You can create your telemetry initializer the same way for ASP.NET Core as for A services.AddSingleton<ITelemetryInitializer, CloneIPAddress>(); } ```+ # [Node.js](#tab/nodejs) ### Node.js appInsights.defaultClient.addTelemetryProcessor((envelope) => { ### Client-side JavaScript -Unlike the server-side SDKs, the client-side JavaScript SDK doesn't calculate an IP address. By default, IP address calculation for client-side telemetry occurs at the ingestion endpoint in Azure. --If you want to calculate the IP address directly on the client side, you need to add your own custom logic and use the result to set the `ai.location.ip` tag. When `ai.location.ip` is set, the ingestion endpoint doesn't perform IP address calculation, and the provided IP address is used for the geolocation lookup. In this scenario, the IP address is still zeroed out by default. +Unlike the server-side SDKs, the client-side JavaScript SDK doesn't calculate an IP address. By default, IP address calculation for client-side telemetry occurs at the ingestion endpoint in Azure. -To keep the entire IP address calculated from your custom logic, you could use a telemetry initializer that would copy the IP address data that you provided in `ai.location.ip` to a separate custom field. But again, unlike the server-side SDKs, the client-side SDK won't calculate the address for you if it can't rely on third-party libraries or your own custom logic. 
+If you want to calculate the IP address directly on the client side, you need to add your own custom logic and use the result to set the `ai.location.ip` tag. When `ai.location.ip` is set, the ingestion endpoint doesn't perform IP address calculation, and the provided IP address is used for the geolocation lookup. In this scenario, the IP address is still zeroed out by default. +To keep the entire IP address calculated from your custom logic, you could use a telemetry initializer that would copy the IP address data that you provided in `ai.location.ip` to a separate custom field. But again, unlike the server-side SDKs, the client-side SDK won't calculate the address for you if it can't rely on third-party libraries or your own custom logic. ```javascript appInsights.addTelemetryInitializer((item) => { appInsights.addTelemetryInitializer((item) => { ``` -If client-side data traverses a proxy before forwarding to the ingestion endpoint, IP address calculation might show the IP address of the proxy and not the client. +If client-side data traverses a proxy before forwarding to the ingestion endpoint, IP address calculation might show the IP address of the proxy and not the client. requests | project appName, operation_Name, url, resultCode, client_IP, customDimensions.["client-ip"] ``` -Newly collected IP addresses will appear in the `customDimensions_client-ip` column. The default `client-ip` column will still have all four octets zeroed out. +Newly collected IP addresses will appear in the `customDimensions_client-ip` column. The default `client-ip` column will still have all four octets zeroed out. If you're testing from localhost, and the value for `customDimensions_client-ip` is `::1`, this value is expected behavior. The `::1` value represents the loopback address in IPv6. It's equivalent to `127.0.0.1` in IPv4. ## Next steps * Learn more about [personal data collection](../logs/personal-data-mgmt.md) in Application Insights.--* Learn more about how [IP address collection](https://apmtips.com/posts/2016-07-05-client-ip-address/) in Application Insights works. This article an older external blog post written by one of our engineers. It predates the current default behavior where the IP address is recorded as `0.0.0.0`, but it goes into greater depth on the mechanics of the built-in telemetry initializer. +* Learn more about how [IP address collection](https://apmtips.com/posts/2016-07-05-client-ip-address/) works in Application Insights. This article is an older external blog post written by one of our engineers. It predates the current default behavior where the IP address is recorded as `0.0.0.0`. The article goes into greater depth on the mechanics of the built-in telemetry initializer. |
azure-monitor | Live Stream | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/live-stream.md | Next, add the following line before the call `services.AddApplicationInsightsTelemetryWorkerService()`:

```csharp
services.ConfigureTelemetryModule<QuickPulseTelemetryModule> ((module, o) => module.AuthenticationApiKey = "YOUR-API-KEY-HERE");
```

-More information on configuring WorkerService applications can be found in our guidance on [configuring telemetry modules in WorkerServices](./worker-service.md#configuring-or-removing-default-telemetrymodules). +More information on configuring WorkerService applications can be found in our guidance on [configuring telemetry modules in WorkerServices](./worker-service.md#configure-or-remove-default-telemetry-modules). #### Azure Function Apps |
azure-monitor | Sampling | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sampling.md | Metric counts such as request rate and exception rate are adjusted to compensate > [!NOTE] > This section applies to ASP.NET applications, not to ASP.NET Core applications. [Learn about configuring adaptive sampling for ASP.NET Core applications later in this document.](#configuring-adaptive-sampling-for-aspnet-core-applications) -> With ASP.NET Core and with Microsoft.ApplicationInsights.AspNetCore >= 2.15.0 you can configure AppInsights options via appsettings.json - In [`ApplicationInsights.config`](./configuration-with-applicationinsights-config.md), you can adjust several parameters in the `AdaptiveSamplingTelemetryProcessor` node. The figures shown are the default values: * `<MaxTelemetryItemsPerSecond>5</MaxTelemetryItemsPerSecond>` builder.UseAdaptiveSampling(maxTelemetryItemsPerSecond:5, excludedTypes: "Depend ### Configuring adaptive sampling for ASP.NET Core applications -There's no `ApplicationInsights.config` for ASP.NET Core applications, so all configuration is done via code. +ASP.NET Core applications can be configured in code or through the `appsettings.json` file. For more information, see [Configuration in ASP.NET Core](/aspnet/core/fundamentals/configuration). + Adaptive sampling is enabled by default for all ASP.NET Core applications. You can disable or customize the sampling behavior. #### Turning off adaptive sampling |
azure-monitor | Worker Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/worker-service.md | -[Application Insights SDK for Worker Service](https://www.nuget.org/packages/Microsoft.ApplicationInsights.WorkerService) is a new SDK, which is best suited for non-HTTP workloads like messaging, background tasks, console applications etc. These types of applications don't have the notion of an incoming HTTP request like a traditional ASP.NET/ASP.NET Core Web Application, and hence using Application Insights packages for [ASP.NET](asp-net.md) or [ASP.NET Core](asp-net-core.md) applications isn't supported. +[Application Insights SDK for Worker Service](https://www.nuget.org/packages/Microsoft.ApplicationInsights.WorkerService) is a new SDK, which is best suited for non-HTTP workloads like messaging, background tasks, and console applications. These types of applications don't have the notion of an incoming HTTP request like a traditional ASP.NET/ASP.NET Core web application. For this reason, using Application Insights packages for [ASP.NET](asp-net.md) or [ASP.NET Core](asp-net-core.md) applications isn't supported. -The new SDK doesn't do any telemetry collection by itself. Instead, it brings in other well known Application Insights auto collectors like [DependencyCollector](https://www.nuget.org/packages/Microsoft.ApplicationInsights.DependencyCollector/), [PerfCounterCollector](https://www.nuget.org/packages/Microsoft.ApplicationInsights.PerfCounterCollector/), [ApplicationInsightsLoggingProvider](https://www.nuget.org/packages/Microsoft.Extensions.Logging.ApplicationInsights) etc. This SDK exposes extension methods on `IServiceCollection` to enable and configure telemetry collection. +The new SDK doesn't do any telemetry collection by itself. Instead, it brings in other well-known Application Insights auto collectors like [DependencyCollector](https://www.nuget.org/packages/Microsoft.ApplicationInsights.DependencyCollector/), [PerfCounterCollector](https://www.nuget.org/packages/Microsoft.ApplicationInsights.PerfCounterCollector/), and [ApplicationInsightsLoggingProvider](https://www.nuget.org/packages/Microsoft.Extensions.Logging.ApplicationInsights). This SDK exposes extension methods on `IServiceCollection` to enable and configure telemetry collection. ## Supported scenarios -The [Application Insights SDK for Worker Service](https://www.nuget.org/packages/Microsoft.ApplicationInsights.WorkerService) is best suited for non-HTTP applications no matter where or how they run. If your application is running and has network connectivity to Azure, telemetry can be collected. Application Insights monitoring is supported everywhere .NET Core is supported. This package can be used in the newly introduced [.NET Core Worker Service](https://devblogs.microsoft.com/aspnet/dotnet-core-workers-in-azure-container-instances), [background tasks in ASP.NET Core](/aspnet/core/fundamentals/host/hosted-services), Console apps (.NET Core/ .NET Framework), etc. +The [Application Insights SDK for Worker Service](https://www.nuget.org/packages/Microsoft.ApplicationInsights.WorkerService) is best suited for non-HTTP applications no matter where or how they run. If your application is running and has network connectivity to Azure, telemetry can be collected. Application Insights monitoring is supported everywhere .NET Core is supported. 
This package can be used in the newly introduced [.NET Core Worker Service](https://devblogs.microsoft.com/aspnet/dotnet-core-workers-in-azure-container-instances), [background tasks in ASP.NET Core](/aspnet/core/fundamentals/host/hosted-services), and console apps like .NET Core and .NET Framework. ## Prerequisites -A valid Application Insights connection string. This string is required to send any telemetry to Application Insights. If you need to create a new Application Insights resource to get a connection string, see [Create an Application Insights resource](./create-new-resource.md). +You must have a valid Application Insights connection string. This string is required to send any telemetry to Application Insights. If you need to create a new Application Insights resource to get a connection string, see [Create an Application Insights resource](./create-new-resource.md). [!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)] -## Using Application Insights SDK for Worker Services +## Use Application Insights SDK for Worker Service 1. Install the [Microsoft.ApplicationInsights.WorkerService](https://www.nuget.org/packages/Microsoft.ApplicationInsights.WorkerService) package to the application.- The following snippet shows the changes that need to be added to your project's `.csproj` file. --```xml - <ItemGroup> - <PackageReference Include="Microsoft.ApplicationInsights.WorkerService" Version="2.13.1" /> - </ItemGroup> -``` + The following snippet shows the changes that must be added to your project's `.csproj` file: + + ```xml + <ItemGroup> + <PackageReference Include="Microsoft.ApplicationInsights.WorkerService" Version="2.13.1" /> + </ItemGroup> + ``` -1. Configure the connection string in the `APPLICATIONINSIGHTS_CONNECTION_STRING` environment variable or in configuration. (`appsettings.json`) +1. Configure the connection string in the `APPLICATIONINSIGHTS_CONNECTION_STRING` environment variable or in configuration (`appsettings.json`). + :::image type="content" source="media/migrate-from-instrumentation-keys-to-connection-strings/migrate-from-instrumentation-keys-to-connection-strings.png" alt-text="Screenshot displaying Application Insights overview and connection string." lightbox="media/migrate-from-instrumentation-keys-to-connection-strings/migrate-from-instrumentation-keys-to-connection-strings.png"::: -1. Retrieve an `ILogger` instance or `TelemetryClient` instance from the Dependency Injection (DI) container by calling `serviceProvider.GetRequiredService<TelemetryClient>();` or using Constructor Injection. This step will trigger setting up of `TelemetryConfiguration` and auto collection modules. +1. Retrieve an `ILogger` instance or `TelemetryClient` instance from the Dependency Injection (DI) container by calling `serviceProvider.GetRequiredService<TelemetryClient>();` or by using Constructor Injection. This step will trigger setting up of `TelemetryConfiguration` and auto-collection modules. Specific instructions for each type of application are described in the following sections. -## .NET Core LTS worker service application --Full example is shared [here](https://github.com/microsoft/ApplicationInsights-dotnet/tree/develop/examples/WorkerService) --1. Download and install .NET Core [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core). -2. Create a new Worker Service project either by using Visual Studio new project template or command line `dotnet new worker` -3. 
Install the [Microsoft.ApplicationInsights.WorkerService](https://www.nuget.org/packages/Microsoft.ApplicationInsights.WorkerService) package to the application. +## .NET Core LTS Worker Service application -4. Add `services.AddApplicationInsightsTelemetryWorkerService();` to the `CreateHostBuilder()` method in your `Program.cs` class, as in this example: +The full example is shared at the [NuGet website](https://github.com/microsoft/ApplicationInsights-dotnet/tree/develop/examples/WorkerService). -```csharp - public static IHostBuilder CreateHostBuilder(string[] args) => - Host.CreateDefaultBuilder(args) - .ConfigureServices((hostContext, services) => - { - services.AddHostedService<Worker>(); - services.AddApplicationInsightsTelemetryWorkerService(); - }); -``` --5. Modify your `Worker.cs` as per below example. +1. Download and install .NET Core [Long Term Support (LTS)](https://dotnet.microsoft.com/platform/support/policy/dotnet-core). +1. Create a new Worker Service project either by using a Visual Studio new project template or the command line `dotnet new worker`. +1. Install the [Microsoft.ApplicationInsights.WorkerService](https://www.nuget.org/packages/Microsoft.ApplicationInsights.WorkerService) package to the application. -```csharp - using Microsoft.ApplicationInsights; - using Microsoft.ApplicationInsights.DataContracts; +1. Add `services.AddApplicationInsightsTelemetryWorkerService();` to the `CreateHostBuilder()` method in your `Program.cs` class, as in this example: - public class Worker : BackgroundService - { - private readonly ILogger<Worker> _logger; - private TelemetryClient _telemetryClient; - private static HttpClient _httpClient = new HttpClient(); + ```csharp + public static IHostBuilder CreateHostBuilder(string[] args) => + Host.CreateDefaultBuilder(args) + .ConfigureServices((hostContext, services) => + { + services.AddHostedService<Worker>(); + services.AddApplicationInsightsTelemetryWorkerService(); + }); + ``` - public Worker(ILogger<Worker> logger, TelemetryClient tc) - { - _logger = logger; - _telemetryClient = tc; - } +1. Modify your `Worker.cs` as per the following example: - protected override async Task ExecuteAsync(CancellationToken stoppingToken) + ```csharp + using Microsoft.ApplicationInsights; + using Microsoft.ApplicationInsights.DataContracts; + + public class Worker : BackgroundService {- while (!stoppingToken.IsCancellationRequested) + private readonly ILogger<Worker> _logger; + private TelemetryClient _telemetryClient; + private static HttpClient _httpClient = new HttpClient(); + + public Worker(ILogger<Worker> logger, TelemetryClient tc) {- _logger.LogInformation("Worker running at: {time}", DateTimeOffset.Now); -- using (_telemetryClient.StartOperation<RequestTelemetry>("operation")) + _logger = logger; + _telemetryClient = tc; + } + + protected override async Task ExecuteAsync(CancellationToken stoppingToken) + { + while (!stoppingToken.IsCancellationRequested) {- _logger.LogWarning("A sample warning message. By default, logs with severity Warning or higher is captured by Application Insights"); - _logger.LogInformation("Calling bing.com"); - var res = await _httpClient.GetAsync("https://bing.com"); - _logger.LogInformation("Calling bing completed with status:" + res.StatusCode); - _telemetryClient.TrackEvent("Bing call event completed"); + _logger.LogInformation("Worker running at: {time}", DateTimeOffset.Now); + + using (_telemetryClient.StartOperation<RequestTelemetry>("operation")) + { + _logger.LogWarning("A sample warning message. 
By default, logs with severity Warning or higher is captured by Application Insights"); + _logger.LogInformation("Calling bing.com"); + var res = await _httpClient.GetAsync("https://bing.com"); + _logger.LogInformation("Calling bing completed with status:" + res.StatusCode); + _telemetryClient.TrackEvent("Bing call event completed"); + } + + await Task.Delay(1000, stoppingToken); }-- await Task.Delay(1000, stoppingToken); } }- } -``` + ``` -6. Set up the connection string. +1. Set up the connection string. + :::image type="content" source="media/migrate-from-instrumentation-keys-to-connection-strings/migrate-from-instrumentation-keys-to-connection-strings.png" alt-text="Screenshot that shows Application Insights overview and connection string." lightbox="media/migrate-from-instrumentation-keys-to-connection-strings/migrate-from-instrumentation-keys-to-connection-strings.png"::: -> [!NOTE] -> We recommend that you specify the connection string in configuration. The following code sample shows how to specify a connection string in `appsettings.json`. Make sure `appsettings.json` is copied to the application root folder during publishing. + > [!NOTE] + > We recommend that you specify the connection string in configuration. The following code sample shows how to specify a connection string in `appsettings.json`. Make sure `appsettings.json` is copied to the application root folder during publishing. -```json - { - "ApplicationInsights": + ```json {- "ConnectionString" : "InstrumentationKey=00000000-0000-0000-0000-000000000000;" - }, - "Logging": - { - "LogLevel": + "ApplicationInsights": + { + "ConnectionString" : "InstrumentationKey=00000000-0000-0000-0000-000000000000;" + }, + "Logging": {- "Default": "Warning" + "LogLevel": + { + "Default": "Warning" + } } }- } -``` + ``` Alternatively, specify the connection string in the `APPLICATIONINSIGHTS_CONNECTION_STRING` environment variable. -Typically `APPLICATIONINSIGHTS_CONNECTION_STRING` specifies the connection string for applications deployed to Web Apps as Web Jobs. +Typically, `APPLICATIONINSIGHTS_CONNECTION_STRING` specifies the connection string for applications deployed to web apps as web jobs. > [!NOTE] > A connection string specified in code takes precedence over the environment variable `APPLICATIONINSIGHTS_CONNECTION_STRING`, which takes precedence over other options. ## ASP.NET Core background tasks with hosted services -[This](/aspnet/core/fundamentals/host/hosted-services?tabs=visual-studio) document describes how to create backgrounds tasks in ASP.NET Core application. +[This document](/aspnet/core/fundamentals/host/hosted-services?tabs=visual-studio) describes how to create background tasks in an ASP.NET Core application. -Full example is shared [here](https://github.com/microsoft/ApplicationInsights-dotnet/tree/develop/examples/BackgroundTasksWithHostedService) +The full example is shared at this [GitHub page](https://github.com/microsoft/ApplicationInsights-dotnet/tree/develop/examples/BackgroundTasksWithHostedService). 1. Install the [Microsoft.ApplicationInsights.WorkerService](https://www.nuget.org/packages/Microsoft.ApplicationInsights.WorkerService) package to the application.-2. 
Add `services.AddApplicationInsightsTelemetryWorkerService();` to the `ConfigureServices()` method, as in this example: --```csharp - public static async Task Main(string[] args) - { - var host = new HostBuilder() - .ConfigureAppConfiguration((hostContext, config) => - { - config.AddJsonFile("appsettings.json", optional: true); - }) - .ConfigureServices((hostContext, services) => - { - services.AddLogging(); - services.AddHostedService<TimedHostedService>(); -- // connection string is read automatically from appsettings.json - services.AddApplicationInsightsTelemetryWorkerService(); - }) - .UseConsoleLifetime() - .Build(); +1. Add `services.AddApplicationInsightsTelemetryWorkerService();` to the `ConfigureServices()` method, as in this example: - using (host) + ```csharp + public static async Task Main(string[] args) {- // Start the host - await host.StartAsync(); -- // Wait for the host to shutdown - await host.WaitForShutdownAsync(); - } - } -``` --Following is the code for `TimedHostedService` where the background task logic resides. --```csharp - using Microsoft.ApplicationInsights; - using Microsoft.ApplicationInsights.DataContracts; -- public class TimedHostedService : IHostedService, IDisposable - { - private readonly ILogger _logger; - private Timer _timer; - private TelemetryClient _telemetryClient; - private static HttpClient httpClient = new HttpClient(); -- public TimedHostedService(ILogger<TimedHostedService> logger, TelemetryClient tc) - { - _logger = logger; - this._telemetryClient = tc; + var host = new HostBuilder() + .ConfigureAppConfiguration((hostContext, config) => + { + config.AddJsonFile("appsettings.json", optional: true); + }) + .ConfigureServices((hostContext, services) => + { + services.AddLogging(); + services.AddHostedService<TimedHostedService>(); + + // connection string is read automatically from appsettings.json + services.AddApplicationInsightsTelemetryWorkerService(); + }) + .UseConsoleLifetime() + .Build(); + + using (host) + { + // Start the host + await host.StartAsync(); + + // Wait for the host to shutdown + await host.WaitForShutdownAsync(); + } }+ ``` - public Task StartAsync(CancellationToken cancellationToken) - { - _logger.LogInformation("Timed Background Service is starting."); -- _timer = new Timer(DoWork, null, TimeSpan.Zero, - TimeSpan.FromSeconds(1)); + The following code is for `TimedHostedService`, where the background task logic resides: - return Task.CompletedTask; - } -- private void DoWork(object state) + ```csharp + using Microsoft.ApplicationInsights; + using Microsoft.ApplicationInsights.DataContracts; + + public class TimedHostedService : IHostedService, IDisposable {- _logger.LogInformation("Worker running at: {time}", DateTimeOffset.Now); -- using (_telemetryClient.StartOperation<RequestTelemetry>("operation")) + private readonly ILogger _logger; + private Timer _timer; + private TelemetryClient _telemetryClient; + private static HttpClient httpClient = new HttpClient(); + + public TimedHostedService(ILogger<TimedHostedService> logger, TelemetryClient tc) + { + _logger = logger; + this._telemetryClient = tc; + } + + public Task StartAsync(CancellationToken cancellationToken) + { + _logger.LogInformation("Timed Background Service is starting."); + + _timer = new Timer(DoWork, null, TimeSpan.Zero, + TimeSpan.FromSeconds(1)); + + return Task.CompletedTask; + } + + private void DoWork(object state) {- _logger.LogWarning("A sample warning message. 
By default, logs with severity Warning or higher is captured by Application Insights"); - _logger.LogInformation("Calling bing.com"); - var res = httpClient.GetAsync("https://bing.com").GetAwaiter().GetResult(); - _logger.LogInformation("Calling bing completed with status:" + res.StatusCode); - _telemetryClient.TrackEvent("Bing call event completed"); + _logger.LogInformation("Worker running at: {time}", DateTimeOffset.Now); + + using (_telemetryClient.StartOperation<RequestTelemetry>("operation")) + { + _logger.LogWarning("A sample warning message. By default, logs with severity Warning or higher is captured by Application Insights"); + _logger.LogInformation("Calling bing.com"); + var res = httpClient.GetAsync("https://bing.com").GetAwaiter().GetResult(); + _logger.LogInformation("Calling bing completed with status:" + res.StatusCode); + _telemetryClient.TrackEvent("Bing call event completed"); + } } }- } -``` + ``` -3. Set up the connection string. - Use the same `appsettings.json` from the .NET Core [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) Worker Service example above. +1. Set up the connection string. + Use the same `appsettings.json` from the preceding .NET Core [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) Worker Service example. -## .NET Core/.NET Framework Console application +## .NET Core/.NET Framework console application -As mentioned in the beginning of this article, the new package can be used to enable Application Insights Telemetry from even a regular console application. This package targets [`NetStandard2.0`](/dotnet/standard/net-standard), and hence can be used for console apps in .NET Core [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) or higher, and .NET Framework [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) or higher. +As mentioned in the beginning of this article, the new package can be used to enable Application Insights telemetry from even a regular console application. This package targets [`NetStandard2.0`](/dotnet/standard/net-standard), so it can be used for console apps in .NET Core [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) or higher, and .NET Framework [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) or higher. -Full example is shared [here](https://github.com/microsoft/ApplicationInsights-dotnet/tree/develop/examples/ConsoleApp) +The full example is shared at this [GitHub page](https://github.com/microsoft/ApplicationInsights-dotnet/tree/develop/examples/ConsoleApp). 1. Install the [Microsoft.ApplicationInsights.WorkerService](https://www.nuget.org/packages/Microsoft.ApplicationInsights.WorkerService) package to the application. -2. Modify Program.cs as below example. +1. 
Modify *Program.cs* as shown in the following example: -```csharp - using Microsoft.ApplicationInsights; - using Microsoft.ApplicationInsights.DataContracts; - using Microsoft.Extensions.DependencyInjection; - using Microsoft.Extensions.Logging; - using System; - using System.Net.Http; - using System.Threading.Tasks; -- namespace WorkerSDKOnConsole - { - class Program + ```csharp + using Microsoft.ApplicationInsights; + using Microsoft.ApplicationInsights.DataContracts; + using Microsoft.Extensions.DependencyInjection; + using Microsoft.Extensions.Logging; + using System; + using System.Net.Http; + using System.Threading.Tasks; + + namespace WorkerSDKOnConsole {- static async Task Main(string[] args) + class Program {- // Create the DI container. - IServiceCollection services = new ServiceCollection(); -- // Being a regular console app, there is no appsettings.json or configuration providers enabled by default. - // Hence instrumentation key/ connection string and any changes to default logging level must be specified here. - services.AddLogging(loggingBuilder => loggingBuilder.AddFilter<Microsoft.Extensions.Logging.ApplicationInsights.ApplicationInsightsLoggerProvider>("Category", LogLevel.Information)); - services.AddApplicationInsightsTelemetryWorkerService("instrumentation key here"); -- // To pass a connection string - // - aiserviceoptions must be created - // - set connectionstring on it - // - pass it to AddApplicationInsightsTelemetryWorkerService() -- // Build ServiceProvider. - IServiceProvider serviceProvider = services.BuildServiceProvider(); -- // Obtain logger instance from DI. - ILogger<Program> logger = serviceProvider.GetRequiredService<ILogger<Program>>(); -- // Obtain TelemetryClient instance from DI, for additional manual tracking or to flush. - var telemetryClient = serviceProvider.GetRequiredService<TelemetryClient>(); -- var httpClient = new HttpClient(); -- while (true) // This app runs indefinitely. replace with actual application termination logic. + static async Task Main(string[] args) {- logger.LogInformation("Worker running at: {time}", DateTimeOffset.Now); -- // Replace with a name which makes sense for this operation. - using (telemetryClient.StartOperation<RequestTelemetry>("operation")) + // Create the DI container. + IServiceCollection services = new ServiceCollection(); + + // Being a regular console app, there is no appsettings.json or configuration providers enabled by default. + // Hence instrumentation key/ connection string and any changes to default logging level must be specified here. + services.AddLogging(loggingBuilder => loggingBuilder.AddFilter<Microsoft.Extensions.Logging.ApplicationInsights.ApplicationInsightsLoggerProvider>("Category", LogLevel.Information)); + services.AddApplicationInsightsTelemetryWorkerService("instrumentation key here"); + + // To pass a connection string + // - aiserviceoptions must be created + // - set connectionstring on it + // - pass it to AddApplicationInsightsTelemetryWorkerService() + + // Build ServiceProvider. + IServiceProvider serviceProvider = services.BuildServiceProvider(); + + // Obtain logger instance from DI. + ILogger<Program> logger = serviceProvider.GetRequiredService<ILogger<Program>>(); + + // Obtain TelemetryClient instance from DI, for additional manual tracking or to flush. + var telemetryClient = serviceProvider.GetRequiredService<TelemetryClient>(); + + var httpClient = new HttpClient(); + + while (true) // This app runs indefinitely. Replace with actual application termination logic. 
{- logger.LogWarning("A sample warning message. By default, logs with severity Warning or higher is captured by Application Insights"); - logger.LogInformation("Calling bing.com"); - var res = await httpClient.GetAsync("https://bing.com"); - logger.LogInformation("Calling bing completed with status:" + res.StatusCode); - telemetryClient.TrackEvent("Bing call event completed"); + logger.LogInformation("Worker running at: {time}", DateTimeOffset.Now); + + // Replace with a name which makes sense for this operation. + using (telemetryClient.StartOperation<RequestTelemetry>("operation")) + { + logger.LogWarning("A sample warning message. By default, logs with severity Warning or higher is captured by Application Insights"); + logger.LogInformation("Calling bing.com"); + var res = await httpClient.GetAsync("https://bing.com"); + logger.LogInformation("Calling bing completed with status:" + res.StatusCode); + telemetryClient.TrackEvent("Bing call event completed"); + } + + await Task.Delay(1000); }-- await Task.Delay(1000); + + // Explicitly call Flush() followed by sleep is required in console apps. + // This is to ensure that even if application terminates, telemetry is sent to the back-end. + telemetryClient.Flush(); + Task.Delay(5000).Wait(); }-- // Explicitly call Flush() followed by sleep is required in Console Apps. - // This is to ensure that even if application terminates, telemetry is sent to the back-end. - telemetryClient.Flush(); - Task.Delay(5000).Wait(); } }- } -``` + ``` -This console application also uses the same default `TelemetryConfiguration`, and it can be customized in the same way as the examples in earlier section. +This console application also uses the same default `TelemetryConfiguration`. It can be customized in the same way as the examples in earlier sections. ## Run your application -Run your application. The example workers from all of the above makes an http call every second to bing.com, and also emits few logs using `ILogger`. These lines are wrapped inside `StartOperation` call of `TelemetryClient`, which is used to create an operation (in this example `RequestTelemetry` named "operation"). Application Insights will collect these ILogger logs (warning or above by default) and dependencies, and they'll be correlated to the `RequestTelemetry` with parent-child relationship. The correlation also works cross process/network boundary. For example, if the call was made to another monitored component, then it will be correlated to this parent as well. +Run your application. The workers from all the preceding examples make an HTTP call every second to bing.com and also emit few logs by using `ILogger`. These lines are wrapped inside the `StartOperation` call of `TelemetryClient`, which is used to create an operation. In this example, `RequestTelemetry` is named "operation." ++Application Insights collects these ILogger logs, with a severity of Warning or above by default, and dependencies. They're correlated to `RequestTelemetry` with a parent-child relationship. Correlation also works across process/network boundaries. For example, if the call was made to another monitored component, it's correlated to this parent as well. -This custom operation of `RequestTelemetry` can be thought of as the equivalent of an incoming web request in a typical Web Application. 
While it isn't necessary to use an Operation, it fits best with the [Application Insights correlation data model](./correlation.md) - with `RequestTelemetry` acting as the parent operation, and every telemetry generated inside the worker iteration being treated as logically belonging to the same operation. This approach also ensures all the telemetry generated (automatic and manual) will have the same `operation_id`. As sampling is based on `operation_id`, sampling algorithm either keeps or drops all of the telemetry from a single iteration. +This custom operation of `RequestTelemetry` can be thought of as the equivalent of an incoming web request in a typical web application. It isn't necessary to use an operation, but it fits best with the [Application Insights correlation data model](./correlation.md). `RequestTelemetry` acts as the parent operation and every telemetry generated inside the worker iteration is treated as logically belonging to the same operation. -The following lists the full telemetry automatically collected by Application Insights. +This approach also ensures all the telemetry generated, both automatic and manual, will have the same `operation_id`. Because sampling is based on `operation_id`, the sampling algorithm either keeps or drops all the telemetry from a single iteration. ++The following sections list the full telemetry automatically collected by Application Insights. ### Live Metrics -[Live Metrics](./live-stream.md) can be used to quickly verify if Application Insights monitoring is configured correctly. While it might take a few minutes before telemetry starts appearing in the portal and analytics, Live Metrics would show CPU usage of the running process in near real-time. It can also show other telemetry like Requests, Dependencies, Traces etc. +[Live Metrics](./live-stream.md) can be used to quickly verify if Application Insights monitoring is configured correctly. Although it might take a few minutes before telemetry appears in the portal and analytics, Live Metrics shows CPU usage of the running process in near real time. It can also show other telemetry like Requests, Dependencies, and Traces. ### ILogger logs -Logs emitted via `ILogger` of severity `Warning` or greater are automatically captured. Follow [ILogger docs](ilogger.md#logging-level) to customize which log levels are captured by Application Insights. +Logs emitted via `ILogger` with the severity Warning or greater are automatically captured. Follow [ILogger docs](ilogger.md#logging-level) to customize which log levels are captured by Application Insights. ### Dependencies -Dependency collection is enabled by default. [This](asp-net-dependencies.md#automatically-tracked-dependencies) article explains the dependencies that are automatically collected, and also contain steps to do manual tracking. +Dependency collection is enabled by default. The article [Dependency tracking in Application Insights](asp-net-dependencies.md#automatically-tracked-dependencies) explains the dependencies that are automatically collected and also contains steps to do manual tracking. ### EventCounter -`EventCounterCollectionModule` is enabled by default, and it will collect a default set of counters from .NET Core [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) apps. The [EventCounter](eventcounters.md) tutorial lists the default set of counters collected. It also has instructions on customizing the list. 
+`EventCounterCollectionModule` is enabled by default, and it will collect a default set of counters from .NET Core [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) apps. The [EventCounter](eventcounters.md) tutorial lists the default set of counters collected. It also has instructions on how to customize the list. -### Manually tracking other telemetry +### Manually track other telemetry -While the SDK automatically collects telemetry as explained above, in most cases user will need to send other telemetry to Application Insights service. The recommended way to track other telemetry is by obtaining an instance of `TelemetryClient` from Dependency Injection, and then calling one of the supported `TrackXXX()` [API](api-custom-events-metrics.md) methods on it. Another typical use case is [custom tracking of operations](custom-operations-tracking.md). This approach is demonstrated in the Worker examples above. +Although the SDK automatically collects telemetry as explained, in most cases, you'll need to send other telemetry to Application Insights. The recommended way to track other telemetry is by obtaining an instance of `TelemetryClient` from Dependency Injection and then calling one of the supported `TrackXXX()` [API](api-custom-events-metrics.md) methods on it. Another typical use case is [custom tracking of operations](custom-operations-tracking.md). This approach is demonstrated in the preceding worker examples. ## Configure the Application Insights SDK -The default `TelemetryConfiguration` used by the worker service SDK is similar to the automatic configuration used in a ASP.NET or ASP.NET Core application, minus the TelemetryInitializers used to enrich telemetry from `HttpContext`. +The default `TelemetryConfiguration` used by the Worker Service SDK is similar to the automatic configuration used in an ASP.NET or ASP.NET Core application, minus the telemetry initializers used to enrich telemetry from `HttpContext`. -You can customize the Application Insights SDK for Worker Service to change the default configuration. Users of the Application Insights ASP.NET Core SDK might be familiar with changing configuration by using ASP.NET Core built-in [dependency injection](/aspnet/core/fundamentals/dependency-injection). The WorkerService SDK is also based on similar principles. Make almost all configuration changes in the `ConfigureServices()` section by calling appropriate methods on `IServiceCollection`, as detailed below. +You can customize the Application Insights SDK for Worker Service to change the default configuration. Users of the Application Insights ASP.NET Core SDK might be familiar with changing configuration by using ASP.NET Core built-in [dependency injection](/aspnet/core/fundamentals/dependency-injection). The Worker Service SDK is also based on similar principles. Make almost all configuration changes in the `ConfigureServices()` section by calling appropriate methods on `IServiceCollection`, as detailed in the next section. > [!NOTE]-> While using this SDK, changing configuration by modifying `TelemetryConfiguration.Active` isn't supported, and changes will not be reflected. +> When you use this SDK, changing configuration by modifying `TelemetryConfiguration.Active` isn't supported and changes won't be reflected. 
-### Using ApplicationInsightsServiceOptions +### Use ApplicationInsightsServiceOptions You can modify a few common settings by passing `ApplicationInsightsServiceOptions` to `AddApplicationInsightsTelemetryWorkerService`, as in this example: public void ConfigureServices(IServiceCollection services) The `ApplicationInsightsServiceOptions` in this SDK is in the namespace `Microsoft.ApplicationInsights.WorkerService` as opposed to `Microsoft.ApplicationInsights.AspNetCore.Extensions` in the ASP.NET Core SDK. -Commonly used settings in `ApplicationInsightsServiceOptions` +The following table lists commonly used settings in `ApplicationInsightsServiceOptions`. |Setting | Description | Default ||-|--|EnableQuickPulseMetricStream | Enable/Disable LiveMetrics feature | true -|EnableAdaptiveSampling | Enable/Disable Adaptive Sampling | true -|EnableHeartbeat | Enable/Disable Heartbeats feature, which periodically (15-min default) sends a custom metric named 'HeartBeatState' with information about the runtime like .NET Version, Azure Environment information, if applicable, etc. | true -|AddAutoCollectedMetricExtractor | Enable/Disable AutoCollectedMetrics extractor, which is a TelemetryProcessor that sends pre-aggregated metrics about Requests/Dependencies before sampling takes place. | true -|EnableDiagnosticsTelemetryModule | Enable/Disable `DiagnosticsTelemetryModule`. Disabling this setting will cause the following settings to be ignored; `EnableHeartbeat`, `EnableAzureInstanceMetadataTelemetryModule`, `EnableAppServicesHeartbeatTelemetryModule` | true +|EnableQuickPulseMetricStream | Enable/Disable the Live Metrics feature. | True +|EnableAdaptiveSampling | Enable/Disable Adaptive Sampling. | True +|EnableHeartbeat | Enable/Disable the Heartbeats feature, which periodically (15-min default) sends a custom metric named "HeartBeatState" with information about the runtime like .NET version and Azure environment, if applicable. | True +|AddAutoCollectedMetricExtractor | Enable/Disable the AutoCollectedMetrics extractor, which is a telemetry processor that sends pre-aggregated metrics about Requests/Dependencies before sampling takes place. | True +|EnableDiagnosticsTelemetryModule | Enable/Disable `DiagnosticsTelemetryModule`. Disabling this setting will cause the following settings to be ignored: `EnableHeartbeat`, `EnableAzureInstanceMetadataTelemetryModule`, and `EnableAppServicesHeartbeatTelemetryModule`. | True -See the [configurable settings in `ApplicationInsightsServiceOptions`](https://github.com/microsoft/ApplicationInsights-dotnet/blob/develop/NETCORE/src/Shared/Extensions/ApplicationInsightsServiceOptions.cs) for the most up-to-date list. +For the most up-to-date list, see the [configurable settings in `ApplicationInsightsServiceOptions`](https://github.com/microsoft/ApplicationInsights-dotnet/blob/develop/NETCORE/src/Shared/Extensions/ApplicationInsightsServiceOptions.cs). ### Sampling -The Application Insights SDK for Worker Service supports both fixed-rate and adaptive sampling. Adaptive sampling is enabled by default. Sampling can be disabled by using `EnableAdaptiveSampling` option in [ApplicationInsightsServiceOptions](#using-applicationinsightsserviceoptions) +The Application Insights SDK for Worker Service supports both fixed-rate and adaptive sampling. Adaptive sampling is enabled by default. Sampling can be disabled by using the `EnableAdaptiveSampling` option in [ApplicationInsightsServiceOptions](#use-applicationinsightsserviceoptions). 
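For example, a minimal sketch that turns adaptive sampling off through these options (so that every telemetry item is sent) might look like this:

```csharp
using Microsoft.ApplicationInsights.WorkerService;
using Microsoft.Extensions.DependencyInjection;

public void ConfigureServices(IServiceCollection services)
{
    // Disable adaptive sampling; all other options keep their defaults.
    var aiOptions = new ApplicationInsightsServiceOptions
    {
        EnableAdaptiveSampling = false
    };
    services.AddApplicationInsightsTelemetryWorkerService(aiOptions);
}
```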
-To configure other sampling settings, the following example can be used.
+To configure other sampling settings, you can use the following example:

```csharp
using Microsoft.ApplicationInsights.Extensibility;
public void ConfigureServices(IServiceCollection services)
 services.AddApplicationInsightsTelemetryWorkerService(aiOptions);
 // Add Adaptive Sampling with custom settings.
- // the following adds adaptive sampling with 15 items per sec.
+ // The following adds adaptive sampling with 15 items per sec.
 services.Configure<TelemetryConfiguration>((telemetryConfig) => {
 var builder = telemetryConfig.DefaultTelemetrySink.TelemetryProcessorChainBuilder;
public void ConfigureServices(IServiceCollection services)
 }
```

-More information can be found in the [Sampling](#sampling) document.
+For more information, see the [Sampling](#sampling) section.

-### Adding TelemetryInitializers
+### Add telemetry initializers

Use [telemetry initializers](./api-filtering-sampling.md#addmodify-properties-itelemetryinitializer) when you want to define properties that are sent with all telemetry.

-Add any new `TelemetryInitializer` to the `DependencyInjection` container and SDK will automatically add them to the `TelemetryConfiguration`.
+Add any new telemetry initializer to the `DependencyInjection` container and the SDK automatically adds it to `TelemetryConfiguration`.

```csharp
using Microsoft.ApplicationInsights.Extensibility;
Add any new `TelemetryInitializer` to the `DependencyInjection` container and SD
 }
```

-### Removing TelemetryInitializers
+### Remove telemetry initializers

Telemetry initializers are present by default. To remove all or specific telemetry initializers, use the following sample code *after* calling `AddApplicationInsightsTelemetryWorkerService()`.

Telemetry initializers are present by default. To remove all or specific telemet
 public void ConfigureServices(IServiceCollection services)
 {
 services.AddApplicationInsightsTelemetryWorkerService();
- // Remove a specific built-in telemetry initializer
+ // Remove a specific built-in telemetry initializer.
 var tiToRemove = services.FirstOrDefault<ServiceDescriptor>
 (t => t.ImplementationType == typeof(AspNetCoreEnvironmentTelemetryInitializer));
 if (tiToRemove != null)
Telemetry initializers are present by default. To remove all or specific telemet
 services.Remove(tiToRemove);
 }

- // Remove all initializers
+ // Remove all initializers.
 // This requires the namespace import: using Microsoft.Extensions.DependencyInjection.Extensions;
 services.RemoveAll(typeof(ITelemetryInitializer));
 }
```

-### Adding telemetry processors
+### Add telemetry processors

-You can add custom telemetry processors to `TelemetryConfiguration` by using the extension method `AddApplicationInsightsTelemetryProcessor` on `IServiceCollection`. You use telemetry processors in [advanced filtering scenarios](./api-filtering-sampling.md#itelemetryprocessor-and-itelemetryinitializer) to allow for more direct control over what's included or excluded from the telemetry you send to the Application Insights service. Use the following example.
+You can add custom telemetry processors to `TelemetryConfiguration` by using the extension method `AddApplicationInsightsTelemetryProcessor` on `IServiceCollection`. You use telemetry processors in [advanced filtering scenarios](./api-filtering-sampling.md#itelemetryprocessor-and-itelemetryinitializer) to allow for more direct control over what's included or excluded from the telemetry you send to Application Insights. (A standalone processor sketch also appears after the Next steps list below.)
Use the following example:

```csharp
public void ConfigureServices(IServiceCollection services)
You can add custom telemetry processors to `TelemetryConfiguration` by using the
 }
```

-### Configuring or removing default TelemetryModules
+### Configure or remove default telemetry modules

Application Insights uses telemetry modules to automatically collect telemetry about specific workloads without requiring manual tracking.

-The following automatic-collection modules are enabled by default. These modules are responsible for automatically collecting telemetry. You can disable or configure them to alter their default behavior.
+The following auto-collection modules are enabled by default. These modules are responsible for automatically collecting telemetry. You can disable or configure them to alter their default behavior.

* `DependencyTrackingTelemetryModule`
* `PerformanceCollectorModule`
* `QuickPulseTelemetryModule`
-* `AppServicesHeartbeatTelemetryModule` - (There's currently an issue involving this telemetry module. For a temporary workaround, see [GitHub Issue 1689](https://github.com/microsoft/ApplicationInsights-dotnet/issues/1689
+* `AppServicesHeartbeatTelemetryModule` (There's currently an issue involving this telemetry module. For a temporary workaround, see [GitHub Issue 1689](https://github.com/microsoft/ApplicationInsights-dotnet/issues/1689
).)
* `AzureInstanceMetadataTelemetryModule`

-To configure any default `TelemetryModule`, use the extension method `ConfigureTelemetryModule<T>` on `IServiceCollection`, as shown in the following example.
+To configure any default telemetry module, use the extension method `ConfigureTelemetryModule<T>` on `IServiceCollection`, as shown in the following example:

```csharp
using Microsoft.ApplicationInsights.Extensibility.PerfCounterCollector.QuickPulse;
To configure any default `TelemetryModule`, use the extension method `ConfigureT
 }
```

-### Configuring telemetry channel
+### Configure the telemetry channel

-The default channel is `ServerTelemetryChannel`. You can override it as the following example shows.
+The default channel is `ServerTelemetryChannel`. You can override it as the following example shows:

```csharp
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.Channel;

### Disable telemetry dynamically

-If you want to disable telemetry conditionally and dynamically, you may resolve `TelemetryConfiguration` instance with ASP.NET Core dependency injection container anywhere in your code and set `DisableTelemetry` flag on it.
+If you want to disable telemetry conditionally and dynamically, you can resolve the `TelemetryConfiguration` instance with an ASP.NET Core dependency injection container anywhere in your code and set the `DisableTelemetry` flag on it.

```csharp
public void ConfigureServices(IServiceCollection services)
If you want to disable telemetry conditionally and dynamically, you may resolve

## Frequently asked questions

+This section provides answers to common questions.
+
### Which package should I use?

-| .Net Core app scenario | Package |
+| .NET Core app scenario | Package |
|||
| Without HostedServices | AspNetCore |
| With HostedServices | AspNetCore (not WorkerService) |

If you want to disable telemetry conditionally and dynamically, you may resolve

### Can HostedServices inside a .NET Core app using the AspNetCore package have TelemetryClient injected to it?

-* Yes. The config will be shared with the rest of the web application.
+Yes. 
The configuration will be shared with the rest of the web application. ### How can I track telemetry that's not automatically collected? -Get an instance of `TelemetryClient` by using constructor injection, and call the required `TrackXXX()` method on it. We don't recommend creating new `TelemetryClient` instances. A singleton instance of `TelemetryClient` is already registered in the `DependencyInjection` container, which shares `TelemetryConfiguration` with rest of the telemetry. Creating a new `TelemetryClient` instance is recommended only if it needs a configuration that's separate from the rest of the telemetry. +Get an instance of `TelemetryClient` by using constructor injection and call the required `TrackXXX()` method on it. We don't recommend creating new `TelemetryClient` instances. A singleton instance of `TelemetryClient` is already registered in the `DependencyInjection` container, which shares `TelemetryConfiguration` with the rest of the telemetry. Creating a new `TelemetryClient` instance is recommended only if it needs a configuration that's separate from the rest of the telemetry. ### Can I use Visual Studio IDE to onboard Application Insights to a Worker Service project? -Visual Studio IDE onboarding is currently supported only for ASP.NET/ASP.NET Core Applications. This document will be updated when Visual Studio ships support for onboarding Worker service applications. +Visual Studio IDE onboarding is currently supported only for ASP.NET/ASP.NET Core applications. This document will be updated when Visual Studio ships support for onboarding Worker Service applications. ### Can I enable Application Insights monitoring by using tools like Azure Monitor Application Insights Agent (formerly Status Monitor v2)? -No, [Azure Monitor Application Insights Agent](./status-monitor-v2-overview.md) currently supports .NET [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) only. +No. [Azure Monitor Application Insights Agent](./status-monitor-v2-overview.md) currently supports .NET [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) only. ### Are all features supported if I run my application in Linux? Yes. Feature support for this SDK is the same in all platforms, with the followi * Performance counters are supported only in Windows except for Process CPU/Memory shown in Live Metrics. * Even though `ServerTelemetryChannel` is enabled by default, if the application is running in Linux or macOS, the channel doesn't automatically create a local storage folder to keep telemetry temporarily if there are network issues. Because of this limitation, telemetry is lost when there are temporary network or server issues. To work around this issue, configure a local folder for the channel: -```csharp -using Microsoft.ApplicationInsights.Channel; -using Microsoft.ApplicationInsights.WindowsServer.TelemetryChannel; -- public void ConfigureServices(IServiceCollection services) - { - // The following will configure the channel to use the given folder to temporarily - // store telemetry items during network or Application Insights server issues. - // User should ensure that the given folder already exists - // and that the application has read/write permissions. 
- services.AddSingleton(typeof(ITelemetryChannel), - new ServerTelemetryChannel () {StorageFolder = "/tmp/myfolder"}); - services.AddApplicationInsightsTelemetryWorkerService(); - } -``` + ```csharp + using Microsoft.ApplicationInsights.Channel; + using Microsoft.ApplicationInsights.WindowsServer.TelemetryChannel; ++ public void ConfigureServices(IServiceCollection services) + { + // The following will configure the channel to use the given folder to temporarily + // store telemetry items during network or Application Insights server issues. + // User should ensure that the given folder already exists + // and that the application has read/write permissions. + services.AddSingleton(typeof(ITelemetryChannel), + new ServerTelemetryChannel () {StorageFolder = "/tmp/myfolder"}); + services.AddApplicationInsightsTelemetryWorkerService(); + } + ``` ## Sample applications -[.NET Core Console Application](https://github.com/microsoft/ApplicationInsights-dotnet/tree/develop/examples/ConsoleApp) -Use this sample if you're using a Console Application written in either .NET Core (2.0 or higher) or .NET Framework (4.7.2 or higher) +[.NET Core console application](https://github.com/microsoft/ApplicationInsights-dotnet/tree/develop/examples/ConsoleApp): +Use this sample if you're using a console application written in either .NET Core (2.0 or higher) or .NET Framework (4.7.2 or higher). -[ASP.NET Core background tasks with HostedServices](https://github.com/microsoft/ApplicationInsights-dotnet/tree/develop/examples/BackgroundTasksWithHostedService) -Use this sample if you are in ASP.NET Core, and creating background tasks as per official guidance [here](/aspnet/core/fundamentals/host/hosted-services) +[ASP.NET Core background tasks with HostedServices](https://github.com/microsoft/ApplicationInsights-dotnet/tree/develop/examples/BackgroundTasksWithHostedService): +Use this sample if you're in ASP.NET Core and creating background tasks in accordance with [official guidance](/aspnet/core/fundamentals/host/hosted-services). -[.NET Core Worker Service](https://github.com/microsoft/ApplicationInsights-dotnet/tree/develop/examples/WorkerService) -Use this sample if you have a .NET Core [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) Worker Service application as per official guidance [here](/aspnet/core/fundamentals/host/hosted-services?tabs=visual-studio#worker-service-template) +[.NET Core Worker Service](https://github.com/microsoft/ApplicationInsights-dotnet/tree/develop/examples/WorkerService): +Use this sample if you have a .NET Core [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) Worker Service application in accordance with [official guidance](/aspnet/core/fundamentals/host/hosted-services?tabs=visual-studio#worker-service-template). ## Open-source SDK -* [Read and contribute to the code](https://github.com/microsoft/ApplicationInsights-dotnet). +[Read and contribute to the code](https://github.com/microsoft/ApplicationInsights-dotnet). -For the latest updates and bug fixes, [consult the release notes](./release-notes.md). +For the latest updates and bug fixes, [see the Release Notes](./release-notes.md). ## Next steps * [Use the API](./api-custom-events-metrics.md) to send your own events and metrics for a detailed view of your app's performance and usage. * [Track more dependencies not automatically tracked](./auto-collect-dependencies.md).-* [Enrich or Filter auto collected telemetry](./api-filtering-sampling.md). 
+* [Enrich or filter auto-collected telemetry](./api-filtering-sampling.md). * [Dependency Injection in ASP.NET Core](/aspnet/core/fundamentals/dependency-injection). |
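To round out the telemetry-processor discussion in the Worker Service article above, here's a minimal `ITelemetryProcessor` sketch. The class name and the 100-ms threshold are hypothetical; the chaining shape (accept the next processor in the constructor, then forward or drop each item) is the standard pattern for processors:

```csharp
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.ApplicationInsights.Extensibility;

// Hypothetical processor that drops fast dependency calls so only
// slow ones are sent to Application Insights.
public class SlowDependencyFilter : ITelemetryProcessor
{
    private readonly ITelemetryProcessor _next;

    // The SDK supplies the next processor in the chain.
    public SlowDependencyFilter(ITelemetryProcessor next)
    {
        _next = next;
    }

    public void Process(ITelemetry item)
    {
        if (item is DependencyTelemetry dependency &&
            dependency.Duration.TotalMilliseconds < 100)
        {
            return; // Filter out: don't pass to the next processor.
        }

        _next.Process(item);
    }
}
```

You'd register it in `ConfigureServices()` with `services.AddApplicationInsightsTelemetryProcessor<SlowDependencyFilter>();`, the extension method named in the section above.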
azure-monitor | Container Insights Metric Alerts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-metric-alerts.md | There are two types of metric rules used by Container insights based on either P ### Prerequisites -Your cluster must be configured to send metrics to [Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md). For more information, see [Collect Prometheus metrics from Kubernetes cluster with Container insights](container-insights-prometheus-metrics-addon.md). +Your cluster must be configured to send metrics to [Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md). For more information, see [Collect Prometheus metrics with Container insights](container-insights-prometheus-metrics-addon.md). ### Enable alert rules |
azure-monitor | Container Insights Onboard | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-onboard.md | You can let the onboarding experience create a Log Analytics workspace in the de ### Azure Monitor workspace (preview) -If you're going to configure the cluster to [collect Prometheus metrics](container-insights-prometheus-metrics-addon.md) with [Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md), you must have an Azure Monitor workspace where Prometheus metrics are stored. You can let the onboarding experience create an Azure Monitor workspace in the default resource group of the AKS cluster subscription or use an existing Azure Monitor workspace. +If you're going to configure the cluster to [collect Prometheus metrics](container-insights-prometheus.md) with [Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md), you must have an Azure Monitor workspace where Prometheus metrics are stored. You can let the onboarding experience create an Azure Monitor workspace in the default resource group of the AKS cluster subscription or use an existing Azure Monitor workspace. ### Permissions |
azure-monitor | Container Insights Prometheus Monitoring Addon | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-prometheus-monitoring-addon.md | -This article describes how to send Prometheus metrics to a Log Analytics workspace with the Container insights monitoring addon. You can also send metrics to Azure Monitor managed service for Prometheus with the metrics addon which that supports standard Prometheus features such as PromQL and Prometheus alert rules. See [Send Kubernetes metrics to Azure Monitor managed service for Prometheus with Container insights](container-insights-prometheus-metrics-addon.md).
+This article describes how to send Prometheus metrics to a Log Analytics workspace with the Container insights monitoring addon. You can also send metrics to Azure Monitor managed service for Prometheus with the metrics addon, which supports standard Prometheus features such as PromQL and Prometheus alert rules. See [Collect Prometheus metrics with Container insights](container-insights-prometheus.md).

## Prometheus scraping settings |
azure-monitor | Container Insights Prometheus | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-prometheus.md | -Container insights can also scrape Prometheus metrics from your cluster for the cases described below. This requires exposing the Prometheus metrics endpoint through your exporters or pods and then configuring one of the addons for the Azure Monitor agent used by Container insights as shown the following diagram.
+Container insights can also scrape Prometheus metrics from your cluster and send the data to either Azure Monitor Logs or to Azure Monitor managed service for Prometheus. This requires exposing the Prometheus metrics endpoint through your exporters or pods and then configuring one of the addons for the Azure Monitor agent used by Container insights as shown in the following diagram.

-## Collect additional data
-You may want to collect additional data in addition to the predefined set of data collected by Container insights. This data isn't used by Container insights views but is available for log queries and alerts like the other data it collects. This requires configuring the *monitoring addon* for the Azure Monitor agent, which is the one currently used by Container insights to send data to a Log Analytics workspace.
-See [Collect Prometheus metrics Logs with Container insights (preview)](container-insights-prometheus-monitoring-addon.md) to configure your cluster to collect additional Prometheus metrics with the monitoring addon.

## Send data to Azure Monitor managed service for Prometheus
-Container insights currently stores the data that it collects in Azure Monitor Logs. [Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md) is a fully managed Prometheus-compatible service that supports industry standard features such as PromQL, Grafana dashboards, and Prometheus alerts. This requires configuring the *metrics addon* for the Azure Monitor agent, which sends data to Prometheus.
+[Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md) is a fully managed Prometheus-compatible service that supports industry standard features such as PromQL, Grafana dashboards, and Prometheus alerts. This requires configuring the *metrics addon* for the Azure Monitor agent, which sends data to Prometheus.

-See [Collect Prometheus metrics from Kubernetes cluster with Container insights](container-insights-prometheus-metrics-addon.md) to configure your cluster to send metrics to Azure Monitor managed service for Prometheus.

+> [!TIP]
+> You don't need to enable Container insights to configure your AKS cluster to send data to managed Prometheus. See [Collect Prometheus metrics from AKS cluster (preview)](../essentials/prometheus-metrics-enable.md) for details on how to configure your cluster without enabling Container insights.

+Use the following procedure to add Prometheus collection to a cluster that's already using Container insights.
+
+1. Open the **Kubernetes services** menu in the Azure portal and select your AKS cluster.
+2. Click **Insights**.
+3. Click **Monitor settings**.
+
+ :::image type="content" source="media/container-insights-prometheus-metrics-addon/aks-cluster-monitor-settings.png" lightbox="media/container-insights-prometheus-metrics-addon/aks-cluster-monitor-settings.png" alt-text="Screenshot of button for monitor settings for an AKS cluster.":::
+
+4. Click the checkbox for **Enable Prometheus metrics** and select your Azure Monitor workspace.
+5. To send the collected metrics to Grafana, select a Grafana workspace. See [Create an Azure Managed Grafana instance](../../managed-grafan) for details on creating a Grafana workspace.
+
+ :::image type="content" source="media/container-insights-prometheus-metrics-addon/aks-cluster-monitor-settings-details.png" lightbox="media/container-insights-prometheus-metrics-addon/aks-cluster-monitor-settings-details.png" alt-text="Screenshot of monitor settings for an AKS cluster.":::
+
+6. Click **Configure** to complete the configuration.
+
+See [Collect Prometheus metrics from AKS cluster (preview)](../essentials/prometheus-metrics-enable.md) for details on [verifying your deployment](../essentials/prometheus-metrics-enable.md#verify-deployment) and [limitations](../essentials/prometheus-metrics-enable.md#limitations).
+
+## Send metrics to Azure Monitor Logs
+You may want to collect additional data in addition to the predefined set of data collected by Container insights. This data isn't used by Container insights views but is available for log queries and alerts like the other data it collects. This requires configuring the *monitoring addon* for the Azure Monitor agent, which is the one currently used by Container insights to send data to a Log Analytics workspace.
+
+### Prometheus scraping settings
+
+Active scraping of metrics from Prometheus is performed from one of two perspectives:
+
+- **Cluster-wide**: Defined in the ConfigMap section *[prometheus_data_collection_settings.cluster]*.
+- **Node-wide**: Defined in the ConfigMap section *[prometheus_data_collection_settings.node]*.
+
+| Endpoint | Scope | Example |
+|-|-||
+| Pod annotation | Cluster-wide | `prometheus.io/scrape: "true"` <br>`prometheus.io/path: "/mymetrics"` <br>`prometheus.io/port: "8000"` <br>`prometheus.io/scheme: "http"` |
+| Kubernetes service | Cluster-wide | `http://my-service-dns.my-namespace:9100/metrics` <br>`https://metrics-server.kube-system.svc.cluster.local/metrics` |
+| URL/endpoint | Per-node and/or cluster-wide | `http://myurl:9101/metrics` |
+
+When a URL is specified, Container insights only scrapes the endpoint. When Kubernetes service is specified, the service name is resolved with the cluster DNS server to get the IP address. Then the resolved service is scraped.
+
+|Scope | Key | Data type | Value | Description |
+||--|--|-|-|
+| Cluster-wide | | | | Specify any one of the following three methods to scrape endpoints for metrics. |
+| | `urls` | String | Comma-separated array | HTTP endpoint (either IP address or valid URL path specified). For example: `urls=[$NODE_IP/metrics]`. ($NODE_IP is a specific Container insights parameter and can be used instead of a node IP address. Must be all uppercase.) |
+| | `kubernetes_services` | String | Comma-separated array | An array of Kubernetes services to scrape metrics from kube-state-metrics. Fully qualified domain names must be used here. For example,`kubernetes_services = ["https://metrics-server.kube-system.svc.cluster.local/metrics",http://my-service-dns.my-namespace.svc.cluster.local:9100/metrics]`|
+| | `monitor_kubernetes_pods` | Boolean | true or false | When set to `true` in the cluster-wide settings, the Container insights agent will scrape Kubernetes pods across the entire cluster for the following Prometheus annotations:<br> `prometheus.io/scrape:`<br> `prometheus.io/scheme:`<br> `prometheus.io/path:`<br> `prometheus.io/port:` |
+| | `prometheus.io/scrape` | Boolean | true or false | Enables scraping of the pod, and `monitor_kubernetes_pods` must be set to `true`. |
+| | `prometheus.io/scheme` | String | http or https | Defaults to scraping over HTTP. If necessary, set to `https`. |
+| | `prometheus.io/path` | String | Comma-separated array | The HTTP resource path from which to fetch metrics. If the metrics path isn't `/metrics`, define it with this annotation. |
+| | `prometheus.io/port` | String | 9102 | Specify a port to scrape from. If the port isn't set, it will default to 9102. |
+| | `monitor_kubernetes_pods_namespaces` | String | Comma-separated array | An allowlist of namespaces to scrape metrics from Kubernetes pods.<br> For example, `monitor_kubernetes_pods_namespaces = ["default1", "default2", "default3"]` |
+| Node-wide | `urls` | String | Comma-separated array | HTTP endpoint (either IP address or valid URL path specified). For example: `urls=[$NODE_IP/metrics]`. ($NODE_IP is a specific Container insights parameter and can be used instead of a node IP address. Must be all uppercase.) |
+| Node-wide or cluster-wide | `interval` | String | 60s | The collection interval default is one minute (60 seconds). You can modify the collection for either the *[prometheus_data_collection_settings.node]* and/or *[prometheus_data_collection_settings.cluster]* to time units such as s, m, and h. |
+| Node-wide or cluster-wide | `fieldpass`<br> `fielddrop`| String | Comma-separated array | You can specify certain metrics to be collected or not from the endpoint by setting the allow (`fieldpass`) and disallow (`fielddrop`) listing. You must set the allowlist first. |
+
+### Configure ConfigMaps
+Perform the following steps to configure your ConfigMap configuration file for your cluster. ConfigMaps is a global list and there can be only one ConfigMap applied to the agent. You can't have another ConfigMap overruling the collections.
+
+
+
+1. [Download](https://aka.ms/container-azm-ms-agentconfig) the template ConfigMap YAML file and save it as *container-azm-ms-agentconfig.yaml*. If you've already deployed a ConfigMap to your cluster and you want to update it with a newer configuration, you can edit the ConfigMap file you've previously used.
+1. Edit the ConfigMap YAML file with your customizations to scrape Prometheus metrics.
+
+
+ #### [Cluster-wide](#tab/cluster-wide)
+
+ To collect Kubernetes services cluster-wide, configure the ConfigMap file by using the following example:
+
+ ```
+ prometheus-data-collection-settings: |-
+ # Custom Prometheus metrics data collection settings
+ [prometheus_data_collection_settings.cluster]
+ interval = "1m" ## Valid time units are s, m, h.
+ fieldpass = ["metric_to_pass1", "metric_to_pass12"] ## specify metrics to pass through
+ fielddrop = ["metric_to_drop"] ## specify metrics to drop from collecting
+ kubernetes_services = ["http://my-service-dns.my-namespace:9102/metrics"]
+ ```
+
+ #### [Specific URL](#tab/url)
+
+ To configure scraping of Prometheus metrics from a specific URL across the cluster, configure the ConfigMap file by using the following example:
+
+ ```
+ prometheus-data-collection-settings: |-
+ # Custom Prometheus metrics data collection settings
+ [prometheus_data_collection_settings.cluster]
+ interval = "1m" ## Valid time units are s, m, h.
+ fieldpass = ["metric_to_pass1", "metric_to_pass12"] ## specify metrics to pass through
+ fielddrop = ["metric_to_drop"] ## specify metrics to drop from collecting
+ urls = ["http://myurl:9101/metrics"] ## An array of urls to scrape metrics from
+ ```
+
+ #### [DaemonSet](#tab/deamonset)
+
+ To configure scraping of Prometheus metrics from an agent's DaemonSet for every individual node in the cluster, configure the following example in the ConfigMap:
+
+ ```
+ prometheus-data-collection-settings: |-
+ # Custom Prometheus metrics data collection settings
+ [prometheus_data_collection_settings.node]
+ interval = "1m" ## Valid time units are s, m, h.
+ urls = ["http://$NODE_IP:9103/metrics"]
+ fieldpass = ["metric_to_pass1", "metric_to_pass2"]
+ fielddrop = ["metric_to_drop"]
+ ```
+
+ `$NODE_IP` is a specific Container insights parameter and can be used instead of a node IP address. It must be all uppercase.
+
+ #### [Pod annotation](#tab/pod)
+
+ To configure scraping of Prometheus metrics by specifying a pod annotation:
+
+ 1. In the ConfigMap, specify the following configuration:
+
+ ```
+ prometheus-data-collection-settings: |-
+ # Custom Prometheus metrics data collection settings
+ [prometheus_data_collection_settings.cluster]
+ interval = "1m" ## Valid time units are s, m, h
+ monitor_kubernetes_pods = true
+ ```
+
+ 1. Specify the following configuration for pod annotations:
+
+ ```
+ - prometheus.io/scrape:"true" #Enable scraping for this pod
+ - prometheus.io/scheme:"http" #If the metrics endpoint is secured then you will need to set this to `https`, if not default 'http'
+ - prometheus.io/path:"/mymetrics" #If the metrics path is not /metrics, define it with this annotation.
+ - prometheus.io/port:"8000" #If port is not 9102 use this annotation
+ ```
+ 
+ If you want to restrict monitoring to specific namespaces for pods that have annotations, for example, only include pods dedicated for production workloads, set the `monitor_kubernetes_pods` to `true` in ConfigMap. Then add the namespace filter `monitor_kubernetes_pods_namespaces` to specify the namespaces to scrape from. An example is `monitor_kubernetes_pods_namespaces = ["default1", "default2", "default3"]`.
+
+2. Run the following kubectl command: `kubectl apply -f <configmap_yaml_file.yaml>`.
+ 
+ Example: `kubectl apply -f container-azm-ms-agentconfig.yaml`.
+
+The configuration change can take a few minutes to finish before taking effect. You must restart all Azure Monitor Agent pods manually. When the restarts are finished, a message appears that's similar to the following and includes the result `configmap "container-azm-ms-agentconfig" created`.
+
+
+### Verify configuration
+
+To verify the configuration was successfully applied to a cluster, use the following command to review the logs from an agent pod: `kubectl logs ama-logs-fdf58 -n=kube-system`. 
+
+
+If there are configuration errors from the Azure Monitor Agent pods, the output will show errors similar to the following example:
+
+```
+***************Start Config Processing********************
+config::unsupported/missing config schema version - 'v21' , using defaults
+```
+
+Errors related to applying configuration changes are also available for review. The following options are available to perform additional troubleshooting of configuration changes and scraping of Prometheus metrics:
+
+- From agent pod logs, using the same `kubectl logs` command.
+
+- From Live Data (preview). Live Data (preview) logs show errors similar to the following example:
+
+ ```
+ 2019-07-08T18:55:00Z E! [inputs.prometheus]: Error in plugin: error making HTTP request to http://invalidurl:1010/metrics: Get http://invalidurl:1010/metrics: dial tcp: lookup invalidurl on 10.0.0.10:53: no such host
+ ```
+
+- From the **KubeMonAgentEvents** table in your Log Analytics workspace. Data is sent every hour with *Warning* severity for scrape errors and *Error* severity for configuration errors. If there are no errors, the entry in the table will have data with severity *Info*, which reports no errors. The **Tags** property contains more information about the pod and container ID on which the error occurred and also the first occurrence, last occurrence, and count in the last hour.
+- For Azure Red Hat OpenShift v3.x and v4.x, check the Azure Monitor Agent logs by searching the **ContainerLog** table to verify if log collection of openshift-azure-logging is enabled.
+
+Errors prevent Azure Monitor Agent from parsing the file, causing it to restart and use the default configuration. After you correct the errors in ConfigMap on clusters other than Azure Red Hat OpenShift v3.x, save the YAML file and apply the updated ConfigMaps by running the command `kubectl apply -f <configmap_yaml_file.yaml>`.
+
+For Azure Red Hat OpenShift v3.x, edit and save the updated ConfigMaps by running the command `oc edit configmaps container-azm-ms-agentconfig -n openshift-azure-logging`.
+
+### Query Prometheus metrics data
+
+To view Prometheus metrics scraped by Azure Monitor and any configuration/scraping errors reported by the agent, review [Query Prometheus metrics data](container-insights-log-query.md#prometheus-metrics).
+
+### View Prometheus metrics in Grafana
+
+Container insights supports viewing metrics stored in your Log Analytics workspace in Grafana dashboards. We've provided a template that you can download from Grafana's [dashboard repository](https://grafana.com/grafana/dashboards?dataSource=grafana-azure-monitor-datasource&category=docker). Use the template to get started and reference it to help you learn how to query other data from your monitored clusters to visualize in custom Grafana dashboards.
+
## Next steps
-- [Configure your cluster to send data to Azure Monitor managed service for Prometheus](container-insights-prometheus-metrics-addon.md).
-- [Configure your cluster to send data to Azure Monitor Logs](container-insights-prometheus-metrics-addon.md).
+- [See the default configuration for Prometheus metrics](../essentials/prometheus-metrics-scrape-default.md).
+- [Customize Prometheus metric scraping for the cluster](../essentials/prometheus-metrics-scrape-configuration.md). |
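As a hedged companion to the "Query Prometheus metrics data" section above: metrics scraped by the monitoring addon land in the `InsightsMetrics` table with `Namespace` set to `prometheus`, so a minimal C# sketch using the `Azure.Monitor.Query` client library might look like the following. The workspace ID is a placeholder, and the query is only an illustrative starting point:

```csharp
using System;
using Azure;
using Azure.Identity;
using Azure.Monitor.Query;
using Azure.Monitor.Query.Models;

// Placeholder: replace with your Log Analytics workspace ID (GUID).
const string workspaceId = "<log-analytics-workspace-id>";

var client = new LogsQueryClient(new DefaultAzureCredential());

// Prometheus metrics collected by the monitoring addon are stored in
// the InsightsMetrics table with Namespace == "prometheus".
Response<LogsQueryResult> response = await client.QueryWorkspaceAsync(
    workspaceId,
    "InsightsMetrics | where Namespace == 'prometheus' | take 10",
    new QueryTimeRange(TimeSpan.FromHours(1)));

Console.WriteLine($"{response.Value.Table.Rows.Count} rows returned");
```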
azure-monitor | Data Sources | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/data-sources.md | -Azure Monitor is based on a [common monitoring data platform](data-platform.md) that includes [Logs](logs/data-platform-logs.md) and [Metrics](essentials/data-platform-metrics.md). This platform allows data from multiple resources to be analyzed together using a common set of tools in Azure Monitor. Monitoring data may also be sent to other locations to support certain scenarios, and some resources may write to other locations before they can be collected into Logs or Metrics.
+
+Azure Monitor is based on a [common monitoring data platform](data-platform.md) that includes:
+- [Metrics](essentials/data-platform-metrics.md)
+- [Logs](logs/data-platform-logs.md)
+- Traces
+- Changes
+
+This platform allows data from multiple resources to be analyzed together using a common set of tools in Azure Monitor. Monitoring data may also be sent to other locations to support certain scenarios, and some resources may write to other locations before they can be collected into Logs or Metrics.

This article describes common sources of monitoring data collected by Azure Monitor in addition to the monitoring data created by Azure resources. Links are provided to detailed information on configuration required to collect this data to different locations. Some of these data sources use the [new data ingestion pipeline](essentials/data-collection.md) in Azure Monitor. This article will be updated as other data sources transition to this new data collection method.

> [!NOTE]
-> Access to data in the Log Analytics Workspaces is governed as outline [here](https://learn.microsoft.com/azure/azure-monitor/logs/manage-access).
+> Access to data in the Log Analytics Workspaces is governed as outlined [here](logs/manage-access.md).
>

## Application tiers

The [Azure Activity log](essentials/platform-logs-overview.md) includes service

| Destination | Description | Reference |
|:|:|:|
-| Activity log | The Activity log is collected into its own data store that you can view from the Azure Monitor menu or use to create Activity log alerts. | [Query the Activity log in the Azure portal](essentials/activity-log.md#view-the-activity-log) |
+| Activity log | The Activity log is collected into its own data store that you can view from the Azure Monitor menu or use to create Activity log alerts. |[Query the Activity log with the Azure portal](essentials/activity-log.md#view-the-activity-log) |
| Azure Monitor Logs | Configure Azure Monitor Logs to collect the Activity log to analyze it with other monitoring data. | [Collect and analyze Azure activity logs in Log Analytics workspace in Azure Monitor](essentials/activity-log.md) |
| Azure Storage | Export the Activity log to Azure Storage for archiving. | [Archive Activity log](essentials/resource-logs.md#send-to-azure-storage) |
| Event Hubs | Stream the Activity log to other locations using Event Hubs | [Stream Activity log to Event Hubs](essentials/resource-logs.md#send-to-azure-event-hubs). |

Compute resources in Azure, in other clouds, and on-premises have a guest operat

| Azure Monitor Logs | The Log Analytics agent connects to Azure Monitor either directly or through System Center Operations Manager and allows you to collect data from data sources that you configure or from monitoring solutions that provide additional insights into applications running on the virtual machine. 
| [Agent data sources in Azure Monitor](agents/agent-data-sources.md)<br>[Connect Operations Manager to Azure Monitor](agents/om-agents.md) | ### Azure diagnostic extension-Enabling the Azure diagnostics extension for Azure Virtual machines allows you to collect logs and metrics from the guest operating system of Azure compute resources including Azure Cloud Service (classic) Web and Worker Roles, Virtual Machines, virtual machine scale sets, and Service Fabric. +Enabling the Azure diagnostics extension for Azure Virtual machines allows you to collect logs and metrics from the guest operating system of Azure compute resources including Azure Cloud Service (classic) Web and Worker Roles, Virtual Machines, Virtual Machine Scale Sets, and Service Fabric. | Destination | Description | Reference | |:|:|:| Enabling the Azure diagnostics extension for Azure Virtual machines allows you t ## Application Code-Detailed application monitoring in Azure Monitor is done with [Application Insights](/azure/application-insights/) which collects data from applications running on a variety of platforms. The application can be running in Azure, another cloud, or on-premises. +Detailed application monitoring in Azure Monitor is done with [Application Insights](/azure/application-insights/), which collects data from applications running on various platforms. The application can be running in Azure, another cloud, or on-premises. :::image type="content" source="media/data-sources/applications.png" lightbox="media/data-sources/applications.png" alt-text="Diagram that shows application data collection." border="false"::: In addition to the standard tiers of an application, you may need to monitor oth ## Other services-Other services in Azure write data to the Azure Monitor data platform. This allows you to analyze data collected by these services with data collected by Azure Monitor and leverage the same analysis and visualization tools. +Other services in Azure write data to the Azure Monitor data platform. This allows you to analyze data collected by these services with data collected by Azure Monitor and apply the same analysis and visualization tools. | Service | Destination | Description | Reference | |:|:|:|:|-| [Microsoft Defender for Cloud](../security-center/index.yml) | Azure Monitor Logs | Microsoft Defender for Cloud stores the security data it collects in a Log Analytics workspace which allows it to be analyzed with other log data collected by Azure Monitor. | [Data collection in Microsoft Defender for Cloud](../security-center/security-center-enable-data-collection.md) | -| [Microsoft Sentinel](../sentinel/index.yml) | Azure Monitor Logs | Microsoft Sentinel stores the data it collects from different data sources in a Log Analytics workspace which allows it to be analyzed with other log data collected by Azure Monitor. | [Connect data sources](../sentinel/quickstart-onboard.md) | +| [Microsoft Defender for Cloud](../security-center/index.yml) | Azure Monitor Logs | Microsoft Defender for Cloud stores the security data it collects in a Log Analytics workspace, which allows it to be analyzed with other log data collected by Azure Monitor. | [Data collection in Microsoft Defender for Cloud](../security-center/security-center-enable-data-collection.md) | +| [Microsoft Sentinel](../sentinel/index.yml) | Azure Monitor Logs | Microsoft Sentinel stores the data it collects from different data sources in a Log Analytics workspace, which allows it to be analyzed with other log data collected by Azure Monitor. 
| [Connect data sources](../sentinel/quickstart-onboard.md) | ## Next steps |
azure-monitor | Monitor Azure Resource | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/monitor-azure-resource.md | -# Tutorial: Monitor Azure resources with Azure Monitor +# Monitor Azure resources with Azure Monitor When you have critical applications and business processes that rely on Azure resources, you want to monitor those resources for their availability, performance, and operation. Azure Monitor is a full-stack monitoring service that provides a complete set of features to monitor your Azure resources. You can also use Azure Monitor to monitor resources in other clouds and on-premises. -In this tutorial, you learn about: +In this article, you learn about: > [!div class="checklist"] > * Azure Monitor and how it's integrated into the portal for other Azure services. In this tutorial, you learn about: > * Azure Monitor tools that are used to collect and analyze data. > [!NOTE]-> This tutorial describes Azure Monitor concepts and walks you through different menu items. To jump right into using Azure Monitor features, start with [Tutorial: Analyze metrics for an Azure resource](../essentials/tutorial-metrics.md). +> This article describes Azure Monitor concepts and walks you through different menu items. To jump right into using Azure Monitor features, start with [Analyze metrics for an Azure resource](../essentials/tutorial-metrics.md). ## Monitoring data You can access Azure Monitor features from the **Monitor** menu in the Azure por The **Overview** page includes details about the resource and often its current state. For example, a virtual machine shows its current running state. Many Azure services have a **Monitoring** tab that includes charts for a set of key metrics. Charts are a quick way to view the operation of the resource. You can select any of the charts to open them in Metrics Explorer for more detailed analysis. -For a tutorial on using Metrics Explorer, see [Tutorial: Analyze metrics for an Azure resource](../essentials/tutorial-metrics.md). +To learn how to use Metrics Explorer, see [Analyze metrics for an Azure resource](../essentials/tutorial-metrics.md).  The **Activity log** menu item lets you view entries in the [activity log](../es The **Alerts** page shows you any recent alerts that were fired for the resource. Alerts proactively notify you when important conditions are found in your monitoring data and can use data from either Metrics or Logs. -For tutorials on how to create alert rules and view alerts, see [Tutorial: Create a metric alert for an Azure resource](../alerts/tutorial-metric-alert.md) or [Tutorial: Create a log query alert for an Azure resource](../alerts/tutorial-log-alert.md). +To learn how to create alert rules and view alerts, see [Create a metric alert for an Azure resource](../alerts/tutorial-metric-alert.md) or [Create a log query alert for an Azure resource](../alerts/tutorial-log-alert.md). :::image type="content" source="media/monitor-azure-resource/alerts-view.png" lightbox="media/monitor-azure-resource/alerts-view.png" alt-text="Screenshot that shows the Alerts page."::: For tutorials on how to create alert rules and view alerts, see [Tutorial: Creat The **Metrics** menu item opens [Metrics Explorer](./metrics-getting-started.md). You can use it to work with individual metrics or combine multiple metrics to identify correlations and trends. This is the same Metrics Explorer that opens when you select one of the charts on the **Overview** page. 
-For a tutorial on how to use Metrics Explorer, see [Tutorial: Analyze metrics for an Azure resource](../essentials/tutorial-metrics.md). +To learn how to use Metrics Explorer, see [Analyze metrics for an Azure resource](../essentials/tutorial-metrics.md). :::image type="content" source="media/monitor-azure-resource/metrics.png" lightbox="media/monitor-azure-resource/metrics.png" alt-text="Screenshot that shows Metrics Explorer."::: For a tutorial on how to use Metrics Explorer, see [Tutorial: Analyze metrics fo The **Diagnostic settings** page lets you create a [diagnostic setting](../essentials/diagnostic-settings.md) to collect the resource logs for your resource. You can send them to multiple locations, but the most common use is to send them to a Log Analytics workspace so you can analyze them with Log Analytics. -For a tutorial on how to create a diagnostic setting, see [Tutorial: Collect and analyze resource logs from an Azure resource](../essentials/tutorial-resource-logs.md). +To learn how to create a diagnostic setting, see [Collect and analyze resource logs from an Azure resource](../essentials/tutorial-resource-logs.md). :::image type="content" source="media/monitor-azure-resource/diagnostic-settings.png" lightbox="media/monitor-azure-resource/diagnostic-settings.png" alt-text="Screenshot that shows the Diagnostic settings page."::: For a tutorial on how to create a diagnostic setting, see [Tutorial: Collect and The **Insights** menu item opens the insight for the resource if the Azure service has one. [Insights](../monitor-reference.md) provide a customized monitoring experience built on the Azure Monitor data platform and standard features. -For a list of insights that are available and links to their documentation, see [Insights and core solutions](../monitor-reference.md#insights-and-curated-visualizations). +For a list of insights that are available and links to their documentation, see [Insights](../insights/insights-overview.md) and [core solutions](../insights/solutions.md). :::image type="content" source="media/monitor-azure-resource/insights.png" lightbox="media/monitor-azure-resource/insights.png" alt-text="Screenshot that shows the Insights page."::: |
azure-monitor | Prometheus Grafana | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-grafana.md | Versions 9.x and greater of Grafana support Azure Authentication, but it's not e ## Next steps -- [Collect Prometheus metrics for your AKS cluster](../containers/container-insights-prometheus-metrics-addon.md).+- [Collect Prometheus metrics for your AKS cluster](../essentials/prometheus-metrics-enable.md). - [Configure Prometheus alerting and recording rules groups](prometheus-rule-groups.md).-- [Customize scraping of Prometheus metrics](prometheus-metrics-scrape-configuration.md).+- [Customize scraping of Prometheus metrics](prometheus-metrics-scrape-configuration.md). |
azure-monitor | Prometheus Metrics Enable | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-metrics-enable.md | +
+ Title: Enable Azure Monitor managed service for Prometheus (preview)
+description: Enable Azure Monitor managed service for Prometheus (preview) and configure data collection from your Azure Kubernetes Service (AKS) cluster.
++
+ Last updated : 09/28/2022
++
++
+# Collect Prometheus metrics from AKS cluster (preview)
+This article describes how to configure your Azure Kubernetes Service (AKS) cluster to send data to Azure Monitor managed service for Prometheus. When you configure your AKS cluster to send data to Azure Monitor managed service for Prometheus, a containerized version of the [Azure Monitor agent](../agents/agents-overview.md) is installed with a metrics extension. You just need to specify the Azure Monitor workspace that the data should be sent to.
+
+> [!NOTE]
+> The process described here doesn't enable [Container insights](../containers/container-insights-overview.md) on the cluster even though the Azure Monitor agent installed in this process is the same one used by Container insights. See [Enable Container insights](../containers/container-insights-onboard.md) for different methods to enable Container insights on your cluster. See [Collect Prometheus metrics with Container insights](../containers/container-insights-prometheus.md) for details on adding Prometheus collection to a cluster that already has Container insights enabled.
+
+## Prerequisites
+
+- You must either have an [Azure Monitor workspace](azure-monitor-workspace-overview.md) or [create a new one](azure-monitor-workspace-overview.md#create-an-azure-monitor-workspace).
+- The cluster must use [managed identity authentication](../containers/container-insights-enable-aks.md#migrate-to-managed-identity-authentication).
+- The following resource providers must be registered in the subscription of the AKS cluster and the Azure Monitor Workspace.
+ - Microsoft.ContainerService
+ - Microsoft.Insights
+ - Microsoft.AlertsManagement
+
+## Enable Prometheus metric collection
+Use any of the following methods to install the Azure Monitor agent on your AKS cluster and send Prometheus metrics to your Azure Monitor workspace.
+
+### [Azure portal](#tab/azure-portal)
+
+1. Open the **Azure Monitor workspaces** menu in the Azure portal and select your workspace.
+2. Select **Managed Prometheus** to display a list of AKS clusters.
+3. Click **Configure** next to the cluster you want to enable.
+
+ :::image type="content" source="media/prometheus-metrics-enable/azure-monitor-workspace-configure-prometheus.png" lightbox="media/prometheus-metrics-enable/azure-monitor-workspace-configure-prometheus.png" alt-text="Screenshot of Azure Monitor workspace with Prometheus configuration.":::
+
+
+### [CLI](#tab/cli)
+
+#### Prerequisites
+
+- Register the `AKS-PrometheusAddonPreview` feature flag in the Azure Kubernetes clusters subscription with the following command in Azure CLI: `az feature register --namespace Microsoft.ContainerService --name AKS-PrometheusAddonPreview`.
+- The aks-preview extension needs to be installed using the command `az extension add --name aks-preview`. For more information on how to install a CLI extension, see [Use and manage extensions with the Azure CLI](/cli/azure/azure-cli-extensions-overview).
+- Azure CLI version 2.41.0 or higher is required for this feature. 
+
+#### Install metrics addon
+
+Use `az aks update` with the `--enable-azuremonitormetrics` option to install the metrics addon. Following are multiple options depending on the Azure Monitor workspace and Grafana workspace you want to use.
+
+
+**Create a new default Azure Monitor workspace.**<br>
+If no Azure Monitor Workspace is specified, then a default Azure Monitor Workspace will be created in the `DefaultRG-<cluster_region>` resource group with a name following the format `DefaultAzureMonitorWorkspace-<mapped_region>`.
+This Azure Monitor Workspace will be in the region specified in [Region mappings](#region-mappings).
+
+```azurecli
+az aks update --enable-azuremonitormetrics -n <cluster-name> -g <cluster-resource-group>
+```
+
+**Use an existing Azure Monitor workspace.**<br>
+If the Azure Monitor workspace is linked to one or more Grafana workspaces, then the data will be available in Grafana.
+
+```azurecli
+az aks update --enable-azuremonitormetrics -n <cluster-name> -g <cluster-resource-group> --azure-monitor-workspace-resource-id <workspace-name-resource-id>
+```
+
+**Use an existing Azure Monitor workspace and link with an existing Grafana workspace.**<br>
+This creates a link between the Azure Monitor workspace and the Grafana workspace.
+
+```azurecli
+az aks update --enable-azuremonitormetrics -n <cluster-name> -g <cluster-resource-group> --azure-monitor-workspace-resource-id <azure-monitor-workspace-name-resource-id> --grafana-resource-id <grafana-workspace-name-resource-id>
+```
+
+The output for each command will look similar to the following:
+
+```json
+"azureMonitorProfile": {
+ "metrics": {
+ "enabled": true,
+ "kubeStateMetrics": {
+ "metricAnnotationsAllowList": "",
+ "metricLabelsAllowlist": ""
+ }
+ }
+}
+```
+
+#### Optional parameters
+Following are optional parameters that you can use with the previous commands.
+
+- `--ksm-metric-annotations-allow-list` is a comma-separated list of Kubernetes annotations keys that will be used in the resource's labels metric. By default the metric contains only name and namespace labels. To include additional annotations, provide a list of resource names in their plural form and Kubernetes annotation keys you would like to allow for them. A single `*` can be provided per resource instead to allow any annotations, but that has severe performance implications.
+- `--ksm-metric-labels-allow-list` is a comma-separated list of additional Kubernetes label keys that will be used in the resource's labels metric. By default the metric contains only name and namespace labels. To include additional labels, provide a list of resource names in their plural form and Kubernetes label keys you would like to allow for them. A single `*` can be provided per resource instead to allow any labels, but that has severe performance implications. 
+
+**Use annotations and labels.**
+
+```azurecli
+az aks update --enable-azuremonitormetrics -n <cluster-name> -g <cluster-resource-group> --ksm-metric-labels-allow-list "namespaces=[k8s-label-1,k8s-label-n]" --ksm-metric-annotations-allow-list "pods=[k8s-annotation-1,k8s-annotation-n]"
+```
+
+The output will be similar to the following:
+
+```json
+ "azureMonitorProfile": {
+ "metrics": {
+ "enabled": true,
+ "kubeStateMetrics": {
+ "metricAnnotationsAllowList": "pods=[k8s-annotation-1,k8s-annotation-n]",
+ "metricLabelsAllowlist": "namespaces=[k8s-label-1,k8s-label-n]"
+ }
+ }
+ }
+```
+
+## [Resource Manager](#tab/resource-manager)
+
+### Prerequisites
+
+- Register the `AKS-PrometheusAddonPreview` feature flag in the Azure Kubernetes clusters subscription with the following command in Azure CLI: `az feature register --namespace Microsoft.ContainerService --name AKS-PrometheusAddonPreview`.
+- The Azure Monitor workspace and Azure Managed Grafana workspace must already be created.
+- The template needs to be deployed in the same resource group as the Azure Managed Grafana workspace.
+
+
+### Retrieve required values for Grafana resource
+From the **Overview** page for the Azure Managed Grafana instance in the Azure portal, select **JSON view**.
+
+ Copy the value of the `principalId` field for the `SystemAssigned` identity.
+
+```json
+"identity": {
+ "principalId": "00000000-0000-0000-0000-000000000000",
+ "tenantId": "00000000-0000-0000-0000-000000000000",
+ "type": "SystemAssigned"
+ },
+```
+
+If you're using an existing Azure Managed Grafana instance that has already been linked to an Azure Monitor workspace, then you need the list of Grafana integrations. Copy the value of the `azureMonitorWorkspaceIntegrations` field. If it doesn't exist, then the instance hasn't been linked with any Azure Monitor workspace.
+
+```json
+"properties": {
+ "grafanaIntegrations": {
+ "azureMonitorWorkspaceIntegrations": [
+ {
+ "azureMonitorWorkspaceResourceId": "full_resource_id_1"
+ },
+ {
+ "azureMonitorWorkspaceResourceId": "full_resource_id_2"
+ }
+ ]
+ }
+}
+```
+
+### Assign role to system identity
+The Azure Managed Grafana resource requires the `Monitoring Data Reader` role to read data from the Azure Monitor Workspace.
+
+1. From the **Access control (IAM)** page for the Azure Managed Grafana instance in the Azure portal, select **Add** and then **Add role assignment**.
+2. Select `Monitoring Data Reader`.
+3. Select **Managed identity** and then **Select members**.
+4. Select the **system-assigned managed identity** with the `principalId` from the Grafana resource.
+5. Click **Select** and then **Review+assign**.
+
+### Download and edit template and parameter file
+
+1. Download the template at [https://aka.ms/azureprometheus-enable-arm-template](https://aka.ms/azureprometheus-enable-arm-template) and save it as **existingClusterOnboarding.json**.
+2. Download the parameter file at [https://aka.ms/azureprometheus-enable-arm-template-parameters](https://aka.ms/azureprometheus-enable-arm-template-parameters) and save it as **existingClusterParam.json**.
+3. Edit the values in the parameter file.
+
+ | Parameter | Value |
+ |:|:|
+ | `azureMonitorWorkspaceResourceId` | Resource ID for the Azure Monitor workspace. Retrieve from the **JSON view** on the **Overview** page for the Azure Monitor workspace. |
+ | `azureMonitorWorkspaceLocation` | Location of the Azure Monitor workspace. Retrieve from the **JSON view** on the **Overview** page for the Azure Monitor workspace. |
+### Download and edit template and parameter file
+
+1. Download the template at [https://aka.ms/azureprometheus-enable-arm-template](https://aka.ms/azureprometheus-enable-arm-template) and save it as **existingClusterOnboarding.json**.
+2. Download the parameter file at [https://aka.ms/azureprometheus-enable-arm-template-parameters](https://aka.ms/azureprometheus-enable-arm-template-parameters) and save it as **existingClusterParam.json**.
+3. Edit the values in the parameter file.
+
+   | Parameter | Value |
+   |:---|:---|
+   | `azureMonitorWorkspaceResourceId` | Resource ID for the Azure Monitor workspace. Retrieve from the **JSON view** on the **Overview** page for the Azure Monitor workspace. |
+   | `azureMonitorWorkspaceLocation` | Location of the Azure Monitor workspace. Retrieve from the **JSON view** on the **Overview** page for the Azure Monitor workspace. |
+   | `clusterResourceId` | Resource ID for the AKS cluster. Retrieve from the **JSON view** on the **Overview** page for the cluster. |
+   | `clusterLocation` | Location of the AKS cluster. Retrieve from the **JSON view** on the **Overview** page for the cluster. |
+   | `metricLabelsAllowlist` | Comma-separated list of Kubernetes label keys that will be used in the resource's labels metric. |
+   | `metricAnnotationsAllowList` | Comma-separated list of Kubernetes annotation keys that will be used in the resource's labels metric. |
+   | `grafanaResourceId` | Resource ID for the managed Grafana instance. Retrieve from the **JSON view** on the **Overview** page for the Grafana instance. |
+   | `grafanaLocation` | Location for the managed Grafana instance. Retrieve from the **JSON view** on the **Overview** page for the Grafana instance. |
+   | `grafanaSku` | SKU for the managed Grafana instance. Retrieve from the **JSON view** on the **Overview** page for the Grafana instance. Use the **sku.name**. |
+
+4. Open the template file and update the `grafanaIntegrations` property at the end of the file with the values that you retrieved from the Grafana instance. This will be similar to the following:
+
+   ```json
+   {
+       "type": "Microsoft.Dashboard/grafana",
+       "apiVersion": "2022-08-01",
+       "name": "[split(parameters('grafanaResourceId'),'/')[8]]",
+       "sku": {
+           "name": "[parameters('grafanaSku')]"
+       },
+       "location": "[parameters('grafanaLocation')]",
+       "properties": {
+           "grafanaIntegrations": {
+               "azureMonitorWorkspaceIntegrations": [
+                   {
+                       "azureMonitorWorkspaceResourceId": "full_resource_id_1"
+                   },
+                   {
+                       "azureMonitorWorkspaceResourceId": "full_resource_id_2"
+                   },
+                   {
+                       "azureMonitorWorkspaceResourceId": "[parameters('azureMonitorWorkspaceResourceId')]"
+                   }
+               ]
+           }
+       }
+   }
+   ```
+
+In this JSON, `full_resource_id_1` and `full_resource_id_2` were already in the Azure Managed Grafana resource JSON, and they're added here to the ARM template. If you have no existing Grafana integrations, then don't include the entries for `full_resource_id_1` and `full_resource_id_2`.
+
+The final `azureMonitorWorkspaceResourceId` entry is already in the template and is used to link to the Azure Monitor workspace resource ID provided in the parameters file.
+
+### Deploy template
+
+Deploy the template with the parameter file using any valid method for deploying Resource Manager templates. See [Deploy the sample templates](../resource-manager-samples.md#deploy-the-sample-templates) for examples of different methods. A minimal Azure CLI sketch follows.
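+The following is a minimal sketch that deploys the edited template with its parameter file by using the Azure CLI. It assumes both files are in the current directory and that `<resource-group>` is the resource group that contains the Azure Managed Grafana workspace, per the prerequisites.
+
+```azurecli
+# Sketch: deploy the edited ARM template with its parameter file.
+az deployment group create \
+  --resource-group <resource-group> \
+  --template-file existingClusterOnboarding.json \
+  --parameters existingClusterParam.json
+```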
+## Verify deployment
+
+Run the following command to verify that the daemon set was deployed properly:
+
+```
+kubectl get ds ama-metrics-node --namespace=kube-system
+```
+
+The output should resemble the following:
+
+```
+User@aksuser:~$ kubectl get ds ama-metrics-node --namespace=kube-system
+NAME               DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
+ama-metrics-node   1         1         1       1            1           <none>          10h
+```
+
+Run the following command to verify that the replica set was deployed properly:
+
+```
+kubectl get rs --namespace=kube-system
+```
+
+The output should resemble the following:
+
+```
+User@aksuser:~$ kubectl get rs --namespace=kube-system
+NAME                         DESIRED   CURRENT   READY   AGE
+ama-metrics-5c974985b8       1         1         1       11h
+ama-metrics-ksm-5fcf8dffcd   1         1         1       11h
+```
+
+## Limitations
+
+- Ensure that you update the `kube-state-metrics` annotations and labels list with proper formatting. There's a limitation in Resource Manager template deployments that requires exact values in the `kube-state-metrics` pods. If the Kubernetes pod has any issues with malformed parameters and isn't running, then the feature won't work as expected.
+- A data collection rule and data collection endpoint are created with the name `MSPROM-<cluster-name>-<cluster-region>`. These names can't currently be modified.
+- You must get the existing Azure Monitor workspace integrations for a Grafana workspace and update the Resource Manager template with them; otherwise, the deployment will overwrite and remove the existing integrations from the Grafana workspace.
+- CPU and memory requests and limits can't be changed for the Container insights metrics addon. If changed, they'll be reconciled and replaced by the original values in a few seconds.
+- The metrics addon doesn't work on AKS clusters configured with an HTTP proxy.
+
+## Uninstall metrics addon
+Currently, the Azure CLI is the only option to remove the metrics addon and stop sending Prometheus metrics to Azure Monitor managed service for Prometheus.
+
+If you don't already have it, install the `aks-preview` extension with the following command. For more information on how to install a CLI extension, see [Use and manage extensions with the Azure CLI](/cli/azure/azure-cli-extensions-overview).
+
+```azurecli
+az extension add --name aks-preview
+```
+Use the following command to remove the agent from the cluster nodes and delete the recording rules created for the data being collected from the cluster. This doesn't remove the DCE, DCR, or the data already collected and stored in your Azure Monitor workspace.
+
+```azurecli
+az aks update --disable-azuremonitormetrics -n <cluster-name> -g <cluster-resource-group>
+```
+
+## Region mappings
+When you allow a default Azure Monitor workspace to be created when you install the metrics addon, it's created in the region listed in the following table.
+
+| AKS cluster region | Azure Monitor workspace region |
+|--|--|
+|australiacentral |eastus|
+|australiacentral2 |eastus|
+|australiaeast |eastus|
+|australiasoutheast |eastus|
+|brazilsouth |eastus|
+|canadacentral |eastus|
+|canadaeast |eastus|
+|centralus |centralus|
+|centralindia |centralindia|
+|eastasia |westeurope|
+|eastus |eastus|
+|eastus2 |eastus2|
+|francecentral |westeurope|
+|francesouth |westeurope|
+|japaneast |eastus|
+|japanwest |eastus|
+|koreacentral |westeurope|
+|koreasouth |westeurope|
+|northcentralus |eastus|
+|northeurope |westeurope|
+|southafricanorth |westeurope|
+|southafricawest |westeurope|
+|southcentralus |eastus|
+|southeastasia |westeurope|
+|southindia |centralindia|
+|uksouth |westeurope|
+|ukwest |westeurope|
+|westcentralus |eastus|
+|westeurope |westeurope|
+|westindia |centralindia|
+|westus |westus|
+|westus2 |westus2|
+|westus3 |westus|
+|norwayeast |westeurope|
+|norwaywest |westeurope|
+|switzerlandnorth |westeurope|
+|switzerlandwest |westeurope|
+|uaenorth |westeurope|
+|germanywestcentral |westeurope|
+|germanynorth |westeurope|
+|uaecentral |westeurope|
+|eastus2euap |eastus2euap|
+|centraluseuap |westeurope|
+|brazilsoutheast |eastus|
+|jioindiacentral |centralindia|
+|swedencentral |westeurope|
+|swedensouth |westeurope|
+|qatarcentral |westeurope|
+
+## Next steps
+
+- [See the default configuration for Prometheus metrics](prometheus-metrics-scrape-default.md).
+- [Customize Prometheus metric scraping for the cluster](prometheus-metrics-scrape-configuration.md). |
azure-monitor | Prometheus Metrics Multiple Workspaces | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-metrics-multiple-workspaces.md | Routing metrics to more Azure Monitor Workspaces can be done through the creatio
## Send same metrics to multiple Azure Monitor workspaces
-You can create multiple Data Collection Rules that point to the same Data Collection Endpoint for metrics to be sent to additional Azure Monitor Workspaces from the same Kubernetes cluster. Currently, this is only available through onboarding through Resource Manager templates. You can follow the [regular onboarding process](../containers/container-insights-prometheus-metrics-addon.md#enable-prometheus-metric-collection) and then edit the same Resource Manager templates to add additional DCRs for your additional Azure Monitor Workspaces. You'll need to edit the template to add an additional parameters for every additional Azure Monitor workspace, add another DCR for every additional Azure Monitor workspace, and add an additional Azure Monitor workspace integration for Grafana.
+You can create multiple Data Collection Rules that point to the same Data Collection Endpoint for metrics to be sent to additional Azure Monitor Workspaces from the same Kubernetes cluster. Currently, this is only available by onboarding through Resource Manager templates. You can follow the [regular onboarding process](prometheus-metrics-enable.md) and then edit the same Resource Manager templates to add additional DCRs for your additional Azure Monitor Workspaces. You'll need to edit the template to add additional parameters for every additional Azure Monitor workspace, add another DCR for every additional Azure Monitor workspace, and add an additional Azure Monitor workspace integration for Grafana.
- Add the following parameters:
```json
scrape_configs:
## Next steps
- [Learn more about Azure Monitor managed service for Prometheus](prometheus-metrics-overview.md).-- [Collect Prometheus metrics from AKS cluster](../containers/container-insights-prometheus-metrics-addon.md).+- [Collect Prometheus metrics from AKS cluster](prometheus-metrics-enable.md). |
azure-monitor | Prometheus Metrics Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-metrics-overview.md | Azure Monitor managed service for Prometheus is a component of [Azure Monitor Me
## Data sources
Azure Monitor managed service for Prometheus can currently collect data from any of the following data sources.
-- Azure Kubernetes service (AKS). [Configure the Azure Monitor managed service for Prometheus AKS add-on](../containers/container-insights-prometheus-metrics-addon.md) to scrape metrics from an AKS cluster.-- Any Kubernetes cluster running self-managed Prometheus using [remote-write](https://aka.ms/azureprometheus-promio-prw). In this configuration, metrics are collected by a local Prometheus server for each cluster and then consolidated in Azure Monitor managed service for Prometheus.+- Azure Kubernetes Service (AKS)
+- Any Kubernetes cluster running self-managed Prometheus using [remote-write](https://aka.ms/azureprometheus-promio-prw).
+## Enable
+The only requirement to enable Azure Monitor managed service for Prometheus is to create an [Azure Monitor workspace](azure-monitor-workspace-overview.md), which is where Prometheus metrics are stored. Once this workspace is created, you can onboard services that collect Prometheus metrics.
+
+- To collect Prometheus metrics from your AKS cluster without using Container insights, see [Collect Prometheus metrics from AKS cluster (preview)](prometheus-metrics-enable.md).
+- To add collection of Prometheus metrics to your cluster using Container insights, see [Collect Prometheus metrics with Container insights](../containers/container-insights-prometheus.md#send-data-to-azure-monitor-managed-service-for-prometheus).
+- To configure remote-write to collect data from your self-managed Prometheus server, see [Azure Monitor managed service for Prometheus remote write - managed identity (preview)](prometheus-remote-write-managed-identity.md).
## Grafana integration
The primary method for visualizing Prometheus metrics is [Azure Managed Grafana](../../managed-grafan#link-a-grafana-workspace) so that it can be used as a data source in a Grafana dashboard. You then have access to multiple prebuilt dashboards that use Prometheus metrics and the ability to create any number of custom dashboards.
The primary method for visualizing Prometheus metrics is [Azure Managed Grafana]
## Alerts
Azure Monitor managed service for Prometheus adds a new Prometheus alert type for creating alerts using PromQL queries. You can view fired and resolved Prometheus alerts in the Azure portal along with other alert types. Prometheus alerts are configured with the same [alert rules](https://aka.ms/azureprometheus-promio-alertrules) used by Prometheus.
For your AKS cluster, you can use a [set of predefined Prometheus alert rules]
- ## Limitations See [Azure Monitor service limits](../service-limits.md#prometheus-metrics) for performance related service limits for Azure Monitor managed service for Prometheus. Following are links to Prometheus documentation. ## Next steps +- [Enable Azure Monitor managed service for Prometheus](prometheus-metrics-enable.md). - [Collect Prometheus metrics for your AKS cluster](../containers/container-insights-prometheus-metrics-addon.md). - [Configure Prometheus alerting and recording rules groups](prometheus-rule-groups.md). - [Customize scraping of Prometheus metrics](prometheus-metrics-scrape-configuration.md). |
azure-monitor | Prometheus Metrics Scrape Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-metrics-scrape-configuration.md | -This article provides instructions on customizing metrics scraping for a Kubernetes cluster with the [metrics addon](../containers/container-insights-prometheus-metrics-addon.md) in Azure Monitor. +This article provides instructions on customizing metrics scraping for a Kubernetes cluster with the [metrics addon](prometheus-metrics-enable.md) in Azure Monitor. ## Configmaps |
azure-monitor | Prometheus Metrics Scrape Default | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-metrics-scrape-default.md | -This article lists the default targets, dashboards, and recording rules when you [configure Container insights to collect Prometheus metrics by enabling metrics-addon](../containers/container-insights-prometheus-metrics-addon.md) for any AKS cluster. +This article lists the default targets, dashboards, and recording rules when you [configure Prometheus metrics to be scraped from an AKS cluster](prometheus-metrics-enable.md) for any AKS cluster. ## Scrape frequency |
azure-monitor | Tutorial Metrics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/tutorial-metrics.md | Title: Tutorial - Analyze metrics for an Azure resource + Title: Analyze metrics for an Azure resource description: Learn how to analyze metrics for an Azure resource using metrics explorer in Azure Monitor Last updated 11/08/2021 -# Tutorial: Analyze metrics for an Azure resource +# Analyze metrics for an Azure resource Metrics are numerical values that are automatically collected at regular intervals and describe some aspect of a resource. For example, a metric might tell you the processor utilization of a virtual machine, the free space in a storage account, or the incoming traffic for a virtual network. Metrics explorer is a feature of Azure Monitor in the Azure portal that allows you to create charts from metric values, visually correlate trends, and investigate spikes and dips in metric values. Use the metrics explorer to plot charts from metrics created by your Azure resources and investigate their health and utilization. In this tutorial, you learn how to: In this tutorial, you learn how to: > * Modify the time range and granularity for the chart -Following is a video that shows a more extensive scenario than the procedure outlined in this article. If you are new to metrics, we suggest you read through this article first and then view the video to see more specifics. +Following is a video that shows a more extensive scenario than the procedure outlined in this tutorial. If you are new to metrics, we suggest you read through this article first and then view the video to see more specifics. > [!VIDEO https://www.microsoft.com/videoplayer/embed/RE4qO59] ## Prerequisites-To complete this tutorial you need the following: +To complete the steps in this tutorial, you need the following: - An Azure resource to monitor. You can use any resource in your Azure subscription that supports metrics. To determine whether a resource supports metrics, go to its menu in the Azure portal and verify that there's a **Metrics** option in the **Monitoring** section of the menu. |
azure-monitor | Tutorial Resource Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/tutorial-resource-logs.md | Title: Tutorial - Collect resource logs from an Azure resource
-description: Tutorial to configure diagnostic settings to send resource logs from an Azure resource io a Log Analytics workspace where they can be analyzed with a log query.
-+ Title: Collect resource logs from an Azure resource
+description: Learn how to configure diagnostic settings to send resource logs from an Azure resource to a Log Analytics workspace where they can be analyzed with a log query.
+ Last updated 11/08/2021
-# Tutorial: Collect and analyze resource logs from an Azure resource
+# Collect and analyze resource logs from an Azure resource
Resource logs provide insight into the detailed operation of an Azure resource and are useful for monitoring their health and availability. Azure resources generate resource logs automatically, but you must create a diagnostic setting to collect them. This tutorial takes you through the process of creating a diagnostic setting to send resource logs to a Log Analytics workspace where you can analyze them with log queries.
In this tutorial, you learn how to:
In this tutorial, you learn how to:
## Prerequisites
-To complete this tutorial you need the following:
+To complete the steps in this tutorial, you need the following:
- An Azure resource to monitor. You can use any resource in your Azure subscription that supports diagnostic settings. To determine whether a resource supports diagnostic settings, go to its menu in the Azure portal and verify that there's a **Diagnostic settings** option in the **Monitoring** section of the menu. |
azure-monitor | Insights Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/insights-overview.md |
+
+ Title: Azure Monitor Insights Overview
+description: Lists available Azure Monitor "Insights" and other Azure product integrations
+
+
+ Last updated : 10/15/2022
+
+
+# Azure Monitor Insights overview
+
+Some services have a curated monitoring experience. That is, Microsoft provides customized functionality meant to act as a starting point for monitoring those services. These experiences are collectively known as *curated visualizations*, with the larger, more complex of them being called *Insights*.
+
+The experiences collect and analyze a subset of available telemetry for a given service or set of services. Depending on the service, the experiences might also provide out-of-the-box alerting. They present the telemetry in a visual layout. The visualizations vary in size and scale.
+
+Some visualizations are considered part of Azure Monitor and follow the support and service level agreements for Azure. They're supported in all Azure regions where Azure Monitor is available. Other curated visualizations provide less functionality, might not scale, and might have different agreements. Some might be based solely on Azure Monitor Workbooks, while others might have an extensive custom experience.
+
+## Insights and curated visualizations
+
+The following table lists the available curated visualizations and information about them. **Most** of the list below can be found in the [Insights hub in the Azure portal](https://ms.portal.azure.com/#view/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/~/more). The table uses the same grouping as the portal.
+
+>[!NOTE]
+> Another type of older visualization called *monitoring solutions* is no longer in active development. The replacement technology is the Azure Monitor Insights, as mentioned here. We suggest you use the Insights and not deploy new instances of solutions. For more information on the solutions, see [Monitoring solutions in Azure Monitor](solutions.md).
+
+|Name with docs link| State | [Azure portal link](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/more)| Description |
+|:--|:--|:--|:--|
+|Compute||||
+ | [Azure VM Insights](/azure/azure-monitor/insights/vminsights-overview) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/virtualMachines) | Monitors your Azure VMs and Virtual Machine Scale Sets at scale. It analyzes the performance and health of your Windows and Linux VMs and monitors their processes and dependencies on other resources and external processes. |
+| [Azure Container Insights](/azure/azure-monitor/insights/container-insights-overview) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/containerInsights) | Monitors the performance of container workloads that are deployed to managed Kubernetes clusters hosted on Azure Kubernetes Service. It gives you performance visibility by collecting metrics from controllers, nodes, and containers that are available in Kubernetes through the Metrics API. Container logs are also collected. After you enable monitoring from Kubernetes clusters, these metrics and logs are automatically collected for you through a containerized version of the Log Analytics agent for Linux.
| +|Networking|||| + | [Azure Network Insights](../../network-watcher/network-insights-overview.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/networkInsights) | Provides a comprehensive view of health and metrics for all your network resources. The advanced search capability helps you identify resource dependencies, enabling scenarios like identifying resources that are hosting your website, by searching for your website name. | +|Storage|||| + | [Azure Storage Insights](/azure/azure-monitor/insights/storage-insights-overview) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/storageInsights) | Provides comprehensive monitoring of your Azure Storage accounts by delivering a unified view of your Azure Storage services performance, capacity, and availability. | +| [Azure Backup](../../backup/backup-azure-monitoring-use-azuremonitor.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_DataProtection/BackupCenterMenuBlade/backupReportsConfigure/menuId/backupReportsConfigure) | Provides built-in monitoring and alerting capabilities in a Recovery Services vault. | +|Databases|||| +| [Azure Cosmos DB Insights](../../cosmos-db/cosmosdb-insights-overview.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/cosmosDBInsights) | Provides a view of the overall performance, failures, capacity, and operational health of all your Azure Cosmos DB resources in a unified interactive experience. | +| [Azure Monitor for Azure Cache for Redis (preview)](../../azure-cache-for-redis/redis-cache-insights-overview.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/redisCacheInsights) | Provides a unified, interactive view of overall performance, failures, capacity, and operational health. | +| [Azure Data Explorer Insights](/azure/data-explorer/data-explorer-insights) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/adxClusterInsights) | Azure Data Explorer Insights provides comprehensive monitoring of your clusters by delivering a unified view of your cluster performance, operations, usage, and failures. | + | [Azure Monitor Log Analytics Workspace](../logs/log-analytics-workspace-insights-overview.md) | Preview | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/lawsInsights) | Log Analytics Workspace Insights (preview) provides comprehensive monitoring of your workspaces through a unified view of your workspace usage, performance, health, agent, queries, and change log. This article will help you understand how to onboard and use Log Analytics Workspace Insights (preview). | +|Security|||| + | [Azure Key Vault Insights (preview)](../../key-vault/key-vault-insights-overview.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/keyvaultsInsights) | Provides comprehensive monitoring of your key vaults by delivering a unified view of your Key Vault requests, performance, failures, and latency. 
|
+|Monitor||||
+ | [Azure Monitor Application Insights](../app/app-insights-overview.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/applicationsInsights) | Extensible application performance management service that monitors the availability, performance, and usage of your web applications whether they're hosted in the cloud or on-premises. It uses the powerful data analysis platform in Azure Monitor to provide you with deep insights into your application's operations. It enables you to diagnose errors without waiting for a user to report them. Application Insights includes connection points to various development tools and integrates with Visual Studio to support your DevOps processes. |
+| [Azure Activity Log Insights](../essentials/activity-log-insights.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_DataProtection/BackupCenterMenuBlade/backupReportsConfigure/menuId/backupReportsConfigure) | Provides a set of dashboards that monitor the changes to resources and resource groups in your subscription, based on activity log data. |
+ | [Azure Monitor for Resource Groups](resource-group-insights.md) | GA | No | Triage and diagnose any problems your individual resources encounter, while offering context for the health and performance of the resource group as a whole. |
+|Integration||||
+ | [Azure Service Bus Insights](../../service-bus-messaging/service-bus-insights.md) | Preview | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/serviceBusInsights) | Azure Service Bus Insights provide a view of the overall performance, failures, capacity, and operational health of all your Service Bus resources in a unified interactive experience. |
+| [Azure IoT Edge](../../iot-edge/how-to-explore-curated-visualizations.md) | GA | No | Visualize and explore metrics collected from the IoT Edge device right in the Azure portal by using Azure Monitor Workbooks-based public templates. The curated workbooks use built-in metrics from the IoT Edge runtime. These views don't need any metrics instrumentation from the workload modules. |
+|Workloads||||
+| [Azure SQL Insights (preview)](/azure/azure-sql/database/sql-insights-overview) | Preview | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/sqlWorkloadInsights) | A comprehensive interface for monitoring any product in the Azure SQL family. SQL Insights uses dynamic management views to expose the data you need to monitor health, diagnose problems, and tune performance. Note: If you're just setting up SQL monitoring, use SQL Insights instead of the SQL Analytics solution. |
+| [Azure Monitor for SAP solutions](../../virtual-machines/workloads/sap/monitor-sap-on-azure.md) | Preview | No | An Azure-native monitoring product for anyone running their SAP landscapes on Azure. It works with both SAP on Azure Virtual Machines and SAP on Azure Large Instances. Collects telemetry data from Azure infrastructure and databases in one central location and visually correlates the data for faster troubleshooting. You can monitor different components of an SAP landscape, such as Azure virtual machines (VMs), high-availability clusters, SAP HANA database, and SAP NetWeaver, by adding the corresponding provider for that component.
|
+|Other||||
+ | [Azure Virtual Desktop Insights](../../virtual-desktop/azure-monitor.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_WVD/WvdManagerMenuBlade/insights/menuId/insights) | Azure Virtual Desktop Insights is a dashboard built on Azure Monitor Workbooks that helps IT professionals understand their Azure Virtual Desktop environments. |
+ | [Azure Stack HCI Insights](/azure-stack/hci/manage/azure-stack-hci-insights) | Preview | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/azureStackHCIInsights) | Based on Azure Monitor Workbooks. Provides health, performance, and usage insights about registered Azure Stack HCI version 21H2 clusters that are connected to Azure and enrolled in monitoring. It stores its data in a Log Analytics workspace, which allows it to deliver powerful aggregation and filtering and analyze data trends over time. |
+|Not in Azure portal Insight hub||||
+| [Azure Monitor Workbooks for Azure Active Directory](../../active-directory/reports-monitoring/howto-use-azure-monitor-workbooks.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/Workbooks) | Azure Active Directory provides workbooks to understand the effect of your Conditional Access policies, troubleshoot sign-in failures, and identify legacy authentications. |
+| [Azure HDInsight](../../hdinsight/log-analytics-migration.md#insights) | Preview | No | An Azure Monitor workbook that collects important performance metrics from your HDInsight cluster and provides the visualizations and dashboards for most common scenarios. Gives a complete view of a single HDInsight cluster including resource utilization and application status.|
+
+## Next steps
+
+- Review some of the insights listed above to learn more about their functionality
+- Understand [what Azure Monitor can monitor](../monitor-reference.md) |
azure-monitor | Solutions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/solutions.md | -> Many monitoring solutions are no longer in active development. We suggest you check each solution to see if it has a replacement. We suggest you not deploy new instances of solutions that have other options, even if those solutions are still available. Many have been replaced by a [newer curated visualization or insight](../monitor-reference.md#insights-and-curated-visualizations). +> Many monitoring solutions are no longer in active development. We suggest you check each solution to see if it has a replacement. We suggest you not deploy new instances of solutions that have other options, even if those solutions are still available. Many have been replaced by a [newer curated visualization or insight](insights-overview.md). Monitoring solutions in Azure Monitor provide analysis of the operation of an Azure application or service. This article gives a brief overview of monitoring solutions in Azure and details on using and installing them. |
azure-monitor | Azure Data Explorer Monitor Proxy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/azure-data-explorer-monitor-proxy.md | - Title: Query data in Azure Monitor using Azure Data Explorer -description: Use Azure Data Explorer to perform cross product queries between Azure Data Explorer, Log Analytics workspaces and classic Application Insights applications in Azure Monitor. --- Previously updated : 03/28/2022-----# Query data in Azure Monitor using Azure Data Explorer --The Azure Data Explorer supports cross service queries between Azure Data Explorer, [Application Insights (AI)](../app/app-insights-overview.md), and [Log Analytics (LA)](./data-platform-logs.md). You can then query your Log Analytics/Application Insights workspace using Azure Data Explorer tools and refer to it in a cross service query. The article shows how to make a cross service query and how to add the Log Analytics/Application Insights workspace to Azure Data Explorer Web UI. --The Azure Data Explorer cross service queries flow: --## Add a Log Analytics/Application Insights workspace to Azure Data Explorer client tools --1. Verify your Azure Data Explorer native cluster (such as *help* cluster) appears on the left menu before you connect to your Log Analytics or Application Insights cluster. --- In the Azure Data Explorer UI (https://dataexplorer.azure.com/clusters), select **Add Cluster**. --2. In the **Add Cluster** window, add the URL of the LA or AI cluster. -- * For LA: `https://adx.monitor.azure.com/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/microsoft.operationalinsights/workspaces/<workspace-name>` - * For AI: `https://adx.monitor.azure.com//subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/microsoft.insights/components/<ai-app-name>` -- * Select **Add**. -- ->[!NOTE] ->* There are different endpoints for the following: ->* Azure Government- `adx.monitor.azure.us/` ->* Azure China- `adx.monitor.azure.cn/` ->* If you add a connection to more than one Log Analytics/Application insights workspace, give each a different name. Otherwise they'll all have the same name in the left pane. -- After the connection is established, your Log Analytics or Application Insights workspace will appear in the left pane with your native Azure Data Explorer cluster. -- -> [!NOTE] -> The number of Azure Monitor workspaces that can be mapped is limited to 100. --## Create queries using Azure Monitor data --You can run the queries using client tools that support Kusto queries, such as: Kusto Explorer, Azure Data Explorer Web UI, Jupyter Kqlmagic, Flow, PowerQuery, PowerShell, Lens, REST API. --> [!NOTE] -> The cross service query ability is used for data retrieval only. For more information, see [Function supportability](#function-supportability). --> [!TIP] -> * Database name should have the same name as the resource specified in the cross service query. Names are case sensitive. -> * In cross cluster queries, make sure that the naming of Application Insights apps and Log Analytics workspaces is correct. -> * If names contain special characters, they are replaced by URL encoding in the cross service query. -> * If names include characters that don't meet [KQL identifier name rules](/azure/data-explorer/kusto/query/schema-entities/entity-names), they are replaced by the dash **-** character. 
--### Direct query on your Log Analytics or Application Insights workspaces from Azure Data Explorer client tools --Run queries on your Log Analytics or Application Insights workspaces. Verify that your workspace is selected in the left pane. - -```kusto -Perf | take 10 // Demonstrate cross service query on the Log Analytics workspace -``` ---### Cross query of your Log Analytics or Application Insights and the Azure Data Explorer native cluster --When you run cross cluster service queries, verify your Azure Data Explorer native cluster is selected in the left pane. The following examples demonstrate combining Azure Data Explorer cluster tables [using union](/azure/data-explorer/kusto/query/unionoperator) with Log Analytics workspace. --```kusto -union StormEvents, cluster('https://adx.monitor.azure.com/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/microsoft.operationalinsights/workspaces/<workspace-name>').database('<workspace-name>').Perf -| take 10 -``` --```kusto -let CL1 = 'https://adx.monitor.azure.com/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/microsoft.operationalinsights/workspaces/<workspace-name>'; -union <Azure Data Explorer table>, cluster(CL1).database(<workspace-name>).<table name> -``` --->[!TIP] ->* Using the [`join` operator](/azure/data-explorer/kusto/query/joinoperator), instead of union, may require a [`hint`](/azure/data-explorer/kusto/query/joinoperator#join-hints) to run it on an Azure Data Explorer native cluster. --### Join data from an Azure Data Explorer cluster in one tenant with an Azure Monitor resource in another --Cross-tenant queries between the services are not supported. You are signed in to a single tenant for running the query spanning both resources. --If the Azure Data Explorer resource is in Tenant 'A' and Log Analytics workspace is in Tenant 'B' use one of the following two methods: --1. Azure Data Explorer allows you to add roles for principals in different tenants. Add your user ID in Tenant 'B' as an authorized user on the Azure Data Explorer cluster. Validate the *['TrustedExternalTenant'](/powershell/module/az.kusto/update-azkustocluster)* property on the Azure Data Explorer cluster contains Tenant 'B'. Run the cross-query fully in Tenant 'B'. --2. Use [Lighthouse](../../lighthouse/index.yml) to project the Azure Monitor resource into Tenant 'A'. -### Connect to Azure Data Explorer clusters from different tenants --Kusto Explorer automatically signs you into the tenant to which the user account originally belongs. To access resources in other tenants with the same user account, the `tenantId` has to be explicitly specified in the connection string: -`Data Source=https://adx.monitor.azure.com/subscriptions/SubscriptionId/resourcegroups/ResourceGroupName;Initial Catalog=NetDefaultDB;AAD Federated Security=True;Authority ID=`**TenantId** --## Function supportability --The Azure Data Explorer cross service queries support functions for both Application Insights and Log Analytics. -This capability enables cross-cluster queries to reference an Azure Monitor tabular function directly. -The following commands are supported with the cross service query: --* `.show functions` -* `.show function {FunctionName}` -* `.show database {DatabaseName} schema as json` --The following image depicts an example of querying a tabular function from the Azure Data Explorer Web UI. -To use the function, run the name in the Query window. 
---## Additional syntax examples --The following syntax options are available when calling the Log Analytics or Application Insights clusters: --|Syntax Description |Application Insights |Log Analytics | -|-||| -| Database within a cluster that contains only the defined resource in this subscription (**recommended for cross cluster queries**) | cluster(`https://adx.monitor.azure.com/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/microsoft.insights/components/<ai-app-name>').database('<ai-app-name>`) | cluster(`https://adx.monitor.azure.com/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/microsoft.operationalinsights/workspaces/<workspace-name>').database('<workspace-name>`) | -| Cluster that contains all apps/workspaces in this subscription | cluster(`https://adx.monitor.azure.com/subscriptions/<subscription-id>`) | cluster(`https://adx.monitor.azure.com/subscriptions/<subscription-id>`) | -|Cluster that contains all apps/workspaces in the subscription and are members of this resource group | cluster(`https://adx.monitor.azure.com/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>`) | cluster(`https://adx.monitor.azure.com/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>`) | -|Cluster that contains only the defined resource in this subscription | cluster(`https://adx.monitor.azure.com/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/microsoft.insights/components/<ai-app-name>`) | cluster(`https://adx.monitor.azure.com/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/microsoft.operationalinsights/workspaces/<workspace-name>`) | -|For Endpoints in the UsGov | cluster(`https://adx.monitor.azure.us/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/microsoft.operationalinsights/workspaces/<workspace-name>`)| - |For Endpoints in the China 21Vianet | cluster(`https://adx.monitor.azure.us/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/microsoft.operationalinsights/workspaces/<workspace-name>`) | --## Next steps --- Read more about the [data structure of Log Analytics workspaces and Application Insights](data-platform-logs.md).-- Learn to [write queries in Azure Data Explorer](/azure/data-explorer/write-queries).-- |
azure-monitor | Data Platform Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/data-platform-logs.md | The following table describes some of the ways that you can use Azure Monitor Lo | **Analyze** | Use [Log Analytics](./log-analytics-tutorial.md) in the Azure portal to write [log queries](./log-query-overview.md) and interactively analyze log data by using a powerful analysis engine. | | **Alert** | Configure a [log alert rule](../alerts/alerts-log.md) that sends a notification or takes [automated action](../alerts/action-groups.md) when the results of the query match a particular result. | | **Visualize** | Pin query results rendered as tables or charts to an [Azure dashboard](../../azure-portal/azure-portal-dashboards.md).<br>Create a [workbook](../visualize/workbooks-overview.md) to combine with multiple sets of data in an interactive report. <br>Export the results of a query to [Power BI](./log-powerbi.md) to use different visualizations and share with users outside Azure.<br>Export the results of a query to [Grafana](../visualize/grafana-plugin.md) to use its dashboarding and combine with other data sources.|-| **Get insights** | Logs support [insights](../monitor-reference.md#insights-and-curated-visualizations) that provide a customized monitoring experience for particular applications and services. | +| **Get insights** | Logs support [insights](../insights/insights-overview.md) that provide a customized monitoring experience for particular applications and services. | | **Retrieve** | Access log query results from a:<ul><li>Command line via the [Azure CLI](/cli/azure/monitor/log-analytics) or [Azure PowerShell cmdlets](/powershell/module/az.operationalinsights).</li><li>Custom app via the [REST API](https://dev.loganalytics.io/) or client library for [.NET](/dotnet/api/overview/azure/Monitor.Query-readme), [Java](/java/api/overview/azure/monitor-query-readme), [JavaScript](/javascript/api/overview/azure/monitor-query-readme), or [Python](/python/api/overview/azure/monitor-query-readme).</li></ul> | | **Export** | Configure [automated export of log data](./logs-data-export.md) to an Azure storage account or Azure Event Hubs.<br>Build a workflow to retrieve log data and copy it to an external location by using [Azure Logic Apps](./logicapp-flow-connector.md). | |
azure-monitor | Monitor Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/monitor-reference.md |
-## Insights and curated visualizations
+Azure Monitor data is collected and stored based on resource provider namespaces. Each resource in Azure has a unique ID. The resource provider namespace is part of all unique IDs. For example, a key vault resource ID would be similar to `/subscriptions/d03b04c7-d1d4-eeee-aaaa-87b6fcb38b38/resourceGroups/KeyVaults/providers/Microsoft.KeyVault/vaults/mysafekeys`. *Microsoft.KeyVault* is the resource provider namespace. *Microsoft.KeyVault/vaults/* is the resource provider.
-Some services have a curated monitoring experience. That is, Microsoft provides customized functionality meant to act as a starting point for monitoring those services. These experiences are collectively known as *curated visualizations* with the larger more complex of them being called *Insights*.
+For a list of Azure resource provider namespaces, see [Resource providers for Azure services](/azure/azure-resource-manager/management/azure-services-resource-providers).
-The experiences collect and analyze a subset of logs and metrics. Depending on the service, they might also provide out-of-the-box alerting. They present this telemetry in a visual layout. The visualizations vary in size and scale.
+For a list of resource providers that support Azure Monitor:
-Some visualizations are considered part of Azure Monitor and follow the support and service level agreements for Azure. They're supported in all Azure regions where Azure Monitor is available. Other curated visualizations provide less functionality, might not scale, and might have different agreements. Some might be based solely on Azure Monitor Workbooks, while others might have an extensive custom experience.
+- **Metrics** - See [Supported metrics in Azure Monitor](essentials/metrics-supported.md).
+- **Metric alerts** - See [Supported resources for metric alerts in Azure Monitor](/alerts/alerts-metric-near-real-time.md).
+- **Prometheus metrics** - See [TBD](essentials/FILL ME IN.md).
+- **Resource logs** - See [Supported categories for Azure Monitor resource logs](/essentials/resource-logs-categories.md).
+- **Activity log** - All entries in the activity log are available for query, alerting, and routing to the Azure Monitor Logs store, regardless of resource provider.
-The following table lists the available curated visualizations.
+## Services that require agents
->[!NOTE]
-> Another type of older visualization called *monitoring solutions* is no longer in active development. The replacement technology is the Azure Monitor Insights, as mentioned. We suggest you use the Insights and not deploy new instances of solutions. For more information on the solutions, see [Monitoring solutions in Azure Monitor](./insights/solutions.md).
+Azure Monitor can't see inside a service running its own application, operating system, or container. That type of service requires one or more agents to be installed. The agents run alongside the service to collect metrics, logs, traces, and changes, and forward them to Azure Monitor. The following services require agents for this reason.
-|Name with docs link| State | [Azure portal link](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/more)| Description | -|:--|:--|:--|:--| -| [Azure Monitor Workbooks for Azure Active Directory](../active-directory/reports-monitoring/howto-use-azure-monitor-workbooks.md) | General availability (GA) | [Yes](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/Workbooks) | Azure Active Directory provides workbooks to understand the effect of your Conditional Access policies, troubleshoot sign-in failures, and identify legacy authentications. | -| [Azure Backup](../backup/backup-azure-monitoring-use-azuremonitor.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_DataProtection/BackupCenterMenuBlade/backupReportsConfigure/menuId/backupReportsConfigure) | Provides built-in monitoring and alerting capabilities in a Recovery Services vault. | -| [Azure Monitor for Azure Cache for Redis (preview)](../azure-cache-for-redis/redis-cache-insights-overview.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/redisCacheInsights) | Provides a unified, interactive view of overall performance, failures, capacity, and operational health. | -| [Azure Cosmos DB Insights](../cosmos-db/cosmosdb-insights-overview.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/cosmosDBInsights) | Provides a view of the overall performance, failures, capacity, and operational health of all your Azure Cosmos DB resources in a unified interactive experience. | -| [Azure Container Insights](/azure/azure-monitor/insights/container-insights-overview) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/containerInsights) | Monitors the performance of container workloads that are deployed to managed Kubernetes clusters hosted on Azure Kubernetes Service. It gives you performance visibility by collecting metrics from controllers, nodes, and containers that are available in Kubernetes through the Metrics API. Container logs are also collected. After you enable monitoring from Kubernetes clusters, these metrics and logs are automatically collected for you through a containerized version of the Log Analytics agent for Linux. | -| [Azure Data Explorer Insights](/azure/data-explorer/data-explorer-insights) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/adxClusterInsights) | Azure Data Explorer Insights provides comprehensive monitoring of your clusters by delivering a unified view of your cluster performance, operations, usage, and failures. | -| [Azure HDInsight (preview)](../hdinsight/log-analytics-migration.md#insights) | Preview | No | An Azure Monitor workbook that collects important performance metrics from your HDInsight cluster and provides the visualizations and dashboards for most common scenarios. Gives a complete view of a single HDInsight cluster including resource utilization and application status.| - | [Azure IoT Edge](../iot-edge/how-to-explore-curated-visualizations.md) | GA | No | Visualize and explore metrics collected from the IoT Edge device right in the Azure portal by using Azure Monitor Workbooks-based public templates. The curated workbooks use built-in metrics from the IoT Edge runtime. These views don't need any metrics instrumentation from the workload modules. 
| - | [Azure Key Vault Insights (preview)](../key-vault/key-vault-insights-overview.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/keyvaultsInsights) | Provides comprehensive monitoring of your key vaults by delivering a unified view of your Key Vault requests, performance, failures, and latency. | - | [Azure Monitor Application Insights](./app/app-insights-overview.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/applicationsInsights) | Extensible application performance management service that monitors the availability, performance, and usage of your web applications whether they're hosted in the cloud or on-premises. It uses the powerful data analysis platform in Azure Monitor to provide you with deep insights into your application's operations. It enables you to diagnose errors without waiting for a user to report them. Application Insights includes connection points to various development tools and integrates with Visual Studio to support your DevOps processes. | - | [Azure Monitor Log Analytics Workspace](./logs/log-analytics-workspace-insights-overview.md) | Preview | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/lawsInsights) | Log Analytics Workspace Insights (preview) provides comprehensive monitoring of your workspaces through a unified view of your workspace usage, performance, health, agent, queries, and change log. This article will help you understand how to onboard and use Log Analytics Workspace Insights (preview). | - | [Azure Service Bus Insights](../service-bus-messaging/service-bus-insights.md) | Preview | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/serviceBusInsights) | Azure Service Bus Insights provide a view of the overall performance, failures, capacity, and operational health of all your Service Bus resources in a unified interactive experience. | - | [Azure SQL Insights (preview)](/azure/azure-sql/database/sql-insights-overview) | Preview | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/sqlWorkloadInsights) | A comprehensive interface for monitoring any product in the Azure SQL family. SQL Insights uses dynamic management views to expose the data you need to monitor health, diagnose problems, and tune performance. Note: If you're just setting up SQL monitoring, use SQL Insights instead of the SQL Analytics solution. | - | [Azure Storage Insights](/azure/azure-monitor/insights/storage-insights-overview) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/storageInsights) | Provides comprehensive monitoring of your Azure Storage accounts by delivering a unified view of your Azure Storage services performance, capacity, and availability. | - | [Azure Network Insights](../network-watcher/network-insights-overview.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/networkInsights) | Provides a comprehensive view of health and metrics for all your network resources. The advanced search capability helps you identify resource dependencies, enabling scenarios like identifying resources that are hosting your website, by simply searching for your website name. 
|
- | [Azure Monitor for Resource Groups](./insights/resource-group-insights.md) | GA | No | Triage and diagnose any problems your individual resources encounter, while offering context for the health and performance of the resource group as a whole. |
- | [Azure Monitor SAP](../virtual-machines/workloads/sap/monitor-sap-on-azure.md) | GA | No | An Azure-native monitoring product for anyone running their SAP landscapes on Azure. It works with both SAP on Azure Virtual Machines and SAP on Azure Large Instances. Collects telemetry data from Azure infrastructure and databases in one central location and visually correlates the data for faster troubleshooting. You can monitor different components of an SAP landscape, such as Azure virtual machines (VMs), high-availability clusters, SAP HANA database, and SAP NetWeaver, by adding the corresponding provider for that component. |
- | [Azure Stack HCI Insights](/azure-stack/hci/manage/azure-stack-hci-insights) | Preview | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/azureStackHCIInsights) | Based on Azure Monitor Workbooks. Provides health, performance, and usage insights about registered Azure Stack HCI version 21H2 clusters that are connected to Azure and enrolled in monitoring. It stores its data in a Log Analytics workspace, which allows it to deliver powerful aggregation and filtering and analyze data trends over time. |
- | [Azure VM Insights](/azure/azure-monitor/insights/vminsights-overview) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/virtualMachines) | Monitors your Azure VMs and virtual machine scale sets at scale. It analyzes the performance and health of your Windows and Linux VMs and monitors their processes and dependencies on other resources and external processes. |
- | [Azure Virtual Desktop Insights](../virtual-desktop/azure-monitor.md) | GA | [Yes](https://portal.azure.com/#blade/Microsoft_Azure_WVD/WvdManagerMenuBlade/insights/menuId/insights) | Azure Virtual Desktop Insights is a dashboard built on Azure Monitor Workbooks that helps IT professionals understand their Azure Virtual Desktop environments. |
+- [Azure Cloud Services](../cloud-services-extended-support/index.yml)
+- [Azure Virtual Machines](../virtual-machines/index.yml)
+- [Azure Virtual Machine Scale Sets](../virtual-machine-scale-sets/index.yml)
+- [Azure Service Fabric](../service-fabric/index.yml)
+
+In addition, applications also require either the Application Insights SDK or auto-instrumentation (via an agent) to collect information and write it to the Azure Monitor data platform.
+
+## Services with Insights
+
+Some services have curated monitoring experiences called "insights". Insights are meant to be a starting point for monitoring a service or set of services. Some insights may also automatically pull additional data that's not captured or stored in Azure Monitor. For more information on monitoring insights, see [Insights Overview](insights/insights-overview.md).
## Product integrations
-The other services and older monitoring solutions in the following table store their data in a Log Analytics workspace so that it can be analyzed with other log data collected by Azure Monitor.
+The services and [older monitoring solutions](insights/solutions.md) in the following table store their data in Azure Monitor Logs so that it can be analyzed with other log data collected by Azure Monitor.
| Product/Service | Description | |:|:| The other services and older monitoring solutions in the following table store t | [Microsoft Teams Rooms](/microsoftteams/room-systems/azure-monitor-deploy) | Integrated, end-to-end management of Microsoft Teams Rooms devices. | | [Visual Studio App Center](/appcenter/) | Build, test, and distribute applications and then monitor their status and usage. See [Start analyzing your mobile app with App Center and Application Insights](app/mobile-center-quickstart.md). | | Windows | [Windows Update Compliance](/windows/deployment/update/update-compliance-get-started) - Assess your Windows desktop upgrades.<br>[Desktop Analytics](/configmgr/desktop-analytics/overview) - Integrates with Configuration Manager to provide insight and intelligence to make more informed decisions about the update readiness of your Windows clients. |-| **The following solutions also integrate with parts of Azure Monitor. Note that solutions are no longer under active development. Use [Insights](#insights-and-curated-visualizations) instead.** | | +| **The following solutions also integrate with parts of Azure Monitor. Note that solutions, which are based on Azure Monitor Logs and Log Analytics, are no longer under active development. Use [Insights](insights/insights-overview.md) instead.** | | | Network - [Network Performance Monitor solution](insights/network-performance-monitor.md) | | Network - [Azure Application Gateway solution](insights/azure-networking-analytics.md#azure-application-gateway-analytics) | . | [Office 365 solution](insights/solution-office-365.md) | Monitor your Office 365 environment. Updated version with improved onboarding available through Microsoft Sentinel. | Azure Monitor can collect data from resources outside of Azure by using the meth | Virtual machines | Use agents to collect data from the guest operating system of virtual machines in other cloud environments or on-premises. See [Overview of Azure Monitor agents](agents/agents-overview.md). | | REST API Client | Separate APIs are available to write data to Azure Monitor Logs and Metrics from any REST API client. See [Send log data to Azure Monitor with the HTTP Data Collector API](logs/data-collector-api.md) for Logs. See [Send custom metrics for an Azure resource to the Azure Monitor metric store by using a REST API](essentials/metrics-store-custom-rest-api.md) for Metrics. | -## Azure supported services --The following table lists Azure services and the data they collect into Azure Monitor. 
--- **Metrics**: The service automatically collects metrics into Azure Monitor Metrics.-- **Logs**: The service supports diagnostic settings that can send metrics and platform logs into Azure Monitor Logs for analysis in Log Analytics.-- **Insight**: An insight is available that provides a customized monitoring experience for the service.--| Service | Resource provider namespace | Has metrics | Has logs | Insight | Notes -|||-|--|-|--| - | [Azure Active Directory Domain Services](../active-directory-domain-services/index.yml) | Microsoft.AAD/DomainServices | No | [**Yes**](./essentials/resource-logs-categories.md#microsoftaaddomainservices) | | | - | [Azure Active Directory](../active-directory/index.yml) | No | No | [Azure Monitor Workbooks for Azure Active Directory](../active-directory/reports-monitoring/howto-use-azure-monitor-workbooks.md) | | - | [Azure Analysis Services](../analysis-services/index.yml) | Microsoft.AnalysisServices/servers | [**Yes**](./essentials/metrics-supported.md#microsoftanalysisservicesservers) | [**Yes**](./essentials/resource-logs-categories.md#microsoftanalysisservicesservers) | | | - | [Azure API Management](../api-management/index.yml) | Microsoft.ApiManagement/service | [**Yes**](./essentials/metrics-supported.md#microsoftapimanagementservice) | [**Yes**](./essentials/resource-logs-categories.md#microsoftapimanagementservice) | | | - | [Azure App Configuration](../azure-app-configuration/index.yml) | Microsoft.AppConfiguration/configurationStores | [**Yes**](./essentials/metrics-supported.md#microsoftappconfigurationconfigurationstores) | [**Yes**](./essentials/resource-logs-categories.md#microsoftappconfigurationconfigurationstores) | | | - | [Azure Spring Apps](../spring-apps/overview.md) | Microsoft.AppPlatform/Spring | [**Yes**](./essentials/metrics-supported.md#microsoftappplatformspring) | [**Yes**](./essentials/resource-logs-categories.md#microsoftappplatformspring) | | | - | [Azure Attestation Service](../attestation/overview.md) | Microsoft.Attestation/attestationProviders | No | [**Yes**](./essentials/resource-logs-categories.md#microsoftattestationattestationproviders) | | | - | [Azure Automation](../automation/index.yml) | Microsoft.Automation/automationAccounts | [**Yes**](./essentials/metrics-supported.md#microsoftautomationautomationaccounts) | [**Yes**](./essentials/resource-logs-categories.md#microsoftautomationautomationaccounts) | | | - | [Azure VMware Solution](../azure-vmware/index.yml) | Microsoft.AVS/privateClouds | [**Yes**](./essentials/metrics-supported.md) | [**Yes**](./essentials/resource-logs-categories.md) | | | - | [Azure Batch](../batch/index.yml) | Microsoft.Batch/batchAccounts | [**Yes**](./essentials/metrics-supported.md#microsoftbatchbatchaccounts) | [**Yes**](./essentials/resource-logs-categories.md#microsoftbatchbatchaccounts) | | | - | [Azure Batch](../batch/index.yml) | Microsoft.BatchAI/workspaces | No | No | | | - | [Azure Cognitive Services- Bing Search API](../cognitive-services/bing-web-search/index.yml) | Microsoft.Bing/accounts | [**Yes**](./essentials/metrics-supported.md#microsoftmapsaccounts) | No | | | - | [Azure Blockchain Service](../blockchain/workbench/index.yml) | Microsoft.Blockchain/blockchainMembers | [**Yes**](./essentials/metrics-supported.md) | [**Yes**](./essentials/resource-logs-categories.md) | | | - | [Azure Blockchain Service](../blockchain/workbench/index.yml) | Microsoft.Blockchain/cordaMembers | No | [**Yes**](./essentials/resource-logs-categories.md) | | | - | [Azure Bot 
Service](/azure/bot-service/) | Microsoft.BotService/botServices | [**Yes**](./essentials/metrics-supported.md#microsoftbotservicebotservices) | [**Yes**](./essentials/resource-logs-categories.md#microsoftbotservicebotservices) | | | - | [Azure Cache for Redis](../azure-cache-for-redis/index.yml) | Microsoft.Cache/Redis | [**Yes**](./essentials/metrics-supported.md) | [**Yes**](./essentials/resource-logs-categories.md) | [Azure Monitor for Azure Cache for Redis (preview)](../azure-cache-for-redis/redis-cache-insights-overview.md) | | - | [Azure Cache for Redis](../azure-cache-for-redis/index.yml) | Microsoft.Cache/redisEnterprise | [**Yes**](./essentials/metrics-supported.md#microsoftcacheredisenterprise) | No | [Azure Monitor for Azure Cache for Redis (preview)](../azure-cache-for-redis/redis-cache-insights-overview.md) | | - | [Azure Content Delivery Network](../cdn/index.yml) | Microsoft.Cdn/CdnWebApplicationFirewallPolicies | [**Yes**](./essentials/metrics-supported.md#microsoftcdncdnwebapplicationfirewallpolicies) | [**Yes**](./essentials/resource-logs-categories.md#microsoftcdncdnwebapplicationfirewallpolicies) | | | - | [Azure Content Delivery Network](../cdn/index.yml) | Microsoft.Cdn/profiles | [**Yes**](./essentials/metrics-supported.md#microsoftcdnprofiles) | [**Yes**](./essentials/resource-logs-categories.md#microsoftcdnprofiles) | | | - | [Azure Content Delivery Network](../cdn/index.yml) | Microsoft.Cdn/profiles/endpoints | No | [**Yes**](./essentials/resource-logs-categories.md#microsoftcdnprofilesendpoints) | | | - | [Azure Virtual Machines - Classic](../virtual-machines/index.yml) | Microsoft.ClassicCompute/domainNames/slots/roles | [**Yes**](./essentials/metrics-supported.md#microsoftclassiccomputedomainnamesslotsroles) | No | [VM Insights](/azure/azure-monitor/insights/vminsights-overview) | | - | [Azure Virtual Machines - Classic](../virtual-machines/index.yml) | Microsoft.ClassicCompute/virtualMachines | [**Yes**](./essentials/metrics-supported.md#microsoftclassiccomputevirtualmachines) | No | | | - | [Azure Virtual Network (Classic)](../virtual-network/network-security-groups-overview.md) | Microsoft.ClassicNetwork/networkSecurityGroups | No | [**Yes**](./essentials/resource-logs-categories.md#microsoftclassicnetworknetworksecuritygroups) | | | - | [Azure Storage (Classic)](../storage/index.yml) | Microsoft.ClassicStorage/storageAccounts | [**Yes**](./essentials/metrics-supported.md#microsoftclassicstoragestorageaccounts) | No | [Storage Insights](/azure/azure-monitor/insights/storage-insights-overview) | | - | [Azure Blob Storage (Classic)](../storage/blobs/index.yml) | Microsoft.ClassicStorage/storageAccounts/blobServices | [**Yes**](./essentials/metrics-supported.md#microsoftclassicstoragestorageaccountsblobservices) | No | [Storage Insights](/azure/azure-monitor/insights/storage-insights-overview) | | - | [Azure Files (Classic)](../storage/files/index.yml) | Microsoft.ClassicStorage/storageAccounts/fileServices | [**Yes**](./essentials/metrics-supported.md#microsoftclassicstoragestorageaccountsfileservices) | No | [Storage Insights](/azure/azure-monitor/insights/storage-insights-overview) | | - | [Azure Queue Storage (Classic)](../storage/queues/index.yml) | Microsoft.ClassicStorage/storageAccounts/queueServices | [**Yes**](./essentials/metrics-supported.md#microsoftclassicstoragestorageaccountsqueueservices) | No | [Storage Insights](/azure/azure-monitor/insights/storage-insights-overview) | | - | [Azure Table Storage (Classic)](../storage/tables/index.yml) | 
Microsoft.ClassicStorage/storageAccounts/tableServices | [**Yes**](./essentials/metrics-supported.md#microsoftclassicstoragestorageaccountstableservices) | No | [Storage Insights](/azure/azure-monitor/insights/storage-insights-overview) | | - | Microsoft Cloud Test Platform | Microsoft.Cloudtest/hostedpools | [**Yes**](./essentials/metrics-supported.md) | No | | | - | Microsoft Cloud Test Platform | Microsoft.Cloudtest/pools | [**Yes**](./essentials/metrics-supported.md) | No | | | - | [Cray ClusterStor in Azure](https://azure.microsoft.com/blog/supercomputing-in-the-cloud-announcing-three-new-cray-in-azure-offers/) | Microsoft.ClusterStor/nodes | [**Yes**](./essentials/metrics-supported.md) | No | | | - | [Azure Cognitive Services](../cognitive-services/index.yml) | Microsoft.CognitiveServices/accounts | [**Yes**](./essentials/metrics-supported.md#microsoftcognitiveservicesaccounts) | [**Yes**](./essentials/resource-logs-categories.md#microsoftcognitiveservicesaccounts) | | | - | [Azure Communication Services](../communication-services/index.yml) | Microsoft.Communication/CommunicationServices | [**Yes**](./essentials/metrics-supported.md) | [**Yes**](./essentials/resource-logs-categories.md) | | | - | [Azure Cloud Services](../cloud-services-extended-support/index.yml) | Microsoft.Compute/cloudServices | [**Yes**](./essentials/metrics-supported.md#microsoftcomputecloudservices) | No | | Agent required to monitor guest operating system and workflows.| - | [Azure Cloud Services](../cloud-services-extended-support/index.yml) | Microsoft.Compute/cloudServices/roles | [**Yes**](./essentials/metrics-supported.md#microsoftcomputecloudservicesroles) | No | | Agent required to monitor guest operating system and workflows.| - | [Azure Virtual Machines](../virtual-machines/index.yml)<br />[Azure Virtual Machine Scale Sets](../virtual-machine-scale-sets/index.yml) | Microsoft.Compute/disks | [**Yes**](./essentials/metrics-supported.md) | No | [VM Insights](/azure/azure-monitor/insights/vminsights-overview) | | - | [Azure Virtual Machines](../virtual-machines/index.yml)<br />[Azure Virtual Machine Scale Sets](../virtual-machine-scale-sets/index.yml) | Microsoft.Compute/virtualMachines | [**Yes**](./essentials/metrics-supported.md#microsoftcomputevirtualmachines) | No | [VM Insights](/azure/azure-monitor/insights/vminsights-overview) | Agent required to monitor guest operating system and workflows.| - | [Azure Virtual Machines](../virtual-machines/index.yml)<br />[Azure Virtual Machine Scale Sets](../virtual-machine-scale-sets/index.yml) | Microsoft.Compute/virtualMachineScaleSets | [**Yes**](./essentials/metrics-supported.md#microsoftcomputevirtualmachinescalesets) | No | [VM Insights](/azure/azure-monitor/insights/vminsights-overview) | Agent required to monitor guest operating system and workflows.| - | [Azure Virtual Machines](../virtual-machines/index.yml)<br />[Azure Virtual Machine Scale Sets](../virtual-machine-scale-sets/index.yml) | Microsoft.Compute/virtualMachineScaleSets/virtualMachines | [**Yes**](./essentials/metrics-supported.md#microsoftcomputevirtualmachinescalesetsvirtualmachines) | No | [VM Insights](/azure/azure-monitor/insights/vminsights-overview) | Agent required to monitor guest operating system and workflows.| - | [Microsoft Connected Vehicle Platform](https://azure.microsoft.com/blog/microsoft-connected-vehicle-platform-trends-and-investment-areas/) | Microsoft.ConnectedVehicle/platformAccounts | [**Yes**](./essentials/metrics-supported.md) | 
[**Yes**](./essentials/resource-logs-categories.md) | | | - | [Azure Container Instances](../container-instances/index.yml) | Microsoft.ContainerInstance/containerGroups | [**Yes**](./essentials/metrics-supported.md#microsoftcontainerinstancecontainergroups) | No | [Container Insights](/azure/azure-monitor/insights/container-insights-overview) | | - | [Azure Container Registry](../container-registry/index.yml) | Microsoft.ContainerRegistry/registries | [**Yes**](./essentials/metrics-supported.md#microsoftcontainerregistryregistries) | [**Yes**](./essentials/resource-logs-categories.md#microsoftcontainerregistryregistries) | | | - | [Azure Kubernetes Service](../aks/index.yml) | Microsoft.ContainerService/managedClusters | [**Yes**](./essentials/metrics-supported.md#microsoftcontainerservicemanagedclusters) | [**Yes**](./essentials/resource-logs-categories.md#microsoftcontainerservicemanagedclusters) | [Container Insights](/azure/azure-monitor/insights/container-insights-overview) | | - | [Azure Custom Providers](../azure-resource-manager/custom-providers/index.yml) | Microsoft.CustomProviders/resourceProviders | [**Yes**](./essentials/metrics-supported.md#microsoftcustomprovidersresourceproviders) | [**Yes**](./essentials/resource-logs-categories.md#microsoftcustomprovidersresourceproviders) | | | - | [Microsoft Dynamics 365 Customer Insights](/dynamics365/customer-insights/) | Microsoft.D365CustomerInsights/instances | No | [**Yes**](./essentials/resource-logs-categories.md#microsoftd365customerinsightsinstances) | | | - | [Azure Stack Edge](../databox-online/azure-stack-edge-overview.md) | Microsoft.DataBoxEdge/DataBoxEdgeDevices | [**Yes**](./essentials/metrics-supported.md#microsoftdataboxedgedataboxedgedevices) | No | | | - | [Azure Databricks](/azure/azure-databricks/) | Microsoft.Databricks/workspaces | No | [**Yes**](./essentials/resource-logs-categories.md#microsoftdatabricksworkspaces) | | | - | Project CI | Microsoft.DataCollaboration/workspaces | [**Yes**](./essentials/metrics-supported.md) | [**Yes**](./essentials/resource-logs-categories.md) | | | - | [Azure Data Factory](../data-factory/index.yml) | Microsoft.DataFactory/dataFactories | [**Yes**](./essentials/metrics-supported.md#microsoftdatafactorydatafactories) | No | | | - | [Azure Data Factory](../data-factory/index.yml) | Microsoft.DataFactory/factories | [**Yes**](./essentials/metrics-supported.md#microsoftdatafactoryfactories) | [**Yes**](./essentials/resource-logs-categories.md#microsoftdatafactoryfactories) | | | - | [Azure Data Lake Analytics](../data-lake-analytics/index.yml) | Microsoft.DataLakeAnalytics/accounts | [**Yes**](./essentials/metrics-supported.md#microsoftdatalakeanalyticsaccounts) | [**Yes**](./essentials/resource-logs-categories.md#microsoftdatalakeanalyticsaccounts) | | | - | [Azure Data Lake Storage Gen2](../storage/blobs/data-lake-storage-introduction.md) | Microsoft.DataLakeStore/accounts | [**Yes**](./essentials/metrics-supported.md#microsoftdatalakestoreaccounts) | [**Yes**](./essentials/resource-logs-categories.md#microsoftdatalakestoreaccounts) | | | - | [Azure Data Share](../data-share/index.yml) | Microsoft.DataShare/accounts | [**Yes**](./essentials/metrics-supported.md#microsoftdatashareaccounts) | [**Yes**](./essentials/resource-logs-categories.md#microsoftdatashareaccounts) | | | - | [Azure Database for MariaDB](../mariadb/index.yml) | Microsoft.DBforMariaDB/servers | [**Yes**](./essentials/metrics-supported.md#microsoftdbformariadbservers) | 
[**Yes**](./essentials/resource-logs-categories.md#microsoftdbformariadbservers) | | | - | [Azure Database for MySQL](../mysql/index.yml) | Microsoft.DBforMySQL/flexibleServers | [**Yes**](./essentials/metrics-supported.md#microsoftdbformysqlflexibleservers) | [**Yes**](./essentials/resource-logs-categories.md#microsoftdbformysqlflexibleservers) | | | - | [Azure Database for MySQL](../mysql/index.yml) | Microsoft.DBforMySQL/servers | [**Yes**](./essentials/metrics-supported.md#microsoftdbformysqlservers) | [**Yes**](./essentials/resource-logs-categories.md#microsoftdbformysqlservers) | | | - | [Azure Database for PostgreSQL](../postgresql/index.yml) | Microsoft.DBforPostgreSQL/flexibleServers | [**Yes**](./essentials/metrics-supported.md#microsoftdbforpostgresqlflexibleservers) | [**Yes**](./essentials/resource-logs-categories.md#microsoftdbforpostgresqlflexibleservers) | | | - | [Azure Database for PostgreSQL](../postgresql/index.yml) | Microsoft.DBforPostgreSQL/serverGroupsv2 | [**Yes**](./essentials/metrics-supported.md#microsoftdbforpostgresqlserversv2) | [**Yes**](./essentials/resource-logs-categories.md#microsoftdbforpostgresqlserversv2) | | | - | [Azure Database for PostgreSQL](../postgresql/index.yml) | Microsoft.DBforPostgreSQL/servers | [**Yes**](./essentials/metrics-supported.md#microsoftdbforpostgresqlservers) | [**Yes**](./essentials/resource-logs-categories.md#microsoftdbforpostgresqlservers) | | | - | [Azure Database for PostgreSQL](../postgresql/index.yml) | Microsoft.DBforPostgreSQL/serversv2 | [**Yes**](./essentials/metrics-supported.md#microsoftdbforpostgresqlserversv2) | [**Yes**](./essentials/resource-logs-categories.md#microsoftdbforpostgresqlserversv2) | | | - | [Microsoft Azure Virtual Desktop](../virtual-desktop/index.yml) | Microsoft.DesktopVirtualization/applicationgroups | No | [**Yes**](./essentials/resource-logs-categories.md#microsoftdesktopvirtualizationapplicationgroups) | [Azure Virtual Desktop Insights](../virtual-desktop/azure-monitor.md) | | - | [Microsoft Azure Virtual Desktop](../virtual-desktop/index.yml) | Microsoft.DesktopVirtualization/hostpools | No | [**Yes**](./essentials/resource-logs-categories.md#microsoftdesktopvirtualizationhostpools) | [Azure Virtual Desktop Insights](../virtual-desktop/azure-monitor.md) | | - | [Microsoft Azure Virtual Desktop](../virtual-desktop/index.yml) | Microsoft.DesktopVirtualization/workspaces | No | [**Yes**](./essentials/resource-logs-categories.md#microsoftdesktopvirtualizationworkspaces) | | | - | [Azure IoT Hub](../iot-hub/index.yml) | Microsoft.Devices/ElasticPools | [**Yes**](./essentials/metrics-supported.md#microsoftdeviceselasticpools) | No | | | - | [Azure IoT Hub](../iot-hub/index.yml) | Microsoft.Devices/ElasticPools/IotHubTenants | [**Yes**](./essentials/metrics-supported.md#microsoftdeviceselasticpoolsiothubtenants) | [**Yes**](./essentials/resource-logs-categories.md#microsoftdeviceselasticpoolsiothubtenants) | | | - | [Azure IoT Hub](../iot-hub/index.yml) | Microsoft.Devices/IotHubs | [**Yes**](./essentials/metrics-supported.md#microsoftdevicesiothubs) | [**Yes**](./essentials/resource-logs-categories.md#microsoftdevicesiothubs) | | | - | [Azure IoT Hub Device Provisioning Service](../iot-dps/index.yml) | Microsoft.Devices/ProvisioningServices | [**Yes**](./essentials/metrics-supported.md#microsoftdevicesprovisioningservices) | [**Yes**](./essentials/resource-logs-categories.md#microsoftdevicesprovisioningservices) | | | - | [Azure Digital Twins](../digital-twins/overview.md) | 
Microsoft.DigitalTwins/digitalTwinsInstances | [**Yes**](./essentials/metrics-supported.md#microsoftdigitaltwinsdigitaltwinsinstances) | [**Yes**](./essentials/resource-logs-categories.md#microsoftdigitaltwinsdigitaltwinsinstances) | | | - | [Azure Cosmos DB](../cosmos-db/index.yml) | Microsoft.DocumentDB/databaseAccounts | [**Yes**](./essentials/metrics-supported.md#microsoftdocumentdbdatabaseaccounts) | [**Yes**](./essentials/resource-logs-categories.md#microsoftdocumentdbdatabaseaccounts) | [Azure Cosmos DB Insights](../cosmos-db/cosmosdb-insights-overview.md) | | - | [Azure Event Grid](../event-grid/index.yml) | Microsoft.EventGrid/domains | [**Yes**](./essentials/metrics-supported.md#microsofteventgriddomains) | [**Yes**](./essentials/resource-logs-categories.md#microsofteventgriddomains) | | | - | [Azure Event Grid](../event-grid/index.yml) | Microsoft.EventGrid/eventSubscriptions | [**Yes**](./essentials/metrics-supported.md#microsofteventgrideventsubscriptions) | No | | | - | [Azure Event Grid](../event-grid/index.yml) | Microsoft.EventGrid/extensionTopics | [**Yes**](./essentials/metrics-supported.md#microsofteventgridextensiontopics) | No | | | - | [Azure Event Grid](../event-grid/index.yml) | Microsoft.EventGrid/partnerNamespaces | [**Yes**](./essentials/metrics-supported.md#microsofteventgridpartnernamespaces) | [**Yes**](./essentials/resource-logs-categories.md#microsofteventgridpartnernamespaces) | | | - | [Azure Event Grid](../event-grid/index.yml) | Microsoft.EventGrid/partnerTopics | [**Yes**](./essentials/metrics-supported.md#microsofteventgridpartnertopics) | [**Yes**](./essentials/resource-logs-categories.md#microsofteventgridpartnertopics) | | | - | [Azure Event Grid](../event-grid/index.yml) | Microsoft.EventGrid/systemTopics | [**Yes**](./essentials/metrics-supported.md#microsofteventgridsystemtopics) | [**Yes**](./essentials/resource-logs-categories.md#microsofteventgridsystemtopics) | | | - | [Azure Event Grid](../event-grid/index.yml) | Microsoft.EventGrid/topics | [**Yes**](./essentials/metrics-supported.md#microsofteventgridtopics) | [**Yes**](./essentials/resource-logs-categories.md#microsofteventgridtopics) | | | - | [Azure Event Hubs](../event-hubs/index.yml) | Microsoft.EventHub/clusters | [**Yes**](./essentials/metrics-supported.md#microsofteventhubclusters) | No | | | - | [Azure Event Hubs](../event-hubs/index.yml) | Microsoft.EventHub/namespaces | [**Yes**](./essentials/metrics-supported.md#microsofteventhubnamespaces) | [**Yes**](./essentials/resource-logs-categories.md#microsofteventhubnamespaces) | | | - | [Microsoft Experimentation Platform](https://www.microsoft.com/research/group/experimentation-platform-exp/) | microsoft.experimentation/experimentWorkspaces | [**Yes**](./essentials/metrics-supported.md) | [**Yes**](./essentials/resource-logs-categories.md) | | | - | [Azure HDInsight](../hdinsight/index.yml) | Microsoft.HDInsight/clusters | [**Yes**](./essentials/metrics-supported.md#microsofthdinsightclusters) | No | [Azure HDInsight (preview)](../hdinsight/log-analytics-migration.md#insights) | | - | [Azure API for FHIR](../healthcare-apis/index.yml) | Microsoft.HealthcareApis/services | [**Yes**](./essentials/metrics-supported.md#microsofthealthcareapisservices) | [**Yes**](./essentials/resource-logs-categories.md#microsofthealthcareapisservices) | | | - | [Azure API for FHIR](../healthcare-apis/index.yml) | Microsoft.HealthcareApis/workspaces/iotconnectors | [**Yes**](./essentials/metrics-supported.md#microsofthealthcareapisworkspacesiotconnectors) | No | | | - | [Azure 
StorSimple](../storsimple/index.yml) | microsoft.hybridnetwork/networkfunctions | [**Yes**](./essentials/metrics-supported.md) | No | | | - | [Azure StorSimple](../storsimple/index.yml) | microsoft.hybridnetwork/virtualnetworkfunctions | [**Yes**](./essentials/metrics-supported.md) | No | | | - | [Azure Monitor](./index.yml) | microsoft.insights/autoscalesettings | [**Yes**](./essentials/metrics-supported.md#microsoftinsightsautoscalesettings) | [**Yes**](./essentials/resource-logs-categories.md#microsoftinsightsautoscalesettings) | | | - | [Azure Monitor](./index.yml) | microsoft.insights/components | [**Yes**](./essentials/metrics-supported.md#microsoftinsightscomponents) | [**Yes**](./essentials/resource-logs-categories.md#microsoftinsightscomponents) | [Azure Monitor Application Insights](./app/app-insights-overview.md) | | - | [Azure IoT Central](../iot-central/index.yml) | Microsoft.IoTCentral/IoTApps | [**Yes**](./essentials/metrics-supported.md#microsoftiotcentraliotapps) | No | | | - | [Azure Key Vault](../key-vault/index.yml) | Microsoft.KeyVault/managedHSMs | [**Yes**](./essentials/metrics-supported.md#microsoftkeyvaultmanagedhsms) | [**Yes**](./essentials/resource-logs-categories.md#microsoftkeyvaultmanagedhsms) | [Azure Key Vault Insights (preview)](../key-vault/key-vault-insights-overview.md) | | - | [Azure Key Vault](../key-vault/index.yml) | Microsoft.KeyVault/vaults | [**Yes**](./essentials/metrics-supported.md#microsoftkeyvaultvaults) | [**Yes**](./essentials/resource-logs-categories.md#microsoftkeyvaultvaults) | [Azure Key Vault Insights (preview)](../key-vault/key-vault-insights-overview.md) | | - | [Azure Kubernetes Service](../aks/index.yml) | Microsoft.Kubernetes/connectedClusters | [**Yes**](./essentials/metrics-supported.md) | No | | | - | [Azure Data Explorer](/azure/data-explorer/) | Microsoft.Kusto/clusters | [**Yes**](./essentials/metrics-supported.md#microsoftkustoclusters) | [**Yes**](./essentials/resource-logs-categories.md#microsoftkustoclusters) | | | - | [Azure Logic Apps](../logic-apps/index.yml) | Microsoft.Logic/integrationAccounts | No | [**Yes**](./essentials/resource-logs-categories.md#microsoftlogicintegrationaccounts) | | | - | [Azure Logic Apps](../logic-apps/index.yml) | Microsoft.Logic/integrationServiceEnvironments | [**Yes**](./essentials/metrics-supported.md#microsoftlogicintegrationserviceenvironments) | No | | | - | [Azure Logic Apps](../logic-apps/index.yml) | Microsoft.Logic/workflows | [**Yes**](./essentials/metrics-supported.md#microsoftlogicworkflows) | [**Yes**](./essentials/resource-logs-categories.md#microsoftlogicworkflows) | | | - | [Azure Machine Learning](../machine-learning/index.yml) | Microsoft.MachineLearningServices/workspaces | [**Yes**](./essentials/metrics-supported.md#microsoftmachinelearningservicesworkspaces) | [**Yes**](./essentials/resource-logs-categories.md#microsoftmachinelearningservicesworkspaces) | | | - | [Azure Maps](../azure-maps/index.yml) | Microsoft.Maps/accounts | [**Yes**](./essentials/metrics-supported.md#microsoftmapsaccounts) | No | | | - | [Azure Media Services](/azure/media-services/) | Microsoft.Media/mediaservices | [**Yes**](./essentials/metrics-supported.md#microsoftmediamediaservices) | [**Yes**](./essentials/resource-logs-categories.md#microsoftmediamediaservices) | | | - | [Azure Media Services](/azure/media-services/) | Microsoft.Media/mediaservices/liveEvents | [**Yes**](./essentials/metrics-supported.md#microsoftmediamediaservicesliveevents) | No | | | - | [Azure Media Services](/azure/media-services/) | Microsoft.Media/mediaservices/streamingEndpoints | [**Yes**](./essentials/metrics-supported.md#microsoftmediamediaservicesstreamingendpoints) | No | | | - | [Azure Media Services](/azure/media-services/) | Microsoft.Media/videoAnalyzers | [**Yes**](./essentials/metrics-supported.md) | [**Yes**](./essentials/resource-logs-categories.md) | | | - | [Azure Spatial 
Anchors](../spatial-anchors/index.yml) | Microsoft.MixedReality/remoteRenderingAccounts | [**Yes**](./essentials/metrics-supported.md#microsoftmixedrealityremoterenderingaccounts) | No | | | - | [Azure Spatial Anchors](../spatial-anchors/index.yml) | Microsoft.MixedReality/spatialAnchorsAccounts | [**Yes**](./essentials/metrics-supported.md#microsoftmixedrealityspatialanchorsaccounts) | No | | | - | [Azure NetApp Files](../azure-netapp-files/index.yml) | Microsoft.NetApp/netAppAccounts/capacityPools | [**Yes**](./essentials/metrics-supported.md#microsoftnetappnetappaccountscapacitypools) | No | | | - | [Azure NetApp Files](../azure-netapp-files/index.yml) | Microsoft.NetApp/netAppAccounts/capacityPools/volumes | [**Yes**](./essentials/metrics-supported.md#microsoftnetappnetappaccountscapacitypoolsvolumes) | No | | | - | [Azure Application Gateway](../application-gateway/index.yml) | Microsoft.Network/applicationGateways | [**Yes**](./essentials/metrics-supported.md#microsoftnetworkapplicationgateways) | [**Yes**](./essentials/resource-logs-categories.md#microsoftnetworkapplicationgateways) | | | - | [Azure Firewall](../firewall/index.yml) | Microsoft.Network/azureFirewalls | [**Yes**](./essentials/metrics-supported.md#microsoftnetworkazurefirewalls) | [**Yes**](./essentials/resource-logs-categories.md#microsoftnetworkazurefirewalls) | | | - | [Azure Bastion](../bastion/index.yml) | Microsoft.Network/bastionHosts | [**Yes**](./essentials/metrics-supported.md) | [**Yes**](./essentials/resource-logs-categories.md) | | | - | [Azure VPN Gateway](../vpn-gateway/index.yml) | Microsoft.Network/connections | [**Yes**](./essentials/metrics-supported.md#microsoftnetworkconnections) | No | | | - | [Azure DNS](../dns/index.yml) | Microsoft.Network/dnszones | [**Yes**](./essentials/metrics-supported.md#microsoftnetworkdnszones) | No | | | - | [Azure ExpressRoute](../expressroute/index.yml) | Microsoft.Network/expressRouteCircuits | [**Yes**](./essentials/metrics-supported.md#microsoftnetworkexpressroutecircuits) | [**Yes**](./essentials/resource-logs-categories.md#microsoftnetworkexpressroutecircuits) | | | - | [Azure ExpressRoute](../expressroute/index.yml) | Microsoft.Network/expressRouteGateways | [**Yes**](./essentials/metrics-supported.md#microsoftnetworkexpressroutegateways) | No | | | - | [Azure ExpressRoute](../expressroute/index.yml) | Microsoft.Network/expressRoutePorts | [**Yes**](./essentials/metrics-supported.md#microsoftnetworkexpressrouteports) | No | | | - | [Azure Front Door](../frontdoor/index.yml) | Microsoft.Network/frontdoors | [**Yes**](./essentials/metrics-supported.md#microsoftnetworkfrontdoors) | [**Yes**](./essentials/resource-logs-categories.md#microsoftnetworkfrontdoors) | | | - | [Azure Load Balancer](../load-balancer/index.yml) | Microsoft.Network/loadBalancers | [**Yes**](./essentials/metrics-supported.md#microsoftnetworkloadbalancers) | [**Yes**](./essentials/resource-logs-categories.md#microsoftnetworkloadbalancers) | | | - | [Azure Load Balancer](../load-balancer/index.yml) | Microsoft.Network/natGateways | [**Yes**](./essentials/metrics-supported.md#microsoftnetworknatgateways) | No | | | - | [Azure Virtual Network](../virtual-network/index.yml) | Microsoft.Network/networkInterfaces | [**Yes**](./essentials/metrics-supported.md#microsoftnetworknetworkinterfaces) | No | [Azure Network Insights](../network-watcher/network-insights-overview.md) | | - | [Azure Virtual Network](../virtual-network/index.yml) | Microsoft.Network/networkSecurityGroups | No | 
[**Yes**](./essentials/resource-logs-categories.md#microsoftnetworknetworksecuritygroups) | [Azure Network Insights](../network-watcher/network-insights-overview.md) | | - | [Azure Network Watcher](../network-watcher/network-watcher-monitoring-overview.md) | Microsoft.Network/networkWatchers/connectionMonitors | [**Yes**](./essentials/metrics-supported.md#microsoftnetworknetworkwatchersconnectionmonitors) | No | | | - | [Azure Virtual WAN](../virtual-wan/virtual-wan-about.md) | Microsoft.Network/p2sVpnGateways | [**Yes**](./essentials/metrics-supported.md#microsoftnetworkp2svpngateways) | [**Yes**](./essentials/resource-logs-categories.md#microsoftnetworkp2svpngateways) | | | - | [Azure DNS Private Zones](../dns/private-dns-privatednszone.md) | Microsoft.Network/privateDnsZones | [**Yes**](./essentials/metrics-supported.md#microsoftnetworkprivatednszones) | No | | | - | [Azure Private Link](../private-link/private-link-overview.md) | Microsoft.Network/privateEndpoints | [**Yes**](./essentials/metrics-supported.md#microsoftnetworkprivateendpoints) | No | | | - | [Azure Private Link](../private-link/private-link-overview.md) | Microsoft.Network/privateLinkServices | [**Yes**](./essentials/metrics-supported.md#microsoftnetworkprivatelinkservices) | No | | | - | [Azure Virtual Network](../virtual-network/index.yml) | Microsoft.Network/publicIPAddresses | [**Yes**](./essentials/metrics-supported.md#microsoftnetworkpublicipaddresses) | [**Yes**](./essentials/resource-logs-categories.md#microsoftnetworkpublicipaddresses) | [Azure Network Insights](../network-watcher/network-insights-overview.md) | | - | [Azure Traffic Manager](../traffic-manager/traffic-manager-overview.md) | Microsoft.Network/trafficmanagerprofiles | [**Yes**](./essentials/metrics-supported.md#microsoftnetworktrafficmanagerprofiles) | [**Yes**](./essentials/resource-logs-categories.md#microsoftnetworktrafficmanagerprofiles) | | | - | [Azure Virtual WAN](../virtual-wan/virtual-wan-about.md) | Microsoft.Network/virtualHubs | [**Yes**](./essentials/metrics-supported.md) | No | | | - | [Azure VPN Gateway](../vpn-gateway/index.yml) | Microsoft.Network/virtualNetworkGateways | [**Yes**](./essentials/metrics-supported.md#microsoftnetworkvirtualnetworkgateways) | [**Yes**](./essentials/resource-logs-categories.md#microsoftnetworkvirtualnetworkgateways) | | | - | [Azure Virtual Network](../virtual-network/index.yml) | Microsoft.Network/virtualNetworks | [**Yes**](./essentials/metrics-supported.md#microsoftnetworkvirtualnetworks) | [**Yes**](./essentials/resource-logs-categories.md#microsoftnetworkvirtualnetworks) | [Azure Network Insights](../network-watcher/network-insights-overview.md) | | - | [Azure Virtual Network](../virtual-network/index.yml) | Microsoft.Network/virtualRouters | [**Yes**](./essentials/metrics-supported.md#microsoftnetworkvirtualrouters) | No | | | - | [Azure Virtual WAN](../virtual-wan/virtual-wan-about.md) | Microsoft.Network/vpnGateways | [**Yes**](./essentials/metrics-supported.md#microsoftnetworkvpngateways) | [**Yes**](./essentials/resource-logs-categories.md#microsoftnetworkvpngateways) | | | - | [Azure Notification Hubs](../notification-hubs/index.yml) | Microsoft.NotificationHubs/namespaces/notificationHubs | [**Yes**](./essentials/metrics-supported.md#microsoftnotificationhubsnamespacesnotificationhubs) | No | | | - | [Azure Monitor](./index.yml) | Microsoft.OperationalInsights/workspaces | [**Yes**](./essentials/metrics-supported.md#microsoftoperationalinsightsworkspaces) | 
[**Yes**](./essentials/resource-logs-categories.md#microsoftoperationalinsightsworkspaces) | | | - | [Azure Peering Service](../peering-service/index.yml) | Microsoft.Peering/peerings | [**Yes**](./essentials/metrics-supported.md#microsoftpeeringpeerings) | No | | | - | [Azure Peering Service](../peering-service/index.yml) | Microsoft.Peering/peeringServices | [**Yes**](./essentials/metrics-supported.md#microsoftpeeringpeeringservices) | No | | | - | [Microsoft Power BI](/power-bi/power-bi-overview) | Microsoft.PowerBI/tenants | No | [**Yes**](./essentials/resource-logs-categories.md#microsoftpowerbitenants) | | | - | [Microsoft Power BI](/power-bi/power-bi-overview) | Microsoft.PowerBI/tenants/workspaces | No | [**Yes**](./essentials/resource-logs-categories.md#microsoftpowerbitenantsworkspaces) | | | - | [Power BI Embedded](/azure/power-bi-embedded/) | Microsoft.PowerBIDedicated/capacities | [**Yes**](./essentials/metrics-supported.md#microsoftpowerbidedicatedcapacities) | [**Yes**](./essentials/resource-logs-categories.md#microsoftpowerbidedicatedcapacities) | | | - | [Microsoft Purview](../purview/index.yml) | Microsoft.Purview/accounts | [**Yes**](./essentials/metrics-supported.md#microsoftpurviewaccounts) | [**Yes**](./essentials/resource-logs-categories.md#microsoftpurviewaccounts) | | | - | [Azure Site Recovery](../site-recovery/index.yml) | Microsoft.RecoveryServices/vaults | [**Yes**](./essentials/metrics-supported.md) | [**Yes**](./essentials/resource-logs-categories.md) | | | - | [Azure Relay](../azure-relay/relay-what-is-it.md) | Microsoft.Relay/namespaces | [**Yes**](./essentials/metrics-supported.md#microsoftrelaynamespaces) | [**Yes**](./essentials/resource-logs-categories.md#microsoftrelaynamespaces) | | | - | [Azure Resource Manager](../azure-resource-manager/index.yml) | Microsoft.Resources/subscriptions | [**Yes**](./essentials/metrics-supported.md) | No | | | - | [Azure Cognitive Search](../search/index.yml) | Microsoft.Search/searchServices | [**Yes**](./essentials/metrics-supported.md#microsoftsearchsearchservices) | [**Yes**](./essentials/resource-logs-categories.md#microsoftsearchsearchservices) | | | - | [Azure Service Bus](/azure/service-bus/) | Microsoft.ServiceBus/namespaces | [**Yes**](./essentials/metrics-supported.md#microsoftservicebusnamespaces) | [**Yes**](./essentials/resource-logs-categories.md#microsoftservicebusnamespaces) | [Azure Service Bus](/azure/service-bus/) | | - | [Azure Service Fabric](../service-fabric/index.yml) | Microsoft.ServiceFabric | No | No | [Service Fabric](../service-fabric/index.yml) | Agent required to monitor guest operating system and workflows.| - | [Azure SignalR Service](../azure-signalr/index.yml) | Microsoft.SignalRService/SignalR | [**Yes**](./essentials/metrics-supported.md#microsoftsignalrservicesignalr) | [**Yes**](./essentials/resource-logs-categories.md#microsoftsignalrservicesignalr) | | | - | [Azure SignalR Service](../azure-signalr/index.yml) | Microsoft.SignalRService/WebPubSub | [**Yes**](./essentials/metrics-supported.md#microsoftsignalrservicewebpubsub) | [**Yes**](./essentials/resource-logs-categories.md#microsoftsignalrservicewebpubsub) | | | - | [Azure SQL Managed Instance](/azure/azure-sql/database/monitoring-tuning-index) | Microsoft.Sql/managedInstances | [**Yes**](./essentials/metrics-supported.md#microsoftsqlmanagedinstances) | [**Yes**](./essentials/resource-logs-categories.md#microsoftsqlmanagedinstances) | [Azure SQL Insights (preview)](/azure/azure-sql/database/sql-insights-overview) | | - | 
[Azure SQL Database](/azure/azure-sql/database/index) | Microsoft.Sql/servers/databases | [**Yes**](./essentials/metrics-supported.md#microsoftsqlserversdatabases) | No | [Azure SQL Insights (preview)](/azure/azure-sql/database/sql-insights-overview) | | - | [Azure SQL Database](/azure/azure-sql/database/index) | Microsoft.Sql/servers/elasticpools | [**Yes**](./essentials/metrics-supported.md#microsoftsqlserverselasticpools) | No | [Azure SQL Insights (preview)](/azure/azure-sql/database/sql-insights-overview) | | - | [Azure Storage](../storage/index.yml) | Microsoft.Storage/storageAccounts | [**Yes**](./essentials/metrics-supported.md#microsoftstoragestorageaccounts) | No | [Azure Storage Insights](/azure/azure-monitor/insights/storage-insights-overview) | | - | [Azure Blob Storage](../storage/blobs/index.yml) | Microsoft.Storage/storageAccounts/blobServices | [**Yes**](./essentials/metrics-supported.md#microsoftstoragestorageaccountsblobservices) | [**Yes**](./essentials/resource-logs-categories.md#microsoftstoragestorageaccountsblobservices) | [Azure Storage Insights](/azure/azure-monitor/insights/storage-insights-overview) | | - | [Azure Files](../storage/files/index.yml) | Microsoft.Storage/storageAccounts/fileServices | [**Yes**](./essentials/metrics-supported.md#microsoftstoragestorageaccountsfileservices) | [**Yes**](./essentials/resource-logs-categories.md#microsoftstoragestorageaccountsfileservices) | [Azure Storage Insights](/azure/azure-monitor/insights/storage-insights-overview) | | - | [Azure Queue Storage](../storage/queues/index.yml) | Microsoft.Storage/storageAccounts/queueServices | [**Yes**](./essentials/metrics-supported.md#microsoftstoragestorageaccountsqueueservices) | [**Yes**](./essentials/resource-logs-categories.md#microsoftstoragestorageaccountsqueueservices) | [Azure Storage Insights](/azure/azure-monitor/insights/storage-insights-overview) | | - | [Azure Table Storage](../storage/tables/index.yml) | Microsoft.Storage/storageAccounts/tableServices | [**Yes**](./essentials/metrics-supported.md#microsoftstoragestorageaccountstableservices) | [**Yes**](./essentials/resource-logs-categories.md#microsoftstoragestorageaccountstableservices) | [Azure Storage Insights](/azure/azure-monitor/insights/storage-insights-overview) | | - | [Azure HPC Cache](../hpc-cache/index.yml) | Microsoft.StorageCache/caches | [**Yes**](./essentials/metrics-supported.md#microsoftstoragecachecaches) | No | | | - | [Azure Storage](../storage/index.yml) | Microsoft.StorageSync/storageSyncServices | [**Yes**](./essentials/metrics-supported.md#microsoftstoragesyncstoragesyncservices) | No | [Azure Storage Insights](/azure/azure-monitor/insights/storage-insights-overview) | | - | [Azure Stream Analytics](../stream-analytics/index.yml) | Microsoft.StreamAnalytics/streamingjobs | [**Yes**](./essentials/metrics-supported.md#microsoftstreamanalyticsstreamingjobs) | [**Yes**](./essentials/resource-logs-categories.md#microsoftstreamanalyticsstreamingjobs) | | | - | [Azure Synapse Analytics](/azure/sql-data-warehouse/) | Microsoft.Synapse/workspaces | [**Yes**](./essentials/metrics-supported.md#microsoftsynapseworkspaces) | [**Yes**](./essentials/resource-logs-categories.md#microsoftsynapseworkspaces) | | | - | [Azure Synapse Analytics](/azure/sql-data-warehouse/) | Microsoft.Synapse/workspaces/bigDataPools | [**Yes**](./essentials/metrics-supported.md#microsoftsynapseworkspacesbigdatapools) | [**Yes**](./essentials/resource-logs-categories.md#microsoftsynapseworkspacesbigdatapools) | | | - | [Azure 
Synapse Analytics](/azure/sql-data-warehouse/) | Microsoft.Synapse/workspaces/sqlPools | [**Yes**](./essentials/metrics-supported.md#microsoftsynapseworkspacessqlpools) | [**Yes**](./essentials/resource-logs-categories.md#microsoftsynapseworkspacessqlpools) | | | - | [Azure Time Series Insights](../time-series-insights/index.yml) | Microsoft.TimeSeriesInsights/environments | [**Yes**](./essentials/metrics-supported.md#microsofttimeseriesinsightsenvironments) | [**Yes**](./essentials/resource-logs-categories.md#microsofttimeseriesinsightsenvironments) | | | - | [Azure Time Series Insights](../time-series-insights/index.yml) | Microsoft.TimeSeriesInsights/environments/eventsources | [**Yes**](./essentials/metrics-supported.md#microsofttimeseriesinsightsenvironmentseventsources) | [**Yes**](./essentials/resource-logs-categories.md#microsofttimeseriesinsightsenvironmentseventsources) | | | - | [Azure VMware Solution](../azure-vmware/index.yml) | Microsoft.VMwareCloudSimple/virtualMachines | [**Yes**](./essentials/metrics-supported.md) | No | | | - | [Azure App Service](../app-service/index.yml)<br />[Azure Functions](../azure-functions/index.yml) | Microsoft.Web/connections | [**Yes**](./essentials/metrics-supported.md#microsoftwebconnections) | No | | | - | [Azure App Service](../app-service/index.yml)<br />[Azure Functions](../azure-functions/index.yml) | Microsoft.Web/hostingEnvironments | [**Yes**](./essentials/metrics-supported.md#microsoftwebhostingenvironments) | [**Yes**](./essentials/resource-logs-categories.md#microsoftwebhostingenvironments) | [Azure Monitor Application Insights](./app/app-insights-overview.md) | | - | [Azure App Service](../app-service/index.yml)<br />[Azure Functions](../azure-functions/index.yml) | Microsoft.Web/hostingEnvironments/multiRolePools | [**Yes**](./essentials/metrics-supported.md#microsoftwebhostingenvironmentsmultirolepools) | No | [Azure Monitor Application Insights](./app/app-insights-overview.md) | | - | [Azure App Service](../app-service/index.yml)<br />[Azure Functions](../azure-functions/index.yml) | Microsoft.Web/hostingEnvironments/workerPools | [**Yes**](./essentials/metrics-supported.md#microsoftwebhostingenvironmentsworkerpools) | No | [Azure Monitor Application Insights](./app/app-insights-overview.md) | | - | [Azure App Service](../app-service/index.yml)<br />[Azure Functions](../azure-functions/index.yml) | Microsoft.Web/serverFarms | [**Yes**](./essentials/metrics-supported.md#microsoftwebserverfarms) | No | [Azure Monitor Application Insights](./app/app-insights-overview.md) | | - | [Azure App Service](../app-service/index.yml)<br />[Azure Functions](../azure-functions/index.yml) | Microsoft.Web/sites | [**Yes**](./essentials/metrics-supported.md#microsoftwebsites) | [**Yes**](./essentials/resource-logs-categories.md#microsoftwebsites) | [Azure Monitor Application Insights](./app/app-insights-overview.md) | | - | [Azure App Service](../app-service/index.yml)<br />[Azure Functions](../azure-functions/index.yml) | Microsoft.Web/sites/slots | [**Yes**](./essentials/metrics-supported.md#microsoftwebsitesslots) | [**Yes**](./essentials/resource-logs-categories.md#microsoftwebsitesslots) | [Azure Monitor Application Insights](./app/app-insights-overview.md) | | - | [Azure App Service](../app-service/index.yml)<br />[Azure Functions](../azure-functions/index.yml) | Microsoft.Web/staticSites | [**Yes**](./essentials/metrics-supported.md#microsoftwebstaticsites) | No | [Azure Monitor Application Insights](./app/app-insights-overview.md) | | - 
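A **Yes** in the **Has logs** column above means the resource type supports diagnostic settings, which route platform logs and metrics to destinations such as a Log Analytics workspace. The following Python sketch is illustrative only and not part of this reference: it creates a diagnostic setting by calling the Azure Resource Manager REST API directly, and the resource IDs, setting name, and API version are placeholder assumptions to verify before use.

```python
# Illustrative sketch only: create a diagnostic setting that routes a
# resource's platform logs and metrics to a Log Analytics workspace by
# calling the Azure Resource Manager REST API. All IDs are placeholders.
import requests
from azure.identity import DefaultAzureCredential

RESOURCE_ID = ("/subscriptions/<sub-id>/resourceGroups/<rg>"
               "/providers/Microsoft.KeyVault/vaults/<vault-name>")  # placeholder
WORKSPACE_ID = ("/subscriptions/<sub-id>/resourceGroups/<rg>"
                "/providers/Microsoft.OperationalInsights/workspaces/<workspace>")  # placeholder

token = DefaultAzureCredential().get_token("https://management.azure.com/.default")
url = (f"https://management.azure.com{RESOURCE_ID}"
       "/providers/Microsoft.Insights/diagnosticSettings/send-to-workspace"
       "?api-version=2021-05-01-preview")  # assumed API version; check for a newer one

body = {
    "properties": {
        "workspaceId": WORKSPACE_ID,
        # Available categories vary by resource type; see the
        # resource-logs-categories links in the table above.
        "logs": [{"categoryGroup": "allLogs", "enabled": True}],
        "metrics": [{"category": "AllMetrics", "enabled": True}],
    }
}

resp = requests.put(url, json=body,
                    headers={"Authorization": f"Bearer {token.token}"})
resp.raise_for_status()
print("Created diagnostic setting:", resp.json()["name"])
```

The same operation is available in the portal, the Azure CLI, and PowerShell; the raw REST shape is shown here because it makes the **Has metrics**/**Has logs** split in the table explicit.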
## Next steps - Read more about the [Azure Monitor data platform that stores the logs and metrics collected by insights and solutions](data-platform.md). |
azure-monitor | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/overview.md | A few examples of what you can do with Azure Monitor include: ## Overview The following diagram gives a high-level view of Azure Monitor. -- At the center of the diagram are the data stores for metrics and logs and changes, which are the fundamental types of data used by Azure Monitor. -- On the left are the [sources of monitoring data](data-sources.md) that populate these [data stores](data-platform.md). -- On the right are the different functions that Azure Monitor performs with this collected data. This includes such actions as analysis, alerting, and integration such as streaming to external systems.+- The stores for the **[data platform](data-platform.md)** are at the center of the diagram. Azure Monitor stores these fundamental types of data: metrics, logs, traces, and changes. +- The **[sources of monitoring data](data-sources.md)** that populate these data stores are on the left. +- The different functions that Azure Monitor performs with this collected data are on the right. This includes actions such as analysis and alerting. +- At the bottom is a layer of integration components. These components are used throughout the other parts of the diagram, but showing every connection would make the diagram too complex. :::image type="content" source="media/overview/azure-monitor-overview-2022_10_15-add-prometheus-opt.svg" alt-text="Diagram that shows an overview of Azure Monitor." border="false" lightbox="media/overview/azure-monitor-overview-2022_10_15-add-prometheus-opt.svg"::: -The following video uses an earlier version of the preceding diagram, but its explanations are still relevant. --> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RE4qXeL] -- ## Observability and the Azure Monitor data platform Metrics, logs, and distributed traces are commonly referred to as the three pillars of observability. Observability can be achieved by aggregating and correlating these different types of data across the entire system being monitored. -Natively, Azure Monitor stores data as metrics, logs, or changes. Traces are stored in the Logs store. Each storage platform is optimized for particular monitoring scenarios, and each supports different features in Azure Monitor. Features such as data analysis, visualizations, or alerting require you to understand the differences so that you can implement your required scenario in the most efficient and cost effective manner. +Natively, Azure Monitor stores data as metrics, logs, or changes. Traces are stored in the Logs store. Each storage platform is optimized for particular monitoring scenarios, and each supports different features in Azure Monitor. It's important to understand these differences across features such as data analysis, visualizations, and alerting, so that you can implement your required scenario in the most efficient and cost-effective manner. | Pillar | Description | |:|:|-| Metrics | Metrics are numerical values that describe some aspect of a system at a particular point in time. They are collected at regular intervals and are identified with a timestamp, a name, a value, and one or more defining labels. Metrics can be aggregated using various algorithms, compared to other metrics, and analyzed for trends over time.<br><br>Metrics in Azure Monitor are stored in a time-series database, which is optimized for analyzing time-stamped data. For more information, see [Azure Monitor Metrics](essentials/data-platform-metrics.md). 
| -| Logs | [Logs](logs/data-platform-logs.md) are events that occurred within the system. They can contain different kinds of data and may be structured or free form text with a timestamp. They may be created sporadically as events in the environment generate log entries, and a system under heavy load will typically generate more log volume.<br><br>Azure Monitor stores logs the Azure Monitor Logs store. The store allows you to segregate logs into separate "Log Analytics workspaces". There you can analyze them using the Log Analytics tool. Log Analytics workspaces are based on [Azure Data Explorer](/azure/data-explorer/), which provides a powerful analysis engine and the [Kusto rich query language](/azure/kusto/query/). For more information, see [Azure Monitor Logs](logs/data-platform-logs.md). | +| Metrics | Metrics are numerical values that describe some aspect of a system at a particular point in time. Metrics are collected at regular intervals and are identified with a timestamp, a name, a value, and one or more defining labels. Metrics can be aggregated using various algorithms, compared to other metrics, and analyzed for trends over time.<br><br>Metrics in Azure Monitor are stored in a time-series database, which is optimized for analyzing time-stamped data. For more information, see [Azure Monitor Metrics](essentials/data-platform-metrics.md). | +| Logs | [Logs](logs/data-platform-logs.md) are events that occurred within the system. They can contain different kinds of data and may be structured or free-form text with a timestamp. They may be created sporadically as events in the environment generate log entries, and a system under heavy load will typically generate more log volume.<br><br>Azure Monitor stores logs in the Azure Monitor Logs store. The store allows you to segregate logs into separate "Log Analytics workspaces". There you can analyze them using the Log Analytics tool. Log Analytics workspaces are based on [Azure Data Explorer](/azure/data-explorer/), which provides a powerful analysis engine and the [Kusto rich query language](/azure/kusto/query/). For more information, see [Azure Monitor Logs](logs/data-platform-logs.md). | | Distributed traces | Traces are series of related events that follow a user request through a distributed system. They can be used to determine behavior of application code and the performance of different transactions. While logs will often be created by individual components of a distributed system, a trace measures the operation and performance of your application across the entire set of components.<br><br>Distributed tracing in Azure Monitor is enabled with the [Application Insights SDK](app/distributed-tracing.md). Trace data is stored with other application log data collected by Application Insights and stored in Azure Monitor Logs. For more information, see [What is Distributed Tracing?](app/distributed-tracing.md). | | Changes | Changes are tracked using [Change Analysis](change/change-analysis.md). Changes are a series of events that occur in your Azure application and resources. Change Analysis is a subscription-level observability tool that's built on the power of Azure Resource Graph. <br><br> Once Change Analysis is enabled, the `Microsoft.ChangeAnalysis` resource provider is registered with an Azure Resource Manager subscription. Change Analysis' integrations with Monitoring and Diagnostics tools provide data to help users understand what changes might have caused the issues. 
Read more about Change Analysis in [Use Change Analysis in Azure Monitor](./change/change-analysis.md). | Azure Monitor aggregates and correlates data across multiple Azure subscriptions and tenants, in addition to hosting data for other services. Because this data is stored together, it can be correlated and analyzed using a common set of tools. - > [!NOTE] > It's important to distinguish between Azure Monitor Logs and sources of log data in Azure. For example, subscription level events in Azure are written to an [activity log](essentials/platform-logs-overview.md) that you can view from the Azure Monitor menu. Most resources will write operational information to a [resource log](essentials/platform-logs-overview.md) that you can forward to different locations. Azure Monitor Logs is a log data platform that collects activity logs and resource logs along with other monitoring data to provide deep analysis across your entire set of resources. Change Analysis alerts you to live site issues, outages, component failures, or other change-related issues. Change Analysis builds on [Azure Resource Graph](../governance/resource-graph/overview.md) to provide a historical record of how your Azure resources have changed over time. It detects managed identities, platform operating system upgrades, and hostname changes. Change Analysis securely queries IP configuration rules, TLS settings, and extension versions to provide more detailed change data. -## What data does Azure Monitor collect? +## What data can Azure Monitor collect? Azure Monitor can collect data from [sources](monitor-reference.md) that range from your application to any operating system and services it relies on, down to the platform itself. Azure Monitor collects data from each of the following tiers: - **Application** - Data about the performance and functionality of the code you've written, regardless of its platform.+- **Container** - Data about containers and applications running inside containers, such as Azure Kubernetes Service. - **Guest operating system** - Data about the operating system on which your application is running. The system could be running in Azure, another cloud, or on-premises.-- **Azure resource** - Data about the operation of an Azure resource. For a complete list of the resources that have metrics or logs, see [What can you monitor with Azure Monitor?](monitor-reference.md#azure-supported-services).+- **Azure resource** - Data about the operation of an Azure resource. For a list of the resources that have metrics and/or logs, see [What can you monitor with Azure Monitor?](monitor-reference.md). - **Azure subscription** - Data about the operation and management of an Azure subscription, and data about the health and operation of Azure itself. - **Azure tenant** - Data about the operation of tenant-level Azure services, such as Azure Active Directory. - **Azure resource changes** - Data about changes within your Azure resources and how to address and triage incidents and issues. Azure Monitor can collect log data from any REST client by using the [Data Collector API](logs/data-collector-api.md). Monitoring data is only useful if it can increase your visibility into the operation of your computing environment. Some Azure resource providers have a "curated visualization," which gives you a customized monitoring experience for that particular service or set of services. They generally require minimal configuration. Larger, scalable, curated visualizations are known as "insights" and marked with that name in the documentation and the Azure portal. 
-For more information, see [List of insights and curated visualizations using Azure Monitor](monitor-reference.md#insights-and-curated-visualizations). Some of the larger insights are described here. +For more information, see [List of insights and curated visualizations using Azure Monitor](insights/insights-overview.md). Some of the larger insights are described here. ### Application Insights You'll often need to integrate Azure Monitor with other systems. ### API -Multiple APIs are available to read and write metrics and logs to and from Azure Monitor in addition to accessing generated alerts. You can also configure and retrieve alerts. With APIs, you have essentially unlimited possibilities to build custom solutions that integrate with Azure Monitor. ---+Multiple APIs are available to read and write metrics and logs to and from Azure Monitor, in addition to configuring and retrieving alerts. With APIs, you have unlimited possibilities to build custom solutions that integrate with Azure Monitor (an illustrative sketch of writing data through one of these APIs follows the next steps list). ## Next steps Learn more about:- * [Metrics and logs](./data-platform.md#metrics) for the data collected by Azure Monitor. * [Data sources](data-sources.md) for how the different components of your application send telemetry. * [Log queries](logs/log-query-overview.md) for analyzing collected data.-* [Best practices](/azure/architecture/best-practices/monitoring) for monitoring cloud applications and services. +* [Best practices](/azure/architecture/best-practices/monitoring) for monitoring cloud applications and services. |
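As a sketch of the write side of those APIs (not code from the article), the following Python example posts custom JSON records to the HTTP Data Collector API referenced earlier. The workspace ID, shared key, and log type are placeholders; the signature follows the documented SharedKey scheme.

```python
# Sketch: send custom JSON records to Azure Monitor Logs using the
# HTTP Data Collector API. Workspace ID, shared key, and log type are
# placeholders to replace with your own values.
import base64
import datetime
import hashlib
import hmac
import json

import requests

WORKSPACE_ID = "<workspace-id>"              # placeholder
SHARED_KEY = "<primary-or-secondary-key>"    # placeholder (base64 string)
LOG_TYPE = "MyCustomLog"                     # records land in MyCustomLog_CL

body = json.dumps([{"computer": "web-01", "status": "healthy"}])
rfc1123_date = datetime.datetime.utcnow().strftime("%a, %d %b %Y %H:%M:%S GMT")

# String to sign: METHOD, content length, content type, x-ms-date, resource.
string_to_sign = (f"POST\n{len(body)}\napplication/json\n"
                  f"x-ms-date:{rfc1123_date}\n/api/logs")
signature = base64.b64encode(
    hmac.new(base64.b64decode(SHARED_KEY),
             string_to_sign.encode("utf-8"),
             hashlib.sha256).digest()
).decode()

resp = requests.post(
    f"https://{WORKSPACE_ID}.ods.opinsights.azure.com/api/logs"
    "?api-version=2016-04-01",
    data=body,
    headers={
        "Content-Type": "application/json",
        "Authorization": f"SharedKey {WORKSPACE_ID}:{signature}",
        "Log-Type": LOG_TYPE,
        "x-ms-date": rfc1123_date,
    },
)
resp.raise_for_status()  # HTTP 200 indicates the records were accepted
```

Records sent this way land in a custom table (here `MyCustomLog_CL`) in the Log Analytics workspace, where they can be queried alongside other log data.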
azure-monitor | Terminology | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/terminology.md | Operations Management Suite (OMS) was a bundling of the following Azure management services: - Log Analytics - Site Recovery -[New pricing has been introduced for these services](https://azure.microsoft.com/blog/introducing-a-new-way-to-purchase-azure-monitoring-services/), and the OMS bundling is no longer available for new customers. None of the services that were part of OMS have changed, except for the consolidation into Azure Monitor described above. ---+[New pricing has been introduced for these services](https://azure.microsoft.com/blog/introducing-a-new-way-to-purchase-azure-monitoring-services/), and the OMS bundling is no longer available for new customers. None of the services that were part of OMS have changed, except for the consolidation into Azure Monitor described above. The OMS portal was retired and is no longer available. ## Next steps - Read an [overview of Azure Monitor](overview.md) that describes its different components and features.-- Learn about the [transition of the OMS portal](./logs/oms-portal-transition.md). |
azure-monitor | Tutorial Monitor Vm Alert | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/tutorial-monitor-vm-alert.md | Title: Tutorial - Alert when Azure virtual is down + Title: Alert when Azure virtual machine is down description: Create an alert rule in Azure Monitor to proactively notify you if a virtual machine is unavailable. Last updated 11/04/2021 -# Tutorial: Create alert when Azure virtual machine is unavailable +# Create alert when Azure virtual machine is unavailable One of the most common alerting conditions for a virtual machine is whether the virtual machine is running. Once you enable monitoring with VM insights in Azure Monitor for the virtual machine, a heartbeat is sent to Azure Monitor every minute. You can create a log query alert rule that sends an alert if a heartbeat isn't detected (a sketch of such a query appears at the end of this section). This method not only alerts if the virtual machine isn't running, but also if it's not responsive. -In this tutorial, you learn how to: +In this article, you learn how to: > [!div class="checklist"] > * View log data collected by VM insights in Azure Monitor for a virtual machine. > * Create an alert rule from log data that will proactively notify you if the virtual machine is unavailable. ## Prerequisites-To complete this tutorial you need the following: +To complete the steps in this article, you need the following: - An Azure virtual machine to monitor. - Monitoring with VM insights enabled for the virtual machine. See [Tutorial: Enable monitoring for Azure virtual machine](tutorial-monitor-vm-enable.md). |
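As a concrete illustration of the alerting condition described above (not code from the tutorial), the following sketch runs the kind of heartbeat query such a log alert rule evaluates, here through the azure-monitor-query client library; the workspace ID and the five-minute threshold are placeholder assumptions.

```python
# Sketch: find VMs whose last heartbeat is older than five minutes, the
# same kind of query a log alert rule could evaluate on a schedule.
# Workspace ID and threshold are placeholder assumptions.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

QUERY = """
Heartbeat
| summarize LastHeartbeat = max(TimeGenerated) by Computer
| where LastHeartbeat < ago(5m)
"""

client = LogsQueryClient(DefaultAzureCredential())
response = client.query_workspace(
    workspace_id="<workspace-id>",   # placeholder
    query=QUERY,
    timespan=timedelta(hours=1),
)

for table in response.tables:
    for row in table.rows:
        print(f"No recent heartbeat from: {row['Computer']}")
```

An alert rule would run a query like this on a schedule and fire whenever the result set is non-empty.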
azure-monitor | Tutorial Monitor Vm Enable | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/tutorial-monitor-vm-enable.md | Title: Tutorial - Enable monitoring for Azure virtual machine
+ Title: Enable monitoring for Azure virtual machine
 description: Enable monitoring with VM insights in Azure Monitor to monitor an Azure virtual machine.
 Last updated 11/04/2021
-# Tutorial: Enable monitoring for Azure virtual machine
-To monitor the health and performance of an Azure virtual machine, you need to install an agent to collect data from its guest operating system. VM insights is a feature of Azure Monitor for monitoring the guest operating system and workloads running on Azure virtual machines. When you enable monitoring for an Azure virtual machine, it installs the necessary agents and starts collecting performance, process, and dependency information from the guest operating system.
+# Enable monitoring for Azure virtual machine
+To monitor the health and performance of an Azure virtual machine, you need to install an agent to collect data from its guest operating system. VM insights monitors the guest operating system and workloads running on Azure virtual machines. When you enable monitoring for an Azure virtual machine, it installs the necessary agents and starts collecting performance, process, and dependency information from the guest operating system.
 > [!NOTE]
 > If you're completely new to Azure Monitor, you should start with [Tutorial: Monitor Azure resources with Azure Monitor](../essentials/monitor-azure-resource.md). Azure virtual machines generate monitoring data similar to other Azure resources, such as platform metrics and the activity log. This tutorial describes how to enable additional monitoring unique to virtual machines.
 In this tutorial, you learn how to:
 > [!div class="checklist"]
 > * Create a Log Analytics workspace to collect performance and log data from the virtual machine.
 > * Enable VM insights for the virtual machine which installs the required agents and begins data collection.
-> * Inspect graphs analyzing performance data collected form the virtual machine.
+> * Inspect graphs analyzing performance data collected from the virtual machine.
 > * Inspect map showing processes running on the virtual machine and dependencies with other systems.
 In this tutorial, you learn how to:
 > VM insights installs the Log Analytics agent which collects performance data from the guest operating system of virtual machines. It doesn't collect logs from the guest operating system and doesn't send performance data to Azure Monitor Metrics. For this functionality, see [Tutorial: Collect guest logs and metrics from Azure virtual machine](tutorial-monitor-vm-guest.md).
 ## Prerequisites-To complete this tutorial you need the following:
+To complete this tutorial, you need the following:
 - An Azure virtual machine to monitor.
 To complete this tutorial you need the following:
 ## Enable monitoring-Select **Insights** from your virtual machine's menu in the Azure portal. If VM insights hasn't yet been enabled for it, you should see a screen similar to the following allowing you to enable monitoring. Click **Enable**.
+Select **Insights** from your virtual machine's menu in the Azure portal. If VM insights hasn't been enabled, you should see a screen similar to the following that allows you to enable monitoring. Click **Enable**. 
> [!NOTE] > If you selected the option to **Enable detailed monitoring** when you created your virtual machine, VM insights may already be enabled. Select your workspace and click **Enable** again. This is the workspace where data collected by VM insights will be sent. |
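If you'd rather create the workspace ahead of time instead of letting the portal create a default one, a minimal Azure CLI sketch follows; the names and location are placeholders.

```azurecli-interactive
# Create a Log Analytics workspace to receive the data collected by VM insights.
az monitor log-analytics workspace create \
    --resource-group "myResourceGroup" \
    --workspace-name "myVmInsightsWorkspace" \
    --location "eastus"
```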
azure-monitor | Tutorial Monitor Vm Guest | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/tutorial-monitor-vm-guest.md | Title: Tutorial - Collect guest logs and metrics from Azure virtual machine + Title: Collect guest logs and metrics from Azure virtual machine description: Create data collection rule to collect guest logs and metrics from Azure virtual machine. Last updated 11/08/2021 -# Tutorial: Collect guest logs and metrics from Azure virtual machine +# Collect guest logs and metrics from Azure virtual machine When you [enable monitoring with VM insights](tutorial-monitor-vm-enable.md), it collects performance data using the Log Analytics agent. To collect logs from the guest operating system and to send performance data to Azure Monitor Metrics, install the [Azure Monitor agent](../agents/azure-monitor-agent-overview.md) and create a [data collection rule](../essentials/data-collection-rule-overview.md) (DCR) that defines the data to collect and where to send it. > [!NOTE] |
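For orientation, here's a hedged Azure CLI sketch of the two DCR steps: creating a rule from a JSON definition and associating it with a VM. It assumes the `monitor-control-service` CLI extension, and the file name, rule name, and resource IDs are placeholders (the JSON definition itself isn't shown here).

```azurecli-interactive
# Create a data collection rule from a JSON definition file.
az monitor data-collection rule create \
    --resource-group "myResourceGroup" \
    --name "myGuestLogsRule" \
    --location "eastus" \
    --rule-file "dcr-definition.json"

# Associate the rule with the virtual machine so the Azure Monitor agent applies it.
az monitor data-collection rule association create \
    --name "myVmAssociation" \
    --rule-id "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Insights/dataCollectionRules/myGuestLogsRule" \
    --resource "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM"
```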
azure-netapp-files | Azacsnap Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azacsnap-introduction.md | This is a list of technical articles where AzAcSnap has been used as part of a d * [Manual Recovery Guide for SAP Oracle 19c on Azure VMs from Azure NetApp Files snapshot with AzAcSnap](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/manual-recovery-guide-for-sap-oracle-19c-on-azure-vms-from-azure/ba-p/3242408) * [Manual Recovery Guide for SAP HANA on Azure Large Instance from storage snapshot with AzAcSnap](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/manual-recovery-guide-for-sap-hana-on-azure-large-instance-from/ba-p/3242347) * [Automating SAP system copy operations with Libelle SystemCopy](https://docs.netapp.com/us-en/netapp-solutions-sap/lifecycle/libelle-sc-overview.html)+* [Protecting HANA databases configured with HSR on Azure NetApp Files with AzAcSnap](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/protecting-hana-databases-configured-with-hsr-on-azure-netapp/ba-p/3654620) ## Command synopsis |
azure-netapp-files | Understand Guidelines Active Directory Domain Service Site | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/understand-guidelines-active-directory-domain-service-site.md | Azure NetApp Files supports the use of [Active Directory integrated DNS](/window Ensure that you meet the following requirements for the DNS configuration: * If you're using standalone DNS servers: -* Ensure that DNS servers have network connectivity to the Azure NetApp Files delegated subnet hosting the Azure NetApp Files volumes. + * Ensure that DNS servers have network connectivity to the Azure NetApp Files delegated subnet hosting the Azure NetApp Files volumes. * Ensure that network ports UDP 53 and TCP 53 are not blocked by firewalls or NSGs. * Ensure that [the SRV records registered by the AD DS Net Logon service](https://social.technet.microsoft.com/wiki/contents/articles/7608.srv-records-registered-by-net-logon.aspx) have been created on the DNS servers. * Ensure that the PTR records for the SRV records registered by the AD DS Net Logon service have been created on the DNS servers. |
azure-resource-manager | Networking Move Limitations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/move-limitations/networking-move-limitations.md | Title: Move Azure Networking resources to new subscription or resource group
description: Use Azure Resource Manager to move virtual networks and other networking resources to a new resource group or subscription. Previously updated : 08/16/2022 Last updated : 10/28/2022 # Move networking resources to new resource group or subscription If you want to move networking resources to a new region, see [Tutorial: Move Az ## Dependent resources > [!NOTE]-> Any resource, including a VPN Gateway, that is associated with a public IP Standard SKU address must be disassociated from the public IP address before moving across subscriptions. +> Any resource, including a VPN Gateway, that is associated with a Standard SKU public IP address can't be moved across subscriptions. For virtual machines, you can [disassociate the public IP address](../../../virtual-network/ip-services/remove-public-ip-address-vm.md) before moving across subscriptions. When moving a resource, you must also move its dependent resources (for example, public IP addresses, virtual network gateways, and all associated connection resources). Local network gateways can be in a different resource group. |
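For a VM, the disassociation step can be sketched with the Azure CLI as follows; the NIC, IP configuration, and resource group names are placeholders for your own resources.

```azurecli-interactive
# Dissociate the Standard SKU public IP address from the VM's NIC
# before attempting the cross-subscription move.
az network nic ip-config update \
    --name "ipconfig1" \
    --nic-name "myVMNic" \
    --resource-group "myResourceGroup" \
    --remove publicIpAddress
```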
azure-resource-manager | Move Support Resources | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/move-support-resources.md | Title: Move operation support by resource type description: Lists the Azure resource types that can be moved to a new resource group, subscription, or region. Previously updated : 08/29/2022 Last updated : 10/28/2022 # Move operation support for resources Jump to a resource provider namespace: > | privateendpointredirectmaps | No | No | No | > | privateendpoints | Yes - for [supported private-link resources](./move-limitations/networking-move-limitations.md#private-endpoints)<br>No - for all other private-link resources | Yes - for [supported private-link resources](./move-limitations/networking-move-limitations.md#private-endpoints)<br>No - for all other private-link resources | No | > | privatelinkservices | No | No | No |-> | publicipaddresses | Yes | Yes | Yes<br/><br/> Use [Azure Resource Mover](../../resource-mover/tutorial-move-region-virtual-machines.md) to move public IP address configurations (IP addresses are not retained). | +> | publicipaddresses | Yes | Yes - see [Networking move guidance](./move-limitations/networking-move-limitations.md) | Yes<br/><br/> Use [Azure Resource Mover](../../resource-mover/tutorial-move-region-virtual-machines.md) to move public IP address configurations (IP addresses are not retained). | > | publicipprefixes | Yes | Yes | No | > | routefilters | No | No | No | > | routetables | Yes | Yes | No | Jump to a resource provider namespace: > | trafficmanagerprofiles / heatmaps | No | No | No | > | trafficmanagerusermetricskeys | No | No | No | > | virtualhubs | No | No | No |-> | virtualnetworkgateways | Yes | Yes | No | +> | virtualnetworkgateways | Yes | Yes - see [Networking move guidance](./move-limitations/networking-move-limitations.md) | No | > | virtualnetworks | Yes | Yes | No | > | virtualnetworktaps | No | No | No | > | virtualrouters | Yes | Yes | No | |
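As a hedged illustration of a move operation for one of the supported types above, the following Azure CLI sketch moves a public IP prefix to another resource group; the IDs are placeholders, and `--destination-subscription-id` can be added for cross-subscription moves.

```azurecli-interactive
# Move a movable resource to a different resource group.
az resource move \
    --destination-group "targetResourceGroup" \
    --ids "/subscriptions/<subscription-id>/resourceGroups/sourceRG/providers/Microsoft.Network/publicIPPrefixes/myPrefix"
```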
azure-video-indexer | Indexing Configuration Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/indexing-configuration-guide.md | +
+ Title: Indexing configuration guide
+description: This article explains the configuration options of the indexing process with Azure Video Indexer.
+ Last updated : 11/01/2022+++++# The indexing configuration guide
++It's important to understand the configuration options to index efficiently while ensuring you meet your indexing objectives. When indexing videos, users can use the default settings or adjust many of the settings. Azure Video Indexer allows you to choose from a range of language, indexing, custom model, and streaming settings that affect the insights generated, cost, and performance.
++This article explains each option and its impact so that you can make informed decisions when indexing. The article discusses the [Azure Video Indexer website](https://www.videoindexer.ai/) experience but the same options apply when submitting jobs through the API (see the [API guide](video-indexer-use-apis.md)). When indexing large volumes, follow the [at-scale guide](considerations-when-use-at-scale.md).
++The initial upload screen presents options to define the video name, source language, and privacy settings.
+++All the other setting options appear if you select Advanced options.
+++## Default settings
++By default, Azure Video Indexer is configured to a **Video source language** of English, **Privacy** of private, **Standard** audio and video setting, and **Streaming quality** of single bitrate.
++> [!TIP]
+> This article describes each indexing option in detail.
++Below are a few examples of when using the default setting might not be a good fit:
++- If you need insights such as observed people or matched person, which are only available through Advanced Video.
+- If you're only using Azure Video Indexer for transcription and translation, indexing of both audio and video isn't required; **Basic** for audio should suffice.
+- If you're consuming Azure Video Indexer insights but have no need to generate a new media file, streaming isn't necessary and **No streaming** should be selected to avoid the encoding job and its associated cost.
+- If a video is primarily in a language that isn't English.
++### Video source language
++If you're aware of the language spoken in the video, select the language from the video source language list. If you're unsure of the language of the video, choose **Auto-detect single language**. When uploading and indexing your video, Azure Video Indexer will use language identification (LID) to detect the video's language and generate transcription and insights with the detected language.
++If the video may contain multiple languages and you aren't sure which ones, select **Auto-detect multi-language**. In this case, multi-language (MLID) detection will be applied when uploading and indexing your video.
++While auto-detect is a great option when the language in your videos varies, there are two points to consider when using LID or MLID:
++- LID/MLID don't support all the languages supported by Azure Video Indexer.
+- The transcription is of a higher quality when you pre-select the appropriate language for the video.
++Learn more about [language support and supported languages](language-support.md).
++### Privacy
++This option allows you to determine if the insights should only be accessible to users in your Azure Video Indexer account or to anyone with a link. 
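When submitting the same settings through the API rather than the website, the upload call carries the name, privacy, and source-language choices as query parameters. The following is a hedged sketch using `az rest` as a generic HTTP client; the location, account ID, access token, and video URL are placeholders, and the access token must be obtained separately from the Video Indexer access token API.

```azurecli-interactive
# Upload and index a video with an explicit source language and privacy setting.
az rest --method post \
    --skip-authorization-header \
    --url "https://api.videoindexer.ai/<location>/Accounts/<account-id>/Videos?name=myVideo&privacy=Private&language=en-US&videoUrl=<video-url>&accessToken=<access-token>"
```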
++### Indexing options
++When indexing a video with the default settings, be aware that each of the audio and video indexing options may be priced differently. See [Azure Video Indexer pricing](https://azure.microsoft.com/pricing/details/video-indexer/) for details.
++Below are the indexing type options, with details of the insights each provides. To modify the indexing type, select **Advanced settings**.
++|Audio only|Video only |Audio & Video |
+|---|---|---|
+|Basic |||
+|Standard| Standard |Standard |
+|Advanced |Advanced|Advanced |
++## Advanced settings
++### Audio only
++- **Basic**: Indexes and extracts insights by using audio only (ignoring video) and provides the following insights: transcription, translation, formatting of output captions and subtitles, named entities (brands, locations, people), and topics.
+- **Standard**: Indexes and extracts insights by using audio only (ignoring video) and provides the following insights: transcription, translation, formatting of output captions and subtitles, emotions, keywords, named entities (brands, locations, people), sentiments, speakers, and topics.
+- **Advanced**: Indexes and extracts insights by using audio only (ignoring video) and provides the following insights: transcription, translation, formatting of output captions and subtitles, audio effects (preview), emotions, keywords, named entities (brands, locations, people), sentiments, speakers, and articles.
++### Video only
++- **Standard**: Indexes and extracts insights by using video only (ignoring audio) and provides the following insights: labels (OCR), named entities (OCR - brands, locations, people), OCR, people, scenes (keyframes and shots), and topics (OCR).
+- **Advanced**: Indexes and extracts insights by using video only (ignoring audio) and provides the following insights: labels (OCR), matched person (preview), named entities (OCR - brands, locations, people), OCR, observed people (preview), people, scenes (keyframes and shots), and topics (OCR).
++### Audio and Video
++- **Standard**: Indexes and extracts insights by using audio and video and provides the following insights: transcription, translation, formatting of output captions and subtitles, audio effects (preview), emotions, keywords, named entities (brands, locations, people), OCR, people, sentiments, speakers, and topics.
+- **Advanced**: Indexes and extracts insights by using audio and video and provides the following insights: transcription, translation, formatting of output captions and subtitles, audio effects (preview), emotions, keywords, matched person (preview), named entities (brands, locations, people), OCR, observed people (preview), people, sentiments, speakers, and topics.
++### Streaming quality options
++When indexing a video, you can decide whether the file should be encoded, which enables streaming. The sequence is as follows:
++Upload > Encode (optional) > Index & Analysis > Publish for streaming (optional)
++Encoding and streaming operations are performed and billed by Azure Media Services. There are two additional operations associated with the creation of a streaming video:
++- The creation of a Streaming Endpoint.
+- Egress traffic – the volume depends on the number of video playbacks, video playback length, and the video quality (bitrate).
+ 
+Several aspects influence the total cost of the encoding job. The first is whether the encoding uses single bitrate or adaptive streaming, which creates either a single output or multiple outputs of different quality. 
Each output is billed separately and depends on the source quality of the video you uploaded to Azure Video Indexer.
++For Media Services encoding pricing details, see [pricing](https://azure.microsoft.com/pricing/details/media-services/#pricing).
++When indexing a video, default streaming settings are applied. Below are the streaming type options that can be modified if you select **Advanced** settings and go to **Streaming quality**.
++|Single bitrate|Adaptive bitrate| No streaming |
+|---|---|---|
++- **Single bitrate**: With Single Bitrate, the standard Media Services encoder cost will apply for the output. If the video height is greater than or equal to 720p HD, Azure Video Indexer encodes it with a resolution of 1280 x 720. Otherwise, it's encoded as 640 x 468. The default setting is content-aware encoding.
+- **Adaptive bitrate**: With Adaptive Bitrate, if you upload a video in 720p HD single bitrate to Azure Video Indexer and select Adaptive Bitrate, the encoder will use the [AdaptiveStreaming](/rest/api/media/transforms/create-or-update?tabs=HTTP#encodernamedpreset) preset. An output of 720p HD (no output exceeding 720p HD is created) and several lower quality outputs are created (for playback on smaller screens/low bandwidth environments). Each output will use the Media Encoder Standard base price and apply a multiplier for each output. The multiplier is 2x for HD, 1x for non-HD, and 0.25 for audio and billing is per minute of the input video.
++ **Example**: If you index a video in the US East region that is 40 minutes in length and is 720p HD and have selected the streaming option of Adaptive Bitrate, three outputs will be created: one HD (multiplied by 2), one SD (multiplied by 1), and one audio track (multiplied by 0.25). This will total to (2+1+0.25) * 40 = 130 billable output minutes.
++ Output minutes (standard encoder): 130 x $0.015/minute = $1.95.
+- **No streaming**: Insights are generated but no streaming operation is performed and the video isn't available on the Azure Video Indexer website. When No streaming is selected, you aren't billed for encoding.
++### Customizing content models - People/Animated characters and Brand categories
++Azure Video Indexer allows you to customize some of its models to adapt them to your specific use case. These models include animated characters, brands, language, and person. If you have customized models, this section enables you to configure whether one of the created models should be used for the indexing.
++## Next steps
++Learn more about [language support and supported languages](language-support.md). |
azure-video-indexer | Language Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/language-support.md | This section describes languages supported by Azure Video Indexer API.
- Frame patterns (Only to Hebrew as of now)
- Language customization
-| **Language** | **Code** | **Transcription** | **LID**\* | **MLID**\* | **Translation** | **Customization** (language model) |
-|::|:--:|:--:|:-:|:-:|:-:|::|
+| **Language** | **Code** | **Transcription** | **LID** | **MLID** | **Translation** | **Customization** (language model) |
+|::|:--:|:--:|:--:|:--:|:-:|::|
| Afrikaans | `af-ZA` | | | | | ✔ |
| Arabic (Israel) | `ar-IL` | ✔ | | | | ✔ |
| Arabic (Jordan) | `ar-JO` | ✔ | ✔ | ✔ | ✔ | ✔ |
This section describes languages supported by Azure Video Indexer API.
| Bulgarian | `bg-BG` | | | | ✔ | |
| Catalan | `ca-ES` | | | | ✔ | |
| Chinese (Cantonese Traditional) | `zh-HK` | ✔ | ✔ | ✔ | ✔ | ✔ |
-| Chinese (Simplified) | `zh-Hans` | ✔ | ✔\*| | ✔ | ✔ |
+| Chinese (Simplified) | `zh-Hans` | ✔ | ✔\*<br/>[Change default languages supported by LID and MLID](#change-default-languages-supported-by-lid-and-mlid)| | ✔ | ✔ |
| Chinese (Traditional) | `zh-Hant` | | | | ✔ | |
| Croatian | `hr-HR` | | | | ✔ | |
| Czech | `cs-CZ` | ✔ | ✔ | ✔ | ✔ | ✔ |
This section describes languages supported by Azure Video Indexer API.
| Dutch | `nl-NL` | ✔ | ✔ | ✔ | ✔ | ✔ |
| English Australia | `en-AU` | ✔ | ✔ | ✔ | ✔ | ✔ |
| English United Kingdom | `en-GB` | ✔ | ✔ | ✔ | ✔ | ✔ |
-| English United States | `en-US` | ✔ | ✔\* | ✔\* | ✔ | ✔ |
+| English United States | `en-US` | ✔ | ✔\*<br/>[Change default languages supported by LID and MLID](#change-default-languages-supported-by-lid-and-mlid) | ✔\* <br/>[Change default languages supported by LID and MLID](#change-default-languages-supported-by-lid-and-mlid)| ✔ | ✔ |
| Estonian | `et-EE` | | | | ✔ | |
| Fijian | `en-FJ` | | | | ✔ | |
| Filipino | `fil-PH` | | | | ✔ | |
| Finnish | `fi-FI` | ✔ | ✔ | ✔ | ✔ | ✔ |
-| French | `fr-FR` | ✔ | ✔\* | ✔\* | ✔ | ✔ |
+| French | `fr-FR` | ✔ | ✔\* <br/>[Change default languages supported by LID and MLID](#change-default-languages-supported-by-lid-and-mlid)| ✔\* <br/>[Change default languages supported by LID and MLID](#change-default-languages-supported-by-lid-and-mlid)| ✔ | ✔ |
| French (Canada) | `fr-CA` | ✔ | ✔ | ✔ | ✔ | ✔ |
-| German | `de-DE` | ✔ | ✔ \*| ✔ \*| ✔ | ✔ |
+| German | `de-DE` | ✔ | ✔ \* <br/>[Change default languages supported by LID and MLID](#change-default-languages-supported-by-lid-and-mlid)| ✔ \* <br/>[Change default languages supported by LID and MLID](#change-default-languages-supported-by-lid-and-mlid)| ✔ | ✔ |
| Greek | `el-GR` | | | | ✔ | |
| Haitian | `fr-HT` | | | | ✔ | |
| Hebrew | `he-IL` | ✔ | ✔ | ✔ | ✔ | ✔ |
| Hindi | `hi-IN` | ✔ | ✔ | ✔ | ✔ | ✔ |
| Hungarian | `hu-HU` | | | | ✔ | |
| Indonesian | `id-ID` | | | | ✔ | |
-| Italian | `it-IT` | ✔ | ✔\* | ✔ | ✔ | ✔ |
-| Japanese | `ja-JP` | ✔ | ✔\* | ✔ | ✔ | ✔ |
+| Italian | `it-IT` | ✔ | ✔\* <br/>[Change default languages supported by LID and MLID](#change-default-languages-supported-by-lid-and-mlid) | ✔ | ✔ | ✔ |
+| Japanese | `ja-JP` | ✔ | ✔\* <br/>[Change default languages supported by LID and MLID](#change-default-languages-supported-by-lid-and-mlid) | ✔ | ✔ | ✔ |
| Kiswahili | 
`sw-KE` | | | | ✔ | |
| Korean | `ko-KR` | ✔ | ✔ | ✔ | ✔ | ✔ |
| Latvian | `lv-LV` | | | | ✔ | |
This section describes languages supported by Azure Video Indexer API.
| Norwegian | `nb-NO` | ✔ | ✔ | ✔ | ✔ | ✔ |
| Persian | `fa-IR` | ✔ | | | ✔ | ✔ |
| Polish | `pl-PL` | ✔ | ✔ | ✔ | ✔ | ✔ |
-| Portuguese | `pt-BR` | ✔ | ✔\* | ✔ | ✔ | ✔ |
+| Portuguese | `pt-BR` | ✔ | ✔\* <br/>[Change default languages supported by LID and MLID](#change-default-languages-supported-by-lid-and-mlid) | ✔ | ✔ | ✔ |
| Portuguese (Portugal) | `pt-PT` | ✔ | ✔ | ✔ | ✔ | ✔ |
| Romanian | `ro-RO` | | | | ✔ | |
-| Russian | `ru-RU` | ✔ | ✔\* | ✔ | ✔ | ✔ |
+| Russian | `ru-RU` | ✔ | ✔\* <br/>[Change default languages supported by LID and MLID](#change-default-languages-supported-by-lid-and-mlid) | ✔ | ✔ | ✔ |
| Samoan | `en-WS` | | | | ✔ | |
| Serbian (Cyrillic) | `sr-Cyrl-RS` | | | | ✔ | |
| Serbian (Latin) | `sr-Latn-RS` | | | | ✔ | |
| Slovak | `sk-SK` | | | | ✔ | |
-| Slovenian | `sl-SI` | | | | ✔ | |
-| Spanish | `es-ES` | ✔ | ✔\* | ✔\* | ✔ | ✔ |
+| Slovenian | `sl-SI` | | | | ✔ | |
+| Spanish | `es-ES` | ✔ | ✔\* <br/>[Change default languages supported by LID and MLID](#change-default-languages-supported-by-lid-and-mlid)| ✔\* <br/>[Change default languages supported by LID and MLID](#change-default-languages-supported-by-lid-and-mlid)| ✔ | ✔ |
| Spanish (Mexico) | `es-MX` | ✔ | | | ✔ | ✔ |
| Swedish | `sv-SE` | ✔ | ✔ | ✔ | ✔ | ✔ |
| Tamil | `ta-IN` | | | | ✔ | |
This section describes languages supported by Azure Video Indexer API.
| Urdu | `ur-PK` | | | | ✔ | |
| Vietnamese | `vi-VN` | ✔ | ✔ | ✔ | ✔ | |
-\*By default, languages marked with * (in the table above) are supported by language identification (LID) or/and multi-language identification (MLID) auto-detection. When [uploading a video](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video) with an API, you can specify to use other supported languages, from the table above, by using `customLanguages` parameter. The `customLanguages` parameter allows up to 10 languages to be identified by LID or MLID. +### Change default languages supported by LID and MLID
++Languages marked with * (in the table above) are used by default when LID and/or MLID auto-detect languages. You can specify other supported languages (listed in the table above) as the default languages when [uploading a video](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video) with the API by passing the `customLanguages` parameter. The `customLanguages` parameter allows up to 10 languages to be identified by LID or MLID.
> [!NOTE]-> To change the default languages to auto-detect one or more languages by LID or MLID, set the `customLanguages` parameter.
+> To change the default languages that LID or MLID use when auto-detecting, call [upload a video](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video) and set the `customLanguages` parameter.
## Language support in frontend experiences |
azure-video-indexer | Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/release-notes.md | You can now edit the name of the speakers in the transcription using the Azure V ### Word level time annotation with confidence score -An annotation is any type of additional information that is added to an already existing text, be it a transcription of an audio file or an original text file. - Now supporting word level time annotation with confidence score. +An annotation is any type of additional information that is added to an already existing text, be it a transcription of an audio file or an original text file. + ### Azure Monitor integration enabling indexing logs The new set of logs, described below, enables you to better monitor your indexing pipeline. |
azure-vmware | Deploy Disaster Recovery Using Jetstream | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/deploy-disaster-recovery-using-jetstream.md | -[JetStream DR](https://www.jetstreamsoft.com/product-portfolio/jetstream-dr/) is a cloud-native disaster recovery solution designed to minimize downtime of virtual machines (VMs) if there was a disaster. Instances of JetStream DR are deployed at both the protected and recovery sites.
+[JetStream DR](https://www.jetstreamsoft.com/product-portfolio/jetstream-dr/) is a cloud-native disaster recovery solution designed to minimize downtime of virtual machines (VMs) if there is a disaster. Instances of JetStream DR are deployed at both the protected and recovery sites.
JetStream is built on the foundation of Continuous Data Protection (CDP), using [VMware vSphere API for I/O filtering (VAIO) framework](https://core.vmware.com/resource/vmware-vsphere-apis-io-filtering-vaio), which enables minimal or close to no data loss. JetStream DR provides the level of protection required for business and mission-critical applications. It also enables cost-effective DR by using minimal resources at the DR site and using cost-effective cloud storage, such as [Azure Blob Storage](https://azure.microsoft.com/services/storage/blobs/).
To learn more about JetStream DR, see:
| Items | Description |
| | |-| **JetStream Management Server Virtual Appliance (MSA)** | MSA enables both Day 0 and Day 2 configuration, such as primary sites, protection domains, and recovering VMs. MSA is installed on a vSphere node by the cloud admin. The MSA implements a vCenter Server plugin that allows you to manage JetStream DR natively from vCenter Server. The MSA doesn't handle replication data of protected VMs. |
-| **JetStream DR Virtual Appliance (DRVA)** | Linux-based Virtual Machine appliance receives protected VMs replication data from the source ESXi host. It's responsible for storing the replication data at the DR site, typically in an object store such as Azure Blob Storage. Depending on the number of protected VMs and the amount of storage to replicate, the private cloudadmin can create one or more DRVA instances. |
-| **JetStream ESXi host components (IO Filter packages)** | JetStream software installed on each ESXi host configured for JetStream DR. The host driver intercepts the vSphere VMs IO and sends the replication data to the DRVA. |
-| **JetStream protection domain** | Logical group of VMs that will be protected together using the same policies and run book. The data for all VMs in a protection domain is stored in the same Azure Blob container instance. The same DRVA instance handles replication to remote DR storage for all VMs in a protection domain. |
-| **Azure Blob Storage containers** | The protected VMs replicated data is stored in Azure Blobs. JetStream software creates one Azure Blob container instance for each JetStream protection domain. |
+| **JetStream Management Server Virtual Appliance (MSA)** | MSA enables both Day 0 and Day 2 configuration, such as primary sites, protection domains, and recovering VMs. The MSA is deployed from an OVA on a vSphere node by the cloud admin. The MSA collects and maintains statistics relevant to VM protection and implements a vCenter plugin that allows you to manage JetStream DR natively with the vSphere Client. The MSA doesn't handle replication data of protected VMs. 
| +| **JetStream DR Virtual Appliance (DRVA)** | Linux-based virtual machine appliance that receives protected VMs' replication data from the source ESXi host. It maintains the replication log and manages the transfer of the VMs and their data to an object store such as Azure Blob Storage. Depending on the number of protected VMs and the amount of VM data to replicate, the private cloud admin can create one or more DRVA instances. |
+| **JetStream ESXi host components (IO Filter packages)** | JetStream software installed on each ESXi host configured for JetStream DR. The host driver intercepts the vSphere VMs' IO and sends the replication data to the DRVA. The IO filters also monitor relevant events, such as vMotion, Storage vMotion, and snapshots. |
+| **JetStream Protected Domain** | Logical group of VMs that will be protected together using the same policies and runbook. The data for all VMs in a Protected Domain is stored in the same Azure Blob container instance. A single DRVA instance handles replication to remote DR storage for all VMs in a Protected Domain. |
+| **Azure Blob Storage containers** | The protected VMs' replicated data is stored in Azure Blobs. JetStream software creates one Azure Blob container instance for each JetStream Protected Domain. |
To install JetStream DR in the on-premises data center and in the Azure VMware S
- Configure the cluster with the IO filter package (install JetStream VIB).
- Provision Azure Blob (Azure Storage Account) in the same region as the DR Azure VMware Solution cluster.
- Deploy the disaster recovery virtual appliance (DRVA) and assign a replication log volume (VMDK from existing datastore or shared iSCSI storage).
- - Create protected domains (groups of related VMs) and assign DRVAs and the Azure Blob Storage/ANF.
+ - Create Protected Domains (groups of related VMs) and assign DRVAs and the Azure Blob Storage/ANF.
- Start protection.
- Install JetStream DR in the Azure VMware Solution private cloud: |
backup | Backup Support Matrix Iaas | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-support-matrix-iaas.md | Title: Support matrix for Azure VM backup
description: Provides a summary of support settings and limitations when backing up Azure VMs with the Azure Backup service. Previously updated : 10/31/2022 Last updated : 11/01/2022
Data disk size | Individual disk size can be up to 32 TB and a maximum of 256 TB
Storage type | Standard HDD, Standard SSD, Premium SSD. <br><br> Backup and restore of [ZRS disks](../virtual-machines/disks-redundancy.md#zone-redundant-storage-for-managed-disks) is supported.
Managed disks | Supported.
Encrypted disks | Supported.<br/><br/> Azure VMs enabled with Azure Disk Encryption can be backed up (with or without the Azure AD app).<br/><br/> Encrypted VMs can't be recovered at the file/folder level. You must recover the entire VM.<br/><br/> You can enable encryption on VMs that are already protected by Azure Backup. <br><br> You can back up and restore disks encrypted using platform-managed keys (PMKs) or customer-managed keys (CMKs). You can also assign a disk-encryption set while restoring in the same region (that is, providing a disk-encryption set while performing cross-region restore is currently not supported; however, you can assign the DES to the restored disk after the restore is complete).-Disks with Write Accelerator enabled | Azure VM with WA disk backup is available in all Azure public regions starting from May 18, 2020. If WA disk backup is not required as part of VM backup, you can choose to remove with [**Selective disk** feature](selective-disk-backup-restore.md). <br><br>**Important** <br> Virtual machines with WA disks need internet connectivity for a successful backup (even though those disks are excluded from the backup).
+Disks with Write Accelerator enabled | Azure VM with WA disk backup is available in all Azure public regions starting from May 18, 2022. If WA disk backup is not required as part of VM backup, you can choose to remove it with the [**Selective disk** feature](selective-disk-backup-restore.md). <br><br>**Important** <br> Virtual machines with WA disks need internet connectivity for a successful backup (even though those disks are excluded from the backup).
Disks enabled for access with private endpoint | Unsupported.
Back up & Restore deduplicated VMs/disks | Azure Backup doesn't support deduplication. For more information, see this [article](./backup-support-matrix.md#disk-deduplication-support) <br/> <br/> - Azure Backup doesn't deduplicate across VMs in the Recovery Services vault <br/> <br/> - If there are VMs in deduplication state during restore, the files can't be restored because the vault doesn't understand the format. However, you can successfully perform the full VM restore.
Add disk to protected VM | Supported. |
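For context, enabling backup for a supported VM against a Recovery Services vault can be sketched with the Azure CLI as follows; the vault, VM, and resource group names are placeholders, and `DefaultPolicy` assumes the vault's built-in default policy.

```azurecli-interactive
# Enable Azure Backup for a VM using the vault's default policy.
az backup protection enable-for-vm \
    --resource-group "myResourceGroup" \
    --vault-name "myRecoveryServicesVault" \
    --vm "myVM" \
    --policy-name "DefaultPolicy"
```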
backup | Geo Code List | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/scripts/geo-code-list.md | Title: Geo-code mapping
description: Learn about geo-codes mapped with the respective regions. Previously updated : 03/07/2022 Last updated : 11/01/2022+++
# Geo-code mapping
This sample XML gives you insight into the geo-codes mapped to the respective regions. Use these geo-codes to create and add custom DNS zones for the private endpoint for a Recovery Services vault.
+## Fetch mapping details
++To fetch the geo-code mapping list, run the following command:
++```azurecli-interactive
+ az account list-locations
+```
+
## Mapping details
```xml |
batch | Best Practices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/best-practices.md | Title: Best practices description: Learn best practices and useful tips for developing your Azure Batch solutions. Previously updated : 12/13/2021 Last updated : 10/31/2022 This article discusses best practices and useful tips for using the Azure Batch ### Pool configuration and naming -- **Pool allocation mode:** When creating a Batch account, you can choose between two pool allocation modes: **Batch service** or **user subscription**. For most cases, you should use the default Batch service mode, in which pools are allocated behind the scenes in Batch-managed subscriptions. In the alternative user subscription mode, Batch VMs and other resources are created directly in your subscription when a pool is created. User subscription accounts are primarily used to enable a small but important subset of scenarios. For more information, see [Additional configuration for user subscription mode](batch-account-create-portal.md#additional-configuration-for-user-subscription-mode).+- **Pool allocation mode:** When creating a Batch account, you can choose between two pool allocation modes: **Batch service** or **user subscription**. For most cases, you should use the default Batch service mode, in which pools are allocated behind the scenes in Batch-managed subscriptions. In the alternative user subscription mode, Batch VMs and other resources are created directly in your subscription when a pool is created. User subscription accounts are primarily used to enable a small but important subset of scenarios. For more information, see [configuration for user subscription mode](batch-account-create-portal.md#additional-configuration-for-user-subscription-mode). -- **'virtualMachineConfiguration' or 'cloudServiceConfiguration':** While you can currently create pools using either configuration, new pools should be configured using 'virtualMachineConfiguration' and not 'cloudServiceConfiguration'. All current and new Batch features will be supported by Virtual Machine Configuration pools. Cloud Services Configuration pools do not support all features and no new capabilities are planned. You won't be able to create new 'cloudServiceConfiguration' pools or add new nodes to existing pools [after February 29, 2024](https://azure.microsoft.com/updates/azure-batch-cloudserviceconfiguration-pools-will-be-retired-on-29-february-2024/). For more information, see [Migrate Batch pool configuration from Cloud Services to Virtual Machine](batch-pool-cloud-service-to-virtual-machine-configuration.md).+- **'virtualMachineConfiguration' or 'cloudServiceConfiguration':** While you can currently create pools using either configuration, new pools should be configured using 'virtualMachineConfiguration' and not 'cloudServiceConfiguration'. All current and new Batch features will be supported by Virtual Machine Configuration pools. Cloud Services Configuration pools don't support all features and no new capabilities are planned. You won't be able to create new 'cloudServiceConfiguration' pools or add new nodes to existing pools [after February 29, 2024](https://azure.microsoft.com/updates/azure-batch-cloudserviceconfiguration-pools-will-be-retired-on-29-february-2024/). For more information, see [Migrate Batch pool configuration from Cloud Services to Virtual Machine](batch-pool-cloud-service-to-virtual-machine-configuration.md). 
-- **Job and task run time considerations:** If you have jobs comprised primarily of short-running tasks, and the expected total task counts are small, so that the overall expected run time of the job is not long, do not allocate a new pool for each job. The allocation time of the nodes will diminish the run time of the job. +- **Job and task run time considerations:** If you have jobs composed primarily of short-running tasks, and the expected total task counts are small, so that the overall expected run time of the job isn't long, don't allocate a new pool for each job. The allocation time of the nodes will diminish the run time of the job. -- **Multiple compute nodes:** Individual nodes are not guaranteed to always be available. While uncommon, hardware failures, operating system updates, and a host of other issues can cause individual nodes to be offline. If your Batch workload requires deterministic, guaranteed progress, you should allocate pools with multiple nodes.+- **Multiple compute nodes:** Individual nodes aren't guaranteed to always be available. While uncommon, hardware failures, operating system updates, and a host of other issues can cause individual nodes to be offline. If your Batch workload requires deterministic, guaranteed progress, you should allocate pools with multiple nodes. -- **Images with impending end-of-life (EOL) dates:** We strongly recommended avoiding images with impending Batch support end of life (EOL) dates. These dates can be discovered via the [`ListSupportedImages` API](/rest/api/batchservice/account/listsupportedimages), [PowerShell](/powershell/module/az.batch/get-azbatchsupportedimage), or [Azure CLI](/cli/azure/batch/pool/supported-images). It is your responsibility to periodically refresh your view of the EOL dates pertinent to your pools and migrate your workloads before the EOL date occurs. If you're using a custom image with a specified node agent, ensure that you follow Batch support end-of-life dates for the image for which your custom image is derived or aligned with.+- **Images with impending end-of-life (EOL) dates:** We strongly recommend avoiding images with impending Batch support end of life (EOL) dates. These dates can be discovered via the [`ListSupportedImages` API](/rest/api/batchservice/account/listsupportedimages), [PowerShell](/powershell/module/az.batch/get-azbatchsupportedimage), or [Azure CLI](/cli/azure/batch/pool/supported-images). It's your responsibility to periodically refresh your view of the EOL dates pertinent to your pools and migrate your workloads before the EOL date occurs. If you're using a custom image with a specified node agent, ensure that you follow Batch support end-of-life dates for the image from which your custom image is derived or aligned with. -- **Unique resource names:** Batch resources (jobs, pools, etc.) often come and go over time. For example, you may create a pool on Monday, delete it on Tuesday, and then create another similar pool on Thursday. Each new resource you create should be given a unique name that you haven't used before. You can do this by using a GUID (either as the entire resource name, or as a part of it) or by embedding the date and time that the resource was created in the resource name. Batch supports [DisplayName](/dotnet/api/microsoft.azure.batch.jobspecification.displayname), which can give a resource a more readable name even if the actual resource ID is something that isn't human-friendly. 
Using unique names makes it easier for you to differentiate which particular resource did something in logs and metrics. It also removes ambiguity if you ever have to file a support case for a resource.+- **Unique resource names:** Batch resources (jobs, pools, etc.) often come and go over time. For example, you may create a pool on Monday, delete it on Tuesday, and then create another similar pool on Thursday. Each new resource you create should be given a unique name that you haven't used before. You can create uniqueness by using a GUID (either as the entire resource name, or as a part of it) or by embedding the date and time that the resource was created in the resource name. Batch supports [DisplayName](/dotnet/api/microsoft.azure.batch.jobspecification.displayname), which can give a resource a more readable name even if the actual resource ID is something that isn't human-friendly. Using unique names makes it easier for you to differentiate which particular resource did something in logs and metrics. It also removes ambiguity if you ever have to file a support case for a resource. -- **Continuity during pool maintenance and failure:** It's best to have your jobs use pools dynamically. If your jobs use the same pool for everything, there's a chance that jobs won't run if something goes wrong with the pool. This is especially important for time-sensitive workloads. To fix this, select or create a pool dynamically when you schedule each job, or have a way to override the pool name so that you can bypass an unhealthy pool.+- **Continuity during pool maintenance and failure:** It's best to have your jobs use pools dynamically. If your jobs use the same pool for everything, there's a chance that jobs won't run if something goes wrong with the pool. This principle is especially important for time-sensitive workloads. For example, select or create a pool dynamically when you schedule each job, or have a way to override the pool name so that you can bypass an unhealthy pool. -- **Business continuity during pool maintenance and failure:** There are many reasons why a pool may not grow to the size you desire, such as internal errors or capacity constraints. Make sure you can retarget jobs at a different pool (possibly with a different VM size; Batch supports this via [UpdateJob](/dotnet/api/microsoft.azure.batch.protocol.joboperationsextensions.update)) if necessary. Avoid relying on a static pool ID with the expectation that it will never be deleted and never change.+- **Business continuity during pool maintenance and failure:** There are many reasons why a pool may not grow to the size you desire, such as internal errors or capacity constraints. Make sure you can retarget jobs at a different pool (possibly with a different VM size using [UpdateJob](/dotnet/api/microsoft.azure.batch.protocol.joboperationsextensions.update)) if necessary. Avoid relying on a static pool ID with the expectation that it will never be deleted and never change. ### Pool security #### Isolation boundary -For the purposes of isolation, if your scenario requires isolating jobs or tasks from each other, do so by having them in separate pools. A pool is the security isolation boundary in Batch, and by default, two pools are not visible or able to communicate with each other. Avoid using separate Batch accounts as a means of security isolation unless the larger environment from which the Batch account operates in requires isolation. 
+For the purposes of isolation, if your scenario requires isolating jobs or tasks from each other, do so by having them in separate pools. A pool is the security isolation boundary in Batch, and by default, two pools aren't visible or able to communicate with each other. Avoid using separate Batch accounts as a means of security isolation unless the larger environment in which the Batch account operates requires isolation. #### Batch Node Agent updates -Batch node agents are not automatically upgraded for pools that have non-zero compute nodes. In order to ensure your Batch pools receive the latest security fixes and updates to the Batch node agent, you need to either resize the pool to zero compute nodes or recreate the pool. It is recommended to monitor the [Batch Node Agent release notes](https://github.com/Azure/Batch/blob/master/changelogs/nodeagent/CHANGELOG.md) to understand changes to new Batch node agent versions and when they were released so that you can plan to upgrade to the latest agent version. +Batch node agents aren't automatically upgraded for pools that have non-zero compute nodes. To ensure your Batch pools receive the latest security fixes and updates to the Batch node agent, you need to either resize the pool to zero compute nodes or recreate the pool. It's recommended to monitor the [Batch Node Agent release notes](https://github.com/Azure/Batch/blob/master/changelogs/nodeagent/CHANGELOG.md) to understand changes in new Batch node agent versions. Checking regularly for new releases enables you to plan upgrades to the latest agent version. -Before you recreate or resize your pool, you should download any node agent logs for debugging purposes if you are experiencing issues with your Batch pool or compute nodes, as discussed in the [Nodes](#nodes) section. +Before you recreate or resize your pool, you should download any node agent logs for debugging purposes if you're experiencing issues with your Batch pool or compute nodes. This is further discussed in the [Nodes](#nodes) section. > [!NOTE] > For general guidance about security in Azure Batch, see [Batch security and compliance best practices](security-best-practices.md). Pool lifetime can vary depending upon the method of allocation and options appli - **Pool recreation:** Avoid deleting and recreating pools on a daily basis. Instead, create a new pool and then update your existing jobs to point to the new pool. Once all of the tasks have been moved to the new pool, delete the old pool. -- **Pool efficiency and billing:** Batch itself incurs no extra charges, but you do incur charges for Azure resources that are utilized, such as compute, storage, networking and any other resources that may be required for your Batch workload. You're billed for every compute node in the pool, regardless of the state it is in. For more information, see [Cost analysis and budgets for Azure Batch](budget.md).+- **Pool efficiency and billing:** Batch itself incurs no extra charges. However, you do incur charges for Azure resources utilized, such as compute, storage, networking, and any other resources that may be required for your Batch workload. You're billed for every compute node in the pool, regardless of the state it's in. For more information, see [Cost analysis and budgets for Azure Batch](budget.md). 
- **Ephemeral OS disks:** Virtual Machine Configuration pools can use [ephemeral OS disks](create-pool-ephemeral-os-disk.md), which create the OS disk on the VM cache or temporary SSD, to avoid extra costs associated with managed disks. ### Pool allocation failures -Pool allocation failures can happen at any point during first allocation or subsequent resizes. This can be due to temporary capacity exhaustion in a region or failures in other Azure services that Batch relies on. Your core quota is not a guarantee but rather a limit. +Pool allocation failures can happen at any point during first allocation or subsequent resizes. These failures can be due to temporary capacity exhaustion in a region or failures in other Azure services that Batch relies on. Your core quota isn't a guarantee but rather a limit. ### Unplanned downtime -It's possible for Batch pools to experience downtime events in Azure. Keep this in mind when planning and developing your scenario or workflow for Batch. If nodes fail, Batch automatically attempts to recover these compute nodes on your behalf. This may trigger rescheduling any running task on the node that is recovered. To learn more about interrupted tasks, see [Designing for retries](#design-for-retries-and-re-execution). +It's possible for Batch pools to experience downtime events in Azure. Understand that problems can arise, and develop your workflow to be resilient to re-executions. If nodes fail, Batch automatically attempts to recover these compute nodes on your behalf. This recovery may trigger rescheduling of any running task, either on the node that is restored or on a different, available node. To learn more about interrupted tasks, see [Designing for retries](#design-for-retries-and-re-execution). ### Custom image pools -When you create an Azure Batch pool using the Virtual Machine Configuration, you specify a VM image that provides the operating system for each compute node in the pool. You can create the pool with a supported Azure Marketplace image, or you can [create a custom image with an Azure Compute Gallery image](batch-sig-images.md). While you can also use a [managed image](batch-custom-images.md) to create a custom image pool, we recommend creating custom images using the Azure Compute Gallery whenever possible. Using the Azure Compute Gallery helps you provision pools faster, scale larger quantities of VMs, and improve reliability when provisioning VMs. +When you create an Azure Batch pool using the Virtual Machine Configuration, you specify a VM image that provides the operating system for each compute node in the pool. You can create the pool with a supported Azure Marketplace image, or you can [create a custom image with an Azure Compute Gallery image](batch-sig-images.md). While you can also use a [managed image](batch-custom-images.md) to create a custom image pool, we recommend creating custom images using the Azure Compute Gallery whenever possible. Using the Azure Compute Gallery helps you provision pools faster, scale larger quantities of VMs, and improve reliability when provisioning VMs. ### Third-party images A [job](jobs-and-tasks.md#jobs) is a container designed to contain hundreds, tho ### Fewer jobs, more tasks -Using a job to run a single task is inefficient. For example, it's more efficient to use a single job containing 1000 tasks rather than creating 100 jobs that contain 10 tasks each. Running 1000 jobs, each with a single task, would be the least efficient, slowest, and most expensive approach to take. 
+Using a job to run a single task is inefficient. For example, it's more efficient to use a single job containing 1000 tasks rather than creating 100 jobs that contain 10 tasks each. Using 1000 jobs, each with a single task, would be the least efficient, slowest, and most expensive approach to take. -Because of this, avoid designing a Batch solution that requires thousands of simultaneously active jobs. There is no quota for tasks, so executing many tasks under as few jobs as possible efficiently uses your [job and job schedule quotas](batch-quota-limit.md#resource-quotas). +Avoid designing a Batch solution that requires thousands of simultaneously active jobs. There's no quota for tasks, so executing many tasks under as few jobs as possible efficiently uses your [job and job schedule quotas](batch-quota-limit.md#resource-quotas). ### Job lifetime A Batch job has an indefinite lifetime until it's deleted from the system. Its state designates whether it can accept more tasks for scheduling or not. -A job does not automatically move to completed state unless explicitly terminated. This can be automatically triggered through the [onAllTasksComplete](/dotnet/api/microsoft.azure.batch.common.onalltaskscomplete) property or [maxWallClockTime](/rest/api/batchservice/job/add#jobconstraints). +A job doesn't automatically move to completed state unless explicitly terminated. This transition can be triggered automatically through the [onAllTasksComplete](/dotnet/api/microsoft.azure.batch.common.onalltaskscomplete) property or [maxWallClockTime](/rest/api/batchservice/job/add#jobconstraints). -There is a default [active job and job schedule quota](batch-quota-limit.md#resource-quotas). Jobs and job schedules in completed state do not count towards this quota. +There's a default [active job and job schedule quota](batch-quota-limit.md#resource-quotas). Jobs and job schedules in completed state don't count towards this quota. ## Tasks There is a default [active job and job schedule quota](batch-quota-limit.md#reso ### Save task data -Compute nodes are by their nature ephemeral. Batch features such as [autopool](nodes-and-pools.md#autopools) and [autoscale](nodes-and-pools.md#automatic-scaling-policy) can make it easy for nodes to disappear. When nodes leave a pool (due to a resize or a pool delete), all the files on those nodes are also deleted. Because of this, a task should move its output off of the node it is running on and to a durable store before it completes. Similarly, if a task fails, it should move logs required to diagnose the failure to a durable store. +Compute nodes are by their nature ephemeral. Batch features such as [autopool](nodes-and-pools.md#autopools) and [autoscale](nodes-and-pools.md#automatic-scaling-policy) can make it easy for nodes to disappear. When nodes leave a pool (due to a resize or a pool delete), all the files on those nodes are also deleted. Because of this behavior, a task should move its output off of the node it's running on and into a durable store before it completes. Similarly, if a task fails, it should move logs required to diagnose the failure to a durable store. -Batch has integrated support Azure Storage to upload data via [OutputFiles](batch-task-output-files.md), as well as a variety of shared file systems, or you can perform the upload yourself in your tasks. 
+Batch has integrated support for Azure Storage to upload data via [OutputFiles](batch-task-output-files.md) and for various shared file systems, or you can perform the upload yourself in your tasks. ### Manage task lifetime -Delete tasks when they are no longer needed, or set a [retentionTime](/dotnet/api/microsoft.azure.batch.taskconstraints.retentiontime) task constraint. If a `retentionTime` is set, Batch automatically cleans up the disk space used by the task when the `retentionTime` expires. +Delete tasks when they're no longer needed, or set a [retentionTime](/dotnet/api/microsoft.azure.batch.taskconstraints.retentiontime) task constraint. If a `retentionTime` is set, Batch automatically cleans up the disk space used by the task when the `retentionTime` expires. -Deleting tasks accomplishes two things. It ensures that you do not have a build-up of tasks in the job, which can make it harder to query/find the task you're interested in (because you'll have to filter through the Completed tasks). It also cleans up the corresponding task data on the node (provided `retentionTime` has not already been hit). This helps ensure that your nodes don't fill up with task data and run out of disk space. +Deleting tasks accomplishes two things: ++- Ensures that you don't have a build-up of tasks in the job. This helps you avoid difficulty in finding the task you're interested in, because you'd otherwise have to filter through the Completed tasks. +- Cleans up the corresponding task data on the node (provided `retentionTime` hasn't already been hit). This action helps ensure that your nodes don't fill up with task data and run out of disk space. ### Submit large numbers of tasks in collection Tasks can be submitted on an individual basis or in collections. Submit tasks in ### Set max tasks per node appropriately -Batch supports oversubscribing tasks on nodes (running more tasks than a node has cores). It's up to you to ensure that your tasks "fit" into the nodes in your pool. For example, you may have a degraded experience if you attempt to schedule eight tasks that each consume 25% CPU usage onto one node (in a pool with `taskSlotsPerNode = 8`). +Batch supports oversubscribing tasks on nodes (running more tasks than a node has cores). It's up to you to ensure that your tasks are right-sized for the nodes in your pool. For example, you may have a degraded experience if you attempt to schedule eight tasks that each consume 25% CPU usage onto one node (in a pool with `taskSlotsPerNode = 8`). ### Design for retries and re-execution There are no design differences when executing your tasks on dedicated or [Spot ### Build durable tasks -Tasks should be designed to withstand failure and accommodate retry. This is especially important for long running tasks. To do this, ensure tasks generate the |